
akavi (3,066 karma)

  1. You are aware that if AI chat apps are "hallucinatory text generator(s)", then so is Google Translate, right?

    (while AFAICT Google hasn't explicitly said so, it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT)

  2. Hmmm, that doesn't seem right. I'm having a hard time finding an actual consumption number, but I am confident it's well below 50%.

    The top 10% of households by wage income do receive ~50% of pre-tax wage income, but:

    1) our tax system is progressive, so the actual net income share is lower

    2) there's significant post-wage redistribution (Social Security, Medicaid)

    3) it's a well-established fact that high-income households consume a smaller share of their net income than low-income households do
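
    Putting rough numbers on those three effects (every figure below is assumed, purely to show the direction):

    ```ts
    // Back-of-the-envelope sketch: all numbers assumed, not sourced.
    const preTaxShareTop10 = 0.50; // top 10% share of pre-tax wage income

    // 1) progressive taxes: assume the top 10% keep 70% of income, the rest keep 85%
    const netTop = preTaxShareTop10 * 0.70;        // 0.35
    const netRest = (1 - preTaxShareTop10) * 0.85; // 0.425
    const netShareTop10 = netTop / (netTop + netRest); // ≈ 0.45

    // 2) transfers push income further down the distribution: assume ~3 points
    const postTransferShareTop10 = netShareTop10 - 0.03; // ≈ 0.42

    // 3) lower propensity to consume: assume the top 10% spend 60% of net income, the rest 95%
    const consTop = postTransferShareTop10 * 0.60;
    const consRest = (1 - postTransferShareTop10) * 0.95;
    console.log((consTop / (consTop + consRest)).toFixed(2)); // ≈ 0.32, well below 50%
    ```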

  3. The bus (and the subway) in NYC is already heavily subsidized, and NYC already has heavily subsidized childcare (3-K, Pre-K).

    The article in general takes the approach of listing a small handful of (usually very small) polities that each have one of Mamdani's proposed policies, and then claiming that the full suite is therefore "normal" across Europe.

  4. PD's been tolerant of total AZ failures for years (I was an early eng there)
  5. > We're going to stabilize around 10 billion by 2080 according to projections and then decline, hopefully reaching some kind of Star Trek utopia at some point.

    10 billion is gonna be the high end by the looks of things, and that decline is hardly going to be conducive to utopia. The math of dependency ratios is inescapably painful.
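
    To make "the math of dependency ratios" concrete, a stylized sketch (assumptions mine, not real demographic data):

    ```ts
    // Each generation is r times the size of the one before it, r ≈ TFR / 2.1.
    // At TFR ~1.5 (r ≈ 0.7), every retiree cohort is supported by a worker
    // cohort only ~70% its size, and the absolute shrinkage compounds forever.
    const r = 0.7;
    let retirees = 100; // size of the current retiree cohort, normalized
    for (let gen = 0; gen < 4; gen++) {
      const workers = retirees * r;
      console.log(`gen ${gen}: retirees=${retirees.toFixed(0)} workers=${workers.toFixed(0)}`);
      retirees = workers; // today's workers are tomorrow's retirees
    }
    // gen 0: retirees=100 workers=70
    // gen 1: retirees=70 workers=49
    // gen 2: retirees=49 workers=34
    // gen 3: retirees=34 workers=24
    ```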

  6. > I think it's more likely, drawing from biology, that we end up at a stable global population level without having to worry about moving backwards along the metrics of education, income or contraceptive access.

    There's absolutely no inherent equilibrating force that will stabilize global fertility rates at replacement. Many countries have blown past replacement (the USA included) and continue on a downward trend year over year.

  7. I'd actually bet against this. The "bitter lesson" suggests that doing things end-to-end in-model will (eventually, with sufficient data) outcompete building things outside of models.

    My understanding is that GPT-5 already does this by varying how much CoT it does (in addition to the kind of super-model-level routing described in the post), and I strongly suspect it's only going to get more sophisticated.

  8. It's evil to sell a product for a price higher than you, personally, want to pay?
  9. A zero-sum mindset on a website dedicated to programming of all places? Where we literally create wealth out of nothing but coffee and the strength of our minds?

    My love for my country means I want it to be the greatest in the world. Waterloo grads make America better. Period.

  10. As an American citizen, born and bred, I would literally, physically fight you on behalf of keeping Waterloo grads in America.

    Many of the best coworkers I've had the pleasure of working with have been Waterloo grads, not to mention the founders of the company I spent over a quarter of my career at (PagerDuty).

  11. Is PHP more performant? That'd be surprising to me, given how many eng hours have been invested in V8
  12. Is there any evidence that _8_ years of post-secondary education (plus 3-7 years of residency at poverty wages) actually improves medical outcomes?

    5 years could be plenty?

  13. A part of "being a good developer" is being able to evolve systems in this direction. Real systems are messy, but you can and should be thoughtful about:

    1. Progressively reducing the number of holes in your invariants

    2. Building them such that there's a pit of success (engineers coming after you are aware of the invariants and "nudged" toward the pathways that maintain them). Documentation can help here, but how you structure your code also plays a part (and is, in my experience, the more important factor). A sketch of what I mean below.
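
    A minimal sketch of that "pit of success" structure (domain and names invented for illustration): the only exported way to build the object re-derives the invariant, so later code can't easily break it.

    ```ts
    // Hypothetical invariant: an Order's total always equals the sum of its
    // line items. The private constructor means no caller can hand-assemble
    // an Order with a stale total; every pathway recomputes it.
    interface LineItem { sku: string; cents: number; }

    export class Order {
      private constructor(
        public readonly items: readonly LineItem[],
        public readonly totalCents: number,
      ) {}

      static create(items: LineItem[]): Order {
        return new Order([...items], items.reduce((sum, i) => sum + i.cents, 0));
      }

      // "Mutation" returns a fresh Order, re-deriving the total every time.
      withItem(item: LineItem): Order {
        return Order.create([...this.items, item]);
      }
    }
    ```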

  14. If my understanding is correct, this is still a much worse deal for employees than if Windsurf's exec team had negotiated a "standard" "accelerated vesting, common conversion" acquisition with Google.

    Presumably the "payout" from Cognition is at a lower nominal value, and in illiquid (and IMO overvalued) Cognition shares rather than cash.

  15. Is this purely the rump company left over from the Google pseudo-acquisition? Or does this mean that deal fell through?

    Does this represent confirmation that there was no pro-rata compensation to common shareholders in the Google deal?

    I just have so many questions.

  16. L5 ("Senior") at any FAANG co, L6 ("Staff") at pretty much any VC-backed startup in the Bay.
  17. But for most human endeavors, "operational precision" is a useful implementation detail, not a fundamental requirement.

    We want software to be operationally precise because it allows us to build up towers of abstractions without needing to worry about leaks (even the leakiest software abstraction is far more watertight than any physical "abstraction").

    But, at the level of the team or organization that's _building_ the software, there's no such operational precision. Individuals communicating with each other drop down to that precision when useful, but in any endeavor larger than 2-3 people, the _vast_ majority of communication occurs in purely natural language. And yet, this still generates useful software.

    The phase change of LLMs is that they're computers that are finally "smart" enough to engage at this level. This is fundamentally different from the world Dijkstra was living in.

  18. > All they have are relations between labels and how likely are they used together. But not what are the sets that those labels refer to and how the items in those sets interact.

    Why would "membership in a set" not show up as a relationship between the items and the set?

    In fact, it's not obvious to me that there's any semantic meaning not contained in the relationship between labels.
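
    To make that concrete, a toy sketch (representation mine, invented for illustration): "membership in a set" has exactly the same shape as any other labeled relation.

    ```ts
    // Toy triples: set membership and set structure expressed as ordinary
    // (subject, relation, object) relations between labels.
    type Triple = [subject: string, relation: string, object: string];

    const facts: Triple[] = [
      ["Rex", "member-of", "dogs"],     // membership is just another relation
      ["dogs", "subset-of", "mammals"], // ...and so is set-to-set structure
      ["Rex", "chases", "squirrels"],   // alongside every other relation
    ];

    // "What set does this label refer to?" falls out of the same graph:
    const dogs = facts
      .filter(([, rel, obj]) => rel === "member-of" && obj === "dogs")
      .map(([subj]) => subj);
    console.log(dogs); // ["Rex"]
    ```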

  19. Same reason humans use high-level languages: limited context windows.

    Both humans and LLMs benefit from non-leaky abstractions: they offload low-level details and free up mental or computational bandwidth for higher-order concerns. When, say, implementing a permissioning system for a web app, I can't simultaneously track memory allocation and how my data model choices align with product goals. Abstractions let me ignore the former to "spend" my limited intelligence on the latter; same with LLMs and their context limits.

    Yes, more intelligence (at least in part) means being able to handle larger contexts, and maybe superintelligent systems could keep everything "in mind." But even then, abstraction likely remains useful in trading depth for surface area. Chris Sawyer was brilliant enough to write Rollercoaster Tycoon in assembly, but probably wouldn't be able to do the same for Elden Ring.

    (Also, at least until LLMs are so transcendentally intelligent they outstrip our ability to understand their actions, HLLs are much more verifiable by humans than assembly is. Admittedly, this might be a time-limited concern)

  20. I've never seen 8000 -> 40, but I have done ~10 kLoC -> ~600.

    Aggggressively "You can write Java in any language" style JavaScript (`Factory`, `Strategy`, etc.) plus a whole mini state machine framework that was replaceable with judicious use of iterators (sketch of the idea below).

    (This was at Google, and I suspected it was a promo project gone metastatic.)
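
    For flavor, a sketch of that kind of replacement (illustrative code, not the actual Google code): a generator already gives you the "pause here, resume with input" behavior the hand-rolled State/Transition/Runner classes were simulating.

    ```ts
    // A multi-step flow as a generator: each `yield` is a "state" that waits
    // for the next input, with no framework classes required.
    function* checkoutFlow(): Generator<string, void, string> {
      const email = yield "prompt:email";      // state 1: wait for email
      const address = yield "prompt:address";  // state 2: wait for address
      yield `confirm:${email} -> ${address}`;  // state 3: confirmation
    }

    const flow = checkoutFlow();
    console.log(flow.next().value);                // "prompt:email"
    console.log(flow.next("a@example.com").value); // "prompt:address"
    console.log(flow.next("123 Main St").value);   // "confirm:a@example.com -> 123 Main St"
    ```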

  21. I sincerely believe this thesis and desperately want a REIT that owns real estate within a 2-hour radius of major urban downtowns.
  22. Hmm, this article is a little confusing. I'm not familiar with Vitess or Citus, but I am familiar with "manually" sharded Postgres/MySQL, and I'm not sure I understand whether there are any "interaction effects" between the decision to shard or not and the decision between MySQL/Postgres and Sqlite.

    Like, the article's three sections are:

    1. The challenges of sharding

    2. The benefits of these new sharded Sqlite solutions over conventional Sqlite

    3. A list conflating the benefits of SQL databases generally with the benefits of Sqlite

    None of which answer the question of "Why should I use sharded Sqlite instead of, say, sharded Postgres, for hyperscale?".
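
    (For anyone unfamiliar with what "manually" sharded means, a sketch; the connection strings and helper names are assumed, not from the article: the application hashes a shard key to pick which physical database a row lives on, and every query must route the same way.)

    ```ts
    // Application-level sharding in a nutshell: hash the shard key, mod by
    // the number of physical databases, send the query there.
    import { createHash } from "node:crypto";

    const SHARD_DSNS = [
      "postgres://db0.internal/app",
      "postgres://db1.internal/app",
      "postgres://db2.internal/app",
    ];

    function shardFor(userId: string): string {
      const digest = createHash("sha256").update(userId).digest();
      return SHARD_DSNS[digest.readUInt32BE(0) % SHARD_DSNS.length];
    }

    // Every read and write for a user must route consistently:
    //   queryOn(shardFor("user-42"), "SELECT ... WHERE user_id = $1", ["user-42"]);
    // Cross-shard joins and transactions are where "the challenges of sharding" live.
    ```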

  23. It's certainly true that the charismatic have a better go of this, but after 12 years in the industry I've built up a solid list of quietly excellent engineers. Whenever I see an opportunity where they could shine, I reach out to them.

    Fortunately for them (and unfortunately for me), the industry seems to be fairly market-efficient, and they're usually already happy in some other highly compensated position (empirically, 1 M$/yr seems to be roughly the going rate for "Damn, I really wish I could work with that person again").

  24. As an alternate datapoint, I've almost exclusively (90% of the time or more) heard "sequel" and "sequelite" (/'siː.kwəl/ and /'siː.kwəˌlaɪt/, respectively) across 12 years and 6 companies, from FAANG to YC startups in SF and NYC.
  25. When was this? MIT's financial aid was already very generous when I was applying (in 2008); IIRC the no-tuition threshold was 100 k$ back then
  26. I'm a native English speaker, and "prompt" is pronounced "promt" (/prɒmt/ in my roughly General American accent). I.e., there is a silent "p", but it's the second one, not the first.
  27. It's not. If my work (as a software developer) can be replaced more cheaply by a machine, it should be.

    I'm still quite a bit better than SotA models, but I imagine that won't be true in 2034.

  28. Pitch accent in Japanese is deterministic based on the mora that is "accented". While it's true the effect of this accent "spreads" across the entire word, you only need to mark a single mora to know the effects word-wide.
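
    As a concrete illustration of "mark one mora, know the whole word" (the standard Tokyo-dialect rule; the code is mine):

    ```ts
    // Derive the full word-level pitch pattern from a single number:
    // accent = 0 means unaccented (heiban); otherwise it's the 1-indexed
    // mora carrying the downstep.
    function pitchPattern(moraCount: number, accent: number): string[] {
      return Array.from({ length: moraCount }, (_, i) => {
        const mora = i + 1;
        if (accent === 1) return mora === 1 ? "H" : "L"; // atamadaka
        if (mora === 1) return "L";                      // first mora low
        if (accent === 0 || mora <= accent) return "H";  // high until the accent
        return "L";                                      // low after the downstep
      });
    }

    console.log(pitchPattern(4, 0)); // heiban:    [L, H, H, H]
    console.log(pitchPattern(4, 1)); // atamadaka: [H, L, L, L]
    console.log(pitchPattern(4, 3)); // nakadaka:  [L, H, H, L]
    ```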

    > Reading hiragana is slow (and I've been reading hiragana for a long time)- it's slow, and mentally much harder than reading with kanji.

    What's the ratio of hiragana-only text you read compared to text with kanji? And does the hiragana text use spaces between words? My strong suspicion is "low" and "no", respectively. Familiarity breeds comfort with any writing system, and word breaks are a fabulous ergonomic tool for easing reading.
