- Hmmm, that doesn't seem right. I'm having a hard time finding an actual consumption number, but I am confident it's well below 50%.
The top 10% of households by wage income do receive ~50% of pre-tax wage income, but:
1) our tax system is progressive, so actual net income share is less
2) there's significant post-wage redistribution (Social Security, Medicaid)
3) it's a well-established fact that high-income households consume a smaller share of their net income.
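To make the direction of these effects concrete, here's a toy calculation. The 50% pre-tax share is from above; the tax rates and consumption propensities are purely illustrative numbers I've made up:

```latex
% Illustrative numbers only: top decile pays an effective 30% tax rate
% vs. 15% for everyone else, and consumes 60% of net income vs. 95%.
\begin{align*}
\text{top net income}        &= 0.50 \times (1 - 0.30) = 0.35 \\
\text{rest net income}       &= 0.50 \times (1 - 0.15) = 0.425 \\
\text{top consumption}       &= 0.35 \times 0.60 = 0.21 \\
\text{rest consumption}      &= 0.425 \times 0.95 \approx 0.404 \\
\text{top consumption share} &= \frac{0.21}{0.21 + 0.404} \approx 34\%
\end{align*}
```

Even before accounting for point 2, the consumption share lands well below 50%.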
- The bus (and the subway) in NYC is already heavily subsidized, and NYC already offers heavily subsidized childcare (the 3-K and pre-K programs).
The article in general takes the approach of listing a small handful of (usually very small) polities that each have one of Mamdani's proposed policies, and then claiming that the full suite is therefore "normal" across Europe.
- > We're going to stabilize around 10 billion by 2080 according to projections and then decline, hopefully reaching some kind of Star Trek utopia at some point.
10 billion is gonna be the high end by the looks of things, and that decline is going to be hardly conducive to utopia. The math of dependency ratios is inescapably painful.
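A toy worked example of why (all numbers illustrative): at a total fertility rate of 1.5 vs. replacement of ~2.1, each generation is about 71% the size of the one before it. Take three equal-length cohorts, one retired and two working-age:

```latex
% At replacement, all cohorts are the same size:
\frac{\text{retirees}}{\text{workers}} = \frac{1}{1 + 1} = 0.5
% At TFR 1.5, each cohort is ~0.71x the size of the previous one:
\frac{\text{retirees}}{\text{workers}} = \frac{1}{0.71 + 0.71^2} \approx 0.82
```

That's a jump from 2 workers per retiree to roughly 1.2, and it compounds every generation that fertility stays below replacement.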
- > I think it's more likely, drawing from biology, that we end up at a stable global population level without having to worry about moving backwards along the metrics of education, income or contraceptive access.
There's absolutely no inherent equilibrating force that will stabilize global fertility rates at replacement. Many countries have blown by replacement (the USA included) and continue on a downward trend year over year.
- Mentioned above: https://github.com/stepchowfun/typical
- I'd actually bet against this. The "bitter lesson" suggests doing things end-to-end in-model will (eventually, with sufficient data) outcompete building things outside of models.
My understanding is that GPT-5 already does this by varying the quantity of CoT performed (in addition to the kind of super-model-level routing described in the post), and I strongly suspect it's only going to get more sophisticated.
- A part of "being a good developer" is being able to evolve systems in this direction. Real systems are messy, but you can and should be thoughtful about:
1. Progressively reducing the number of holes in your invariants
2. Building them such that there's a pit of success (engineers coming after you are aware of the invariants and "nudged" toward the pathways that maintain them). Documentation can help here, but how you structure your code also plays a part (and is in my experience the more important factor); see the sketch below.
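As a hypothetical sketch of point 2 (the type and names are invented for illustration): make the invariant-preserving pathway the only one the code exposes, so maintaining the invariant is the path of least resistance.

```typescript
// Invariant: a NonEmptyList always contains at least one element.
// The private constructor means the only ways to obtain one are the
// static factories below, each of which upholds the invariant, so
// later engineers maintain it by construction rather than by memory.
class NonEmptyList<T> {
  private constructor(private readonly items: readonly T[]) {}

  // Entry point 1: requires at least one element at compile time.
  static of<T>(first: T, ...rest: T[]): NonEmptyList<T> {
    return new NonEmptyList([first, ...rest]);
  }

  // Entry point 2: forces callers to handle the empty case explicitly.
  static fromArray<T>(items: readonly T[]): NonEmptyList<T> | undefined {
    return items.length > 0 ? new NonEmptyList([...items]) : undefined;
  }

  // head() can't fail: the invariant makes the unhappy path unrepresentable.
  head(): T {
    return this.items[0];
  }

  push(item: T): NonEmptyList<T> {
    return new NonEmptyList([...this.items, item]);
  }
}
```

Here the "documentation" lives in the structure itself: a colleague who tries to construct an empty list gets a compile error instead of a code-review comment.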
- If my understanding is correct, this is still a much worse deal for employees than if Windsurf's exec team had negotiated a "standard" "accelerated vesting, common conversion" acquisition with Google.
Presumably the "payout" from Cognition is at a lower nominal value, and in illiquid (and IMO overvalued) Cognition shares rather than cash.
- But for most human endeavors, "operational precision" is a useful implementation detail, not a fundamental requirement.
We want software to be operationally precise because it allows us to build up towers of abstractions without needing to worry about leaks (even the leakiest software abstraction is far more watertight than any physical "abstraction").
But at the level of the team or organization that's _building_ the software, there's no such operational precision. Individuals communicating with each other drop down to that level of precision when useful, but in any endeavor larger than 2-3 people, the _vast_ majority of communication occurs in pure natural language. And yet this still generates useful software.
The phase change of LLMs is that they're computers that are finally "smart" enough to engage at this level. This is fundamentally different from the world Dijkstra was living in.
- > All they have are relations between labels and how likely are they used together. But not what are the sets that those labels refer to and how the items in those sets interact.
Why would "membership in a set" not show up as a relationship between the items and the set?
In fact, it's not obvious to me that there's any semantic meaning not contained in the relationship between labels.
- Same reason humans use high-level languages: limited context windows.
Both humans and LLMs benefit from non-leaky abstractions—they offload low-level details and free up mental or computational bandwidth for higher-order concerns. When, say, implementing a permissioning system for a web app, I can't simultaneously track memory allocation and how my data model choices align with product goals. Abstractions let me ignore the former to "spend" my limited intelligence on the latter; same with LLMs and their context limits.
Yes, more intelligence (at least in part) means being able to handle larger contexts, and maybe superintelligent systems could keep everything "in mind." But even then, abstraction likely remains useful in trading depth for surface area. Chris Sawyer was brilliant enough to write RollerCoaster Tycoon in assembly, but probably wouldn't be able to do the same for Elden Ring.
(Also, at least until LLMs are so transcendentally intelligent that they outstrip our ability to understand their actions, HLLs are much more verifiable by humans than assembly is. Admittedly, this might be a time-limited concern.)
- I've never seen 8000 -> 40, but I have done ~10 kLoC -> ~600.
Aggggressively "You can write Java in any language"-style JavaScript (`Factory`, `Strategy`, etc.) plus a whole mini state-machine framework that was replaceable with judicious use of iterators (sketch below).
(This was at Google, and I suspected it was a promo project gone metastatic.)
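For flavor, a hypothetical reconstruction of the shape of the refactor (not the actual code): the framework version modeled a multi-step flow as named states plus transition handlers, while a generator expresses the same flow as ordinary control flow, with the suspended execution point serving as the state.

```typescript
// Hypothetical sketch: paginating through a resource. The old version
// had State enums, Transition tables, and Factory classes for this;
// a generator needs none of that, because "where we are in the loop"
// *is* the state machine.
function* paginate<T>(
  fetchPage: (cursor?: string) => { items: T[]; next?: string }
): Generator<T> {
  let cursor: string | undefined = undefined;
  do {
    const page = fetchPage(cursor);
    yield* page.items; // each resumption point replaces an explicit state
    cursor = page.next;
  } while (cursor !== undefined);
}

// Usage: callers just iterate.
// for (const item of paginate(fetchSomehow)) { process(item); }
```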
- Hmm, this article is a little confusing. I'm not familiar with Vitess or Citus, but I am familiar with "manually" sharded Postgres/MySQL, and I'm not sure whether there are any "interaction effects" between the decision to shard or not and the choice between MySQL/Postgres and SQLite.
Like, the article's three sections are:
1. The challenges of sharding
2. The benefits of these new sharded SQLite solutions over conventional SQLite
3. A list conflating the benefits of SQL databases generally with the benefits of SQLite
None of which answer the question of "Why should I use sharded SQLite instead of, say, sharded Postgres, for hyperscale?"
- It's certainly true that the charismatic have a better go of this, but after 12 years in the industry I've built up a solid list of quietly excellent engineers. Whenever I see an opportunity where they could shine, I reach out to them.
Fortunately for them (and unfortunately for me), the industry seems to be fairly market-efficient, and they're usually already happy in some other highly compensated position. (Empirically, $1M/yr seems to be roughly the going rate for "Damn, I really wish I could work with that person again".)
- Pitch accent in Japanese is deterministic based on the mora that is "accented". While it's true that the effect of this accent "spreads" across the entire word, you only need to mark a single mora to know the effects word-wide (sketch below).
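A sketch of that rule for standard (Tokyo) Japanese; the function is mine, but the rule it encodes is the textbook one: the first mora is low unless it carries the accent, pitch is high up to and including the accented mora and drops afterward, and accent position 0 (heiban) never drops.

```typescript
// Given the number of morae and the 1-based index of the accented
// mora (0 = unaccented/heiban), the whole contour is determined.
function pitchContour(moraCount: number, accent: number): ("H" | "L")[] {
  return Array.from({ length: moraCount }, (_, i) => {
    const mora = i + 1; // 1-based mora index
    if (accent === 1) return mora === 1 ? "H" : "L"; // atamadaka: high, then drop
    if (mora === 1) return "L";                      // otherwise first mora is low
    if (accent === 0) return "H";                    // heiban: rises, never drops
    return mora <= accent ? "H" : "L";               // drop after the accented mora
  });
}

// pitchContour(2, 1) -> ["H", "L"]  e.g. háshi "chopsticks"
// pitchContour(2, 2) -> ["L", "H"]  e.g. hashí "bridge" (a following particle is low)
// pitchContour(2, 0) -> ["L", "H"]  e.g. hashi "edge" (a following particle stays high)
```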
> Reading hiragana is slow (and I've been reading hiragana for a long time)- it's slow, and mentally much harder than reading with kanji.
What's the ratio of hiragana-only text that you read compared to kanji? And does the hiragana text use spaces between words? My strong suspicion is "low" and "no", respectively. Familiarity breeds comfort with any writing system, and word breaks are a fabulous ergonomic tool for easing reading.
(While AFAICT Google hasn't explicitly said so, it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT.)