We actually agree: even if the probability of successful coordination is only 10%, accepting inevitability makes it 0%. That difference matters enormously given the stakes. My argument isn't "coordination is definitely possible" but rather "believing it's impossible guarantees failure." When tech leaders say "AGI is inevitable," they're not describing reality; they're shaping it by discouraging attempts to coordinate. Human cloning hasn't happened because we maintain active resistance despite technical feasibility.
You're asking for credible paths with P > 0. I'm saying: knowing P with certainty is impossible, so accepting P = 1 narratives makes alternative paths invisible. The path emerges through trial and error, not before it.
No, they're describing reality. As I posted in another comment, technological progress lowers the capital requirements for innovation. Even if there were global coordination to stop AGI development right now, that same progress means that in 30 years someone in their basement could do on commodity hardware what OpenAI is doing today. Preventing this would require an oppressive regime controlling basic information technology and knowledge to an extent that isn't palatable to anyone.
As for "oppressive regime", we already do this for nuclear and biotech, and most people find it quite palatable! Nuclear materials are tightly controlled globally. Cloning humans is illegal almost everywhere. We've had the knowledge for both for decades, yet basement nukes and basement human clones aren't happening.
I'm not saying we should make it illegal, I'm just saying there are more gray areas than are generally accounted for.
The idea that you can develop "good" information technology without enabling the creation of "bad" information technology is pure fantasy, and if your idea is actually that we could halt the progress of information technology wholesale, then that's laughable, sorry to say. Hence the inevitability.
> As for "oppressive regime", we already do this for nuclear and biotech, and most people find it quite palatable!
You mean we do this for raw materials that have inherent scarcity because they are somewhat rare, difficult to mine, and difficult to refine? And you think this natural scarcity is somehow comparable to the natural abundance of digital information, which can be trivially copied at perfect fidelity?
Furthermore, you've misunderstood what was meant by "oppressive regime". The same technologies that let us email each other, make family photo albums, and forecast the weather or the stock market are what also enable AI. There is no way to suppress AI without also suppressing these other benign uses that everyone enjoys and that enable considerable productivity. This is not comparable to the technology and raw materials for nuclear weapons.
> We've had the knowledge for both for decades, yet basement nukes and basement human clones aren't happening.
You seem awfully confident about declaring the non-existence of something that's inherently underground and thus difficult to measure.
But let's make the comparison of AI to cloning more apt: how confident would you be that cloning won't happen once the knowledge of how to construct artificial wombs is discovered? Now reconsider those probabilities if such wombs are also easy to construct from readily available materials. That's the reality of information technology.
2. Preventing the manifestation of physical objects is a lot easier than preventing the dissemination of pure information. AIs are easy to copy, easy to run, and can assist in their own creation, advancement and proliferation, and it only gets easier over time. For an apt analogy, consider a cloning lab where every clone that escaped was compelled to create their own cloning lab, and everything you needed could be bought at any corner store.
3. All cloning requires existing biological organisms to participate at various stages. You need not only the biologists on board, but also the surrogate that has to carry the fetus to term. What do you think will happen when artificial wombs become available?
2. Computers powerful enough to train AI are also physical objects, ones that consume gigantic amounts of power as well. Maybe someday we'll have computers that can train Claude on 50 kW of power running in your pocket, but maybe not. There are fundamental limits to how much computation you can get per watt, and we're getting closer to them. So preventing AI may be as simple as banning the use of any computer cluster that consumes more than some wattage, say 1000 kW, without a government audit, while also banning research into more computationally efficient ways of doing AI. (A back-of-envelope sketch of why such a wattage threshold bites follows after these points.)
3. This is not a real problem, since some biologists who are into cloning may have wombs of their own to gestate the clone. Artificial wombs, even cheap ones, would change nothing in relation to cloning (except maybe reduce the diversity of rogue cloning research teams, angering the criminal enterprise DEI department, I'm sure).
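To put rough numbers behind point 2, here's a minimal sketch; the 1e25 FLOP training budget and 1e12 FLOP-per-joule efficiency are purely my own illustrative assumptions, not figures anyone in this thread has claimed:

```python
# Back-of-envelope: how long a power-capped cluster needs for a
# frontier-scale training run. Every constant is an illustrative assumption.

SECONDS_PER_YEAR = 3600 * 24 * 365

TRAINING_FLOP = 1e25    # assumed total compute for a frontier-scale model
FLOP_PER_JOULE = 1e12   # assumed accelerator efficiency (FLOP per watt-second)

def years_to_train(power_watts: float) -> float:
    """Years of continuous running at the given power draw."""
    flop_per_second = power_watts * FLOP_PER_JOULE
    return TRAINING_FLOP / flop_per_second / SECONDS_PER_YEAR

for watts, label in [(1e3, "1 kW basement rig"),
                     (1e6, "1000 kW cluster at the audit threshold")]:
    print(f"{label}: ~{years_to_train(watts):.1f} years")
```

Under those assumptions a basement rig needs roughly three centuries while a 1000 kW cluster finishes in about four months, which is why the cap is paired with a ban on efficiency research: better FLOP-per-joule is the only thing that moves the basement line.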
While I see what you are getting at, and I think it's super important we come up with philosophical frameworks to push back on the central idea in question (i.e., the moral hazard of "it's gonna happen anyway, so why not pour a little more into the river")... I think your writing/responses miss the central point.
As I see it, the fundamental issue with this essay, and with your responses, is that you keep conflating "impossible" with "probability zero". People are saying "this is inevitable" to mean it has probability 1 of occurring, backed by basic game-theoretic reasoning (it's a giant iterated prisoner's dilemma), and your response is "but it's possible". Yes, with measure zero.
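The game-theory step can be made concrete. Here's a minimal sketch of the one-shot version; the payoff numbers are purely my own illustrative assumptions:

```python
# One-shot "AGI race" as a prisoner's dilemma. Payoffs are illustrative
# assumptions: each lab chooses to pause (cooperate) or race (defect).

PAUSE, RACE = "pause", "race"

# payoffs[(my_move, rival_move)] = my payoff
payoffs = {
    (PAUSE, PAUSE): 3,  # everyone pauses: shared safety
    (PAUSE, RACE):  0,  # I pause, rival races: worst outcome for me
    (RACE,  PAUSE): 5,  # I race, rival pauses: I capture the lead
    (RACE,  RACE):  1,  # everyone races: risky free-for-all
}

# Racing strictly dominates pausing whatever the rival does...
for rival_move in (PAUSE, RACE):
    assert payoffs[(RACE, rival_move)] > payoffs[(PAUSE, rival_move)]

# ...so (race, race) is the unique Nash equilibrium, even though
# (pause, pause) pays both players more. That dominance argument is
# the entire content of the "it's inevitable" claim.
print("racing dominates pausing; (race, race) is the equilibrium")
```

The "iterated" part is where any hope lives: in an indefinitely repeated game the folk theorem allows cooperation to be sustained by credible punishment, and that's roughly the shape a probability-greater-than-zero coordination path would have to take.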
Telling us that such a path surely exists isn't useful. If you want to push back on "inevitability", you need to exhibit a credible path with probability > 0 (which, per the above, is not the same thing as merely being possible).