Preferences

Something about the way the article sets up the conversation nags at me a bit, even though it concludes with statements and reasoning I generally agree with. It sets out what it wants to argue clearly at the start:

> Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished... The Bun acquisition blows a hole in that story.”

But what the article actually discusses and demonstrates by the end is that the aspects of engineering beyond writing the code are where the value of human engineers lies at this point. To me that doesn't seem like an example of a revealed preference. If you take it back to the first part of the original quote above, it's just a different wording of the same idea: AI is the code writer, and engineering is something different.

I think what the article really means to argue against is the claim "because AI can generate lots of code, we don't need any kind of engineer", but that's just not what the quote it chose to set out against is saying. Without changing that claim, the acquisition of Bun is not really a counterexample: Bun had simply already changed the way they do engineering, so the AI wrote the code and the engineers did the other things.


But the engineers can do it because they have written lots of code before. Where will these engineers get their experience in the future?

And what about vibe coding? The whole selling point of many AI companies is that you don’t need experience as a programmer.

So they sell something that isn’t true: it’s not FSD for coding, but driver assistance.

> Where will these engineers get their experience in the future?

The house of the feeble-minded: https://www.abelard.org/asimov.php

These are all things I'd rather have seen the article set out to talk about. Instead, it opens by trying to disprove a statement that AI can write the coding portion of the engineering problem, and does so by showing AI being used exactly that way at Bun, as if that meant Anthropic must not actually believe the statement.

I mean, it smells like an AI slop article, so it's hard to expect much coherence.

I was thinking the same, but it's as if they only used AI to handle the editing or something, because even throwing the article into ChatGPT with "how could this article be improved: ${article}" gives:

> Tighten the causal claim: “AI writes code → therefore judgment is scarce”

That shows up as one of the first suggestions, so the problem isn't something inherent to the article having used AI in some way. Regardless, I care less about how the article got written and more about which conclusions actually make sense.

I guess y'all disagree?

> The Bun acquisition blows a hole in that story.

> That contradiction is not a PR mistake. It is a signal.

> The bottleneck isn’t code production, it is judgment.

> They didn’t buy a pile of code. They bought a track record of correct calls in a complex, fast-moving domain.

> Leaders don’t express their true beliefs in blog posts or conference quotes. They express them in hiring plans, acquisition targets, and compensation bands.

Not to mention the gratuitous italics-within-bold usage.

No, no, I agree: “No negotiations. No equity. No retention packages.”

I don’t know if HN has made me hyper-sensitized to AI writing, but this is becoming unbearable.

When I find myself thinking “I wonder what prompt they used?” while reading, I can’t help but become skeptical about the quality of the thinking behind the content.

Maybe that’s not fair, but it’s the truth. Or, put differently: “Fair? No. Truthful? Yes.” Ugh.
