He even said it could be a gateway to actual programming
Prior to LLMs, it was amusing to consider how ML folks and software folks would talk past each other. It was amusing because both sides were great at what they did, neither side understood the other, and they had to work together anyway.
After LLMs, we now have lots of ML folks talking about the future of software, something previously established to be so far outside their expertise that communication with software engineers was an amusing challenge.
So I must ask, are ML folks actually qualified to know the future of software engineering? Shouldn't we be listening to software engineers instead?
Probably not for the CRUD apps typical of back office or website software, but don't forget that ML folks come from the stock of people who built Apollo, the Mars landers, etc. Scientific computing shares significant overlap with SWE, and ML is a subset of that.
IMHO, the average SWE and the average ML person are different types when it comes to how they cargo-cult develop, but the top 10% show significant understanding and speed across domains.
They start with the code from another level, then modify it until it seems to do what they want. During the alpha testing phase, we'd have a programmer read through the code and remove all the useless cruft and fix any associated bugs.
In some sense that's what vibe coding with an AI is like if you don't know how to code. You have the AI produce some initial code that you can't evaluate for correctness, then slowly modify it until it seems to behave generally like you want. You might even learn to recognize a few things in the code over time, at which point you can directly change some variables or structures yourself.
Some good nuggets in this talk, specifically his concept that Software 1.0, 2.0 and 3.0 will all persist and all have unique use cases. I definitely agree with that. I disagree with his "anyone can vibe code" mindset - this works to a certain level of fidelity ("make an asteroids clone"), but what he overlooks is his ability, honed over many years, to precisely document requirements that translate directly to code that works in an expected way. If you can't write up a Jira epic that covers all the bases of a project, you probably can't vibe code something beyond a toy project (or an obvious clone). LLM code falls apart under its own weight without a solid structure, and I don't think that will ever fundamentally change.
Where we are going next, and where a lot of effort is being put, is figuring out exactly how to "lengthen the leash" of AI through smart framing, careful context manipulation and structured requests. We can obviously let anyone vibe code a lot further if we abstract different elements into known, well-tested pieces and simply let the LLM stitch them together. That would allow much larger projects with a much higher success rate. In other words, I expect an AI Zapier/Yahoo Pipes evolution.
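To make that concrete, here's a rough sketch of the "AI Zapier" idea in Python. Everything here is made up for illustration: call_llm() is a stand-in for a real model call, and the connector functions are hypothetical. The point is that the model only picks and orders pre-built, trusted pieces; it never writes the glue code itself, and anything outside the known set gets rejected.

    # Sketch: LLM as a router over trusted connectors, not a code generator.
    # call_llm() and the connectors below are hypothetical placeholders.
    import json

    def fetch_issues(data):                  # trusted, tested building block
        return data + ["issue-1", "issue-2"]

    def summarize(data):
        return [f"summary of {item}" for item in data]

    def post_report(data):
        print("posting:", data)
        return data

    CONNECTORS = {
        "fetch_issues": fetch_issues,
        "summarize": summarize,
        "post_report": post_report,
    }

    def call_llm(prompt: str) -> str:
        # stand-in for a real model call; assume it returns a JSON list of names
        return '["fetch_issues", "summarize", "post_report"]'

    def run_pipeline(request: str):
        plan = json.loads(call_llm(
            f"Pick connectors (from {list(CONNECTORS)}) to satisfy: {request}"))
        data = []
        for step in plan:
            if step not in CONNECTORS:       # reject anything outside the known set
                raise ValueError(f"unknown connector: {step}")
            data = CONNECTORS[step](data)
        return data

    run_pipeline("send me a summary of new issues")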
Lastly, I think his concept of only having AI push "under 1000 line PRs" that he carefully reviews is short-sighted. We are very, very early in learning how to control these big stupid brains. Incrementally, we will define sub-tasks that the AI can take over completely without anyone ever having to look at the code, because the output will always be within an accepted and tested range. The revolution will be at the middleware level.
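Something like this (again just a sketch, with generate_with_ai() as a made-up placeholder): the AI-owned sub-task is accepted only when its output lands inside a pre-agreed, tested range, and is retried or escalated otherwise, so nobody ever reads the code it ran.

    # Sketch: accept an AI sub-task result only if it passes a fixed acceptance
    # check; otherwise retry or escalate. generate_with_ai() is hypothetical.

    def generate_with_ai(task: str, attempt: int) -> float:
        # stand-in: pretend the model returns a numeric result for the sub-task
        return 0.97 if attempt > 0 else 1.42

    def run_subtask(task: str, lo: float, hi: float, max_attempts: int = 3) -> float:
        for attempt in range(max_attempts):
            result = generate_with_ai(task, attempt)
            if lo <= result <= hi:           # inside the accepted, tested range
                return result
        raise RuntimeError(f"{task!r} never produced an acceptable result")

    # accept only results between 0.9 and 1.1; anything else never reaches a human
    print(run_subtask("normalize exchange rate", lo=0.9, hi=1.1))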