I have done some romhacks, so I have seen what compilers have done to assembly quality and readability. When I hear programmers complain that debugging AI-written code is harder than just writing it yourself, that's probably exactly how assembly coders felt when they saw what compilers produce.
One can regret the loss of elegance and beauty while accepting the economic inevitability.
LLMs are nowhere close to that level today. They spit out a bunch of mediocre code that the programmer then has to maintain and fix bugs in. And the larger and more complicated the code base, the harder it is for the LLM to work with. There's a huge leaky abstraction in going from initial vibe coding to having to dig into the weeds: fixing bugs written by another human is difficult enough, let alone bugs written by some random LLM.
I think there is a difference here, for the future of humanity, that has no precedent in our tool-making history.
There have been a lot of innovations that saved us from doing heavy thinking ourselves. Think calculators. We will just move up to a higher level of problem to solve. Software development is a means to an end: instead of thinking hard about coding, we should be thinking hard about the problem being solved. That will be the future of the craft.
It's one of those problems that seems easy but isn't. The issue seems to arise when we let an aid for a process replace gaining the knowledge behind the process. You at least need to know what you don't know, so you can develop an intuition for when outputs don't (or might not) make sense.
https://chadnauseam.com/coding/random/calculator-app
(recently: https://www.hackerneue.com/item?id=43066953)
It's especially silly because one thing calculators are known for is being inconsistent about order of operations between designs.
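For anyone who hasn't run into this: a basic four-function calculator typically applies each operation as it's keyed in, while a scientific calculator honors operator precedence, so the same keystrokes can give different answers. A minimal Python sketch of the two styles (the names and structure are mine, purely illustrative):

    # Illustrative only: two common calculator evaluation styles,
    # applied to the keystrokes 1 + 2 * 3.
    import operator

    OPS = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}

    def immediate_execution(tokens):
        # Four-function style: apply each operator as soon as
        # the next number is entered (strictly left to right).
        acc = tokens[0]
        for op, num in zip(tokens[1::2], tokens[2::2]):
            acc = OPS[op](acc, num)
        return acc

    def algebraic(tokens):
        # Scientific style: * and / bind tighter than + and -.
        # First collapse the * and / runs, then fold + and - left to right.
        terms = [tokens[0]]
        for op, num in zip(tokens[1::2], tokens[2::2]):
            if op in ("*", "/"):
                terms[-1] = OPS[op](terms[-1], num)
            else:
                terms += [op, num]
        acc = terms[0]
        for op, num in zip(terms[1::2], terms[2::2]):
            acc = OPS[op](acc, num)
        return acc

    keys = [1, "+", 2, "*", 3]
    print(immediate_execution(keys))  # 9, i.e. (1 + 2) * 3
    print(algebraic(keys))            # 7, i.e. 1 + (2 * 3)

Same buttons, two defensible answers, depending on which design you happen to be holding.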
The value of most of us as coders will drop to zero, but the skills of those doing genuinely novel work will remain valuable for the foreseeable future: LLMs can't parrot out what's not in their training set.
> It’s really about basic human personal insecurity, and we all have that, to some degree. Getting around it, is a big part of growing up (...)
I believe he's right.
It makes me think back to my teenage years, when I first learned to program because I wanted to make games. Within the amateur gamedev community, we had this habit of sneering at "clickers" - Klik&Play & other kinds of software we'd today call "low-code", that let you make games with very little code (almost entirely game logic, and most of it "clicked out" in GUI), and near-zero effort on the incidental aspects like graphics, audio, asset management, etc. We were all making (or pretending to make) games within scope of those "clickers", but using such tools felt like cheating compared to doing it The Right Way, slinging C++ through blood, sweat and tears.
It took me over a decade to finally realize how stupid that perspective was. Sure, I've learned a lot; a good chunk of my career skills date back to those years. However, whatever technical arguments we leveled against "clickers", most of them were bullshit. In reality, this was us trying to feel better and special for doing things The Hard Way, instead of "taking shortcuts" like those lazy people... who, unlike us, actually released some playable games.
I hear echoes of this mindset in a lot of "LLMs will rot your brain" commentary these days.
--
Humans are tool users. It is very difficult to pick a point in time and say, "it was here that we crossed the Rubicon". Was it the advent of spoken word? Written word? Fire? The wheel? The horse? The axe? Or in more modern times, the automobile, calculator, or dare I say the computer and the internet?
"With the arrival of electric technology, man has extended, or set outside himself, a live model of the central nervous system itself. To the degree that this is so, it is a development that suggests a desperate suicidal autoamputation, as if the central nervous system could no longer depend on the physical organs to be protective buffers against the slings and arrows of outrageous mechanism."
― Marshall McLuhan, Understanding Media: The Extensions of Man
Are we at battle with LLMs or with humanity itself?
AI-generated code? That seems a long way off from being a generally solved problem in an iterative SDLC at a modern tech company trying to get leaner, disrupt markets, and survive in a complex world. I, for one, very much support it for engineers with enough unaided experience under their belt to judge the output, but the idea that we're going to train new devs at unfamiliar companies on this stuff? Yikes.
Note: I'm not saying this is a conspiracy or a cabal - it's a local phenomenon happening everywhere. Lots of people with malicious intent, and even more who are indifferent to the fate of others, get to make decisions that hurt others, and they hide behind LLCs and behind tech and technologists, all of which get blamed instead.