Same here, and I think it's because I feel like a craftsman. I thoroughly enjoy the process of thinking deeply about what I will build, breaking down the work into related chunks, and of course writing the code itself. It's like magic when it all comes together. Sometimes I can't even believe I get to do it!
I've spent over a decade learning an elegant language that allows me to instruct a computer—and the computer does exactly what I tell it. It's a miracle! I don't want to abandon this language. I don't want to describe things to the computer in English, then stare at a spinner for three minutes while the computer tries to churn out code.
I never knew there was an entire subclass of people in my field who don't want to write code.
I want to write code.
* Being excited to be able to write the pieces of code they want, and not others. When you sit down to write code, you don't do everything from scratch; you lean on libraries, compilers, etc. Take the most annoying bit of boilerplate you have to write now - would you be happy if a new language/framework popped up that eliminated it?
* Being excited to be able to solve more problems, because the code is at times a means to an end. I don't find writing CSS particularly fun, but using LLMs I threw together a tool for making checklists for my kids in very little time, and it handled all of the CSS for printing vs. on-screen. I'm interested in solving an optimisation issue with testing right now, but not that interested in writing code to analyse test-case perf changes, so I had the latter written for me in very little time and it's great. It wasn't really a choice of me or the machine; I don't really have the time to focus on those tasks.
* Being excited that others can get the outcomes I've been able to get for at least some problems, without having to learn how to code.
As is tradition, to torture a car analogy, I could be excited for a car that autonomously drives me to the shops despite loving racing rally cars.
I personally don't like it when others who don't know how to code are able to get results using AI. I spent many years of my life and a small fortune learning scarce skills that everyone swore would be the last to ever be automated. Now, in a cruel twist of fate, those skills are being automated and there is seemingly no worthwhile job that can't be automated given enough investment. I am hopeful because the AI still has a long way to go, but even with the improvements it currently has, it might ultimately destroy the tech industry. I'm hoping that Say's Law proves true in this case, but even before the AI I was skeptical that we would find work for all the people trying to get into the software industry.
Those jobs still exist, but by and large they are either very niche or involve working with that tech in some way.
It is not wrong to feel down about the risk of so much time, training, etc. rapidly losing value. But it's also true that change isn't inherently bad, and sometimes that includes adjusting how we use our skills and/or developing new ones. Nobody gets to be elite forever; everyone is eventually replaced and becomes common or unneeded. So it's probably more helpful, for yourself and for those who may want to rely on you, to be forward-thinking rather than complaining. That doesn't mean you have to become pro-AI, but it may be helpful to be pragmatic and work where it can't.
As to work supply... I figure that will always be a problem as long as money is the main point of work. If people could just work where they specialize without so much concern for issues like not starving, maybe it would be different. I dunno.
Sounds like for many programmers AI is the new Visual Basic 6 :-P
AI is addressing that problem extremely well, but by putting up with it rather than actually solving it.
I don't want the boilerplate to be necessary in the first place.
There might have been people who were happy writing assembly who got bummed out about compilers. This AI stuff just feels like a new way to write code.
Inevitably, AI will write things in ways you don't intend. So now you have to prompt it to change them and hope it gets it right. Oh, it didn't. Prompt it again and maybe this time it will work. Will it get it right this time? And so on.
It's so good at a lot of things, but writing out whole features or apps, in my experience, seems good at first and then turns out to be a time sink of praying it will figure it out on the next prompt.
Maybe it's a skill issue for me, but I've gotten the most efficiency out of having it review code and pairing with it on ideas and problems, etc., rather than having it actually write the majority of the code.
It is really like micro-managing a very junior, very forgetful dev, but one who can read really fast (and they mostly remember what they read, for a few minutes at least; they actually know more about something than you do if they have a manual about it on hand). Of course, if you're just writing the code once, you don't bother with the junior dev and write the code yourself. But if you want long-term efficiency, you put the time into your team (and the team here is the AI).
Not everyone needs to be excited about LLMs, in the same way that C++ developers don't need to be excited about Python.
In my mind, writing the prompt that generates the code is somewhat analogous to writing the code that generates the assembly. (Albeit more stochastically, the way psychology research might be analogous to biochemistry research.)
Different experts are still required at different layers of abstraction, though. I don't find it depressing when people show preference for working at different levels of complexity / tooling, nor excitement about the emergence of new tools that can enable your creativity to build, automate, and research. I think scorn in any direction is vapid.
The upshot is, you have to review everything the LLM generates, because you can't predict the qualities or failures of its output. (You cannot reason in advance about what qualities and failures it definitely will or will not exhibit.) This is different from, say, using a compiler, whose output you generally don't have to review, and whose input-to-output relation you can reason about with precision.
Note: I'm not saying that using an LLM for coding is not workable. I'm saying that it lacks what people generally like about regular coding, namely the ability to reason with absolute precision about the relation between the input and the behavior of the output.
How any of that adds up to an investment portfolio manager describing LLM output as "world class code" is a mystery to me.
- Inverse career-growth structure and black-hole effect
Usually, an industry has a number of skills to hone: you start with the simple ones, and as you go you learn more, take on harder work, and earn more. The more you love it, the more you learn, and the better for you. That is evaporating. Worse, the people who don't love it get to run you over: you're now competing in the "LLM orchestration game", where the most mentally intense task is to chat with the CLI and check its output.
LLMs may also be all-encompassing. Even if I adapt and accept that software engineering is done for, I don't even foresee what I should learn now. My brainpower is not that great, and the places where LLMs can't beat humans probably require postgraduate-level intelligence, where I can't compete much either.
The way I see it, it's a middle-layer collapse.
Some people don't enjoy writing code and went into software development only because it's a well-paid, stable job. Now this trade is under threat, and they are happy to switch to prompting LLMs. I do like to code, so I use LLMs less than many of my colleagues.
Though I don't expect to see many from this crowd on HN; instead I expect to see entrepreneurs here who need a product to sell and don't care whether it is written by humans or by LLMs.
Maybe post-Renaissance many artists no longer had patrons, but nothing was stopping them from painting.
If your industry truly is going in the direction where there's no paid work for you to code (which is unlikely in my opinion), nobody is stopping you. It's easier than ever, you have decades of personal computing at your fingertips.
Most people with a thing they love do it as a hobby, not a job. Maybe you've had it good for a long time?
I could answer that nobody is forced to be a programmer. Most of us cannot manage to get any money for standing on the street and making faces. But we are not, as a result, condemned to spend our lives standing on the street making faces, and starving. We do something else.
Instead, I've reacted to the article from the opposite direction. All those grand claims about stuff this tech doesn't do and can't do. All that trying to validate the investment as rational when it's absolutely obvious it's at least 2 orders of magnitude larger than any arguably rational value.
You should never hope for a technology to not deliver on its promise. Sooner or later it usually does. The question is, does it happen in two years or a hundred years? My motto: don't predict, prepare.
Lots of wiggle room between "never" and "usually". We're not all riding Segways or wearing VR goggles. Seems wiser to work on a case-by-case basis here.
Really? Are you sure there isn't a lot of confirmation bias in this? Do you really have a good handle on 100-year-old tech hypes that didn't deliver? All I can think of is "flying everything".
Regardless of AI this has been years in the making. “Learn to code” has been the standard grinder cryptobro advice for “follow the money” for a while, there’s a whole generation of people getting into the industry for financial reasons (which is not wrong, just a big cultural shift).
Most of the world doesn’t care about “good code.” They care about “does it work, is it fast enough, is it cheap enough, and can we ship it before the competitor does?”
Beautiful architecture, perfect tests, elegant abstractions — those things feel deeply rewarding to the person who wrote them, but they’re invisible to users, to executives, and, let’s be honest, to the dating market.
Being able to refactor a monolith into pristine microservices will not make you more attractive on a date. What might is the salary that comes with the title “Senior Engineer at FAANG.” In that sense, many women (not all, but enough) relate to programmers the same way middle managers and VCs do: they’re perfectly happy to extract the economic value you produce while remaining indifferent to the craft itself. The code isn’t the turn-on; the direct deposit is.
That’s brutal to hear if you’ve spent years telling yourself that your intellectual passion is inherently admirable or sexy. It’s not. Outside our tribe it’s just a means to an end — same as accounting, law, or plumbing, just with worse dress code and better catering.
So when AI starts eating the parts of the job we insisted were “creative” and “irreplaceable,” the threat feels existential because the last remaining moat — the romantic story we told ourselves about why this profession is special — collapses. Turns out the scarcity was mostly the paycheck, not the poetry.
I’m not saying the work is meaningless or that system design and taste don’t matter. I’m saying we should stop pretending the act of writing software is inherently sexier or more artistically noble than any other high-paying skilled trade. It never was.
In my heart, I firmly believe in the ability of technology to uplift and improve humanity - and have spent much of my career grappling with the distressing reality that it also enables a handful of wealthy people to have near-total control of society in the process. AI promises a very hostile, very depressing, very polarized world for everyone but those pulling the levers, and I wish more people evaluated technology beyond the mere realm of Computer Science or armchair economics. I want more people to sit down, to understand its present harms, its potential future harms, and the billions of people whose lives it will profoundly and negatively impact under current economic systems.
It's equal parts sobering and depressing once you shelve personal excitement or optimism and approach it objectively. Regardless of its potential as a tool, regardless of the benefit it might bring to you, your work day, your productivity, your output, your ROI, I desperately wish more people would ask one simple question:
Is all of that worth the harm I'm inflicting on others?
Nothing is inevitable. Systems can be changed if we decide to do so, and AI is no different. To believe in inevitability is to embrace fatalism.
It can be an accelerator - it gets extremely common boilerplate text work out of the way. But it can't replace any job that requires a functioning brain, since LLMs do not have one, and never will.
But in the end it doesn't matter. Companies do whatever they can to slash their labor requirements, pay people less, dodge regulations, etc. If not 'AI' it'll just be something else.
(I'd also argue that "understanding" and "functional brain" are unfalsifiable comparisons. What exactly distinguishes a functional brain from a Turing machine? Chess once required a functional brain to play, but has now been surpassed by computation. Saying "jobs that require a human brain" is tautological without any further distinction.)
Of course, LLMs are definitely missing plenty of brain skills, like working in continuous time, with persistent state, with agency, in physical space, etc. But to say that an LLM "never will" is either semantic (you might call it something other than an LLM once next-generation capabilities are integrated), tautological (once it can do a human job, it's no longer a job that requires a human), or anthropocentric hubris.
That said, who knows what the time scale looks like for realizing such improvements (decades, centuries, millennia).
Just look at who is building, funding, and promoting these models! I can't think of a group of people less interested in helping millions of plebs lead higher quality lives if it costs them a penny to do it.
This artificial creativity will only go so far, because it's a simulated semblance of human creativity, as much as could be gathered from training data. If not continually refueled by new training data, it will run out sooner or later. And then it will get boring really quickly.
https://www.youtube.com/watch?v=_zfN9wnPvU0
Drives people insane:
https://www.youtube.com/watch?v=yftBiNu0ZNU
And LLMs are economically and technologically unsustainable:
https://www.youtube.com/watch?v=t-8TDOFqkQA
These have already proven that it will be unconstrained if AGI ever emerges.
https://www.youtube.com/watch?v=Xx4Tpsk_fnM
The LLM bubble will pass, as it is already losing money with every new user. =3
Now imagine a different sort of company. A little shop where the owner's first priority is actually to create good jobs for their employees that afford a high quality life. A shop like that needn't worry about AI.
It is too bad that we put so much stock as a society in businesses operating in this dehumanizing capacity instead of ones that are much more like a family unit trying to provide for each other.
> This strikes me as paradoxical given my sense that one of AI’s main impacts will be to increase productivity and thus eliminate jobs.
The allegation that an "increase of productivity will reduce jobs" has been proven false by history over and over again; it's so well known that it has a name, the "Jevons Paradox" or "Jevons Effect" [0].
> In economics, the Jevons paradox (sometimes Jevons effect) occurs when technological advancements make a resource more efficient to use [...] results in overall demand increasing, causing total resource consumption to rise.
The "increase in productivity" does not inherently result in less jobs, that's a false equivalence. It's likely just as false as it was in 1915 with the the assembly line and the Model T as it is in 2025 with AI and ChatGPT. This notion persists because as we go through inflection points due to something new changing up market dynamics, there is often a GROSS loss (as in economics) of jobs that often precipitates a NET gain overall as the market adapts, but that's not much comfort to people that lost or are worried about losing their jobs due to that inflection point changing the market.
The two important questions in that context, for individuals in the job market during those inflection points (like today), are: "How difficult is it to adapt (to either not lose a job, or to benefit from or be a part of that net gain)?" and "Should you adapt?" After all, the skillsets the market demands and the skillsets it supplies are not objectively quantifiable things; the presence of speculative markets is proof that this is subjective, not objective. Anyone who's ever been involved in the hiring process knows just how subjective this is. Which leads me to:
> the promise is about replacing human creativity with artificial creativity which.. is certainly new and unwelcome.
Disagree that that's what the promise is about. That IS happening, I don't disagree there, but that's not the promise corporate is so hyped about. If we're being honest and not trying to blow smoke up people's asses to artificially inflate "value," AI is fundamentally about being more OBJECTIVE than SUBJECTIVE with regard to the costs and resources of labor and its outputs. Anyone who knows what OKRs are and has been subject to a "performance review" at a self-professed "data-driven company" knows how much modern corporate America, especially the tech market, loves its "quantifiables." It's less about how much better it can allegedly do something than about the promise of how much "better" it can be quantified vs. human labor. As long as AI has at least SOME proven utility (which it does), this promise of quantifiables, combined with its other inherent potential benefits (doesn't need time off, doesn't sleep, doesn't need retirement/health benefits, no overtime pay, no regulatory limitations on hours worked, no "minimum wage"), means that so long as the monied interests perceive it as continuing to improve, they can dismiss its inefficiencies/ineffectiveness in X or Y by pointing to its potential to overcome them eventually.
It's the fundamental reason people are so concerned about AI replacing humans. Especially when you consider that one of the things AI excels at is quickly delivering an answer with confidence (people are impressed by speed and are suckers for confidence), and another big strength is its ability to deal with repetitive minutiae in known and solved problem spaces (a mainstay of many office jobs). It can also bullshit with the best of them, fluff your ego as much as you want (and even when you don't), and almost never says "No" or "You're wrong" unless you ask it to.
In other words, it excels at the performative and repetitive bullshit and blowing smoke up your boss' ass and empowers them to do the same for their boss further up the chain, all while never once ruffling HR's feathers.
Again, it has other, much more practical and pragmatic utility too, it's not JUST a bullshit oracle, but it IS a good bullshit oracle if you want it to be.
I don't understand why people who seem to hate the modern world so much continue to live in it, and complain on the internet, when they have the option to live differently.
All it takes for evil to prevail is for good people to sit by and do nothing. Don't like the situation you're in? Do something about it. Preferably something other than doomscrolling, but hey, you do you.
It's true that the technology currently works as an excellent information gathering tool (which I am happy to be excited about) but that doesn't seem to be the promise at this point, the promise is about replacing human creativity with artificial creativity which.. is certainly new and unwelcome.