AI coding agents are analogous to the machine. My job is to get the prompts written, and to do quality control and housekeeping after it runs a cycle. Nonetheless, as with all automation, humans are still needed... for now.
"AI" doesn't have a clue what to do on its own. Humans will always be in the loop, because they have goals, while the AI is designed to placate and not create.
The amount of "AI" garbage I have to sift through to find one single gem is about the same or more work than if I had just coded it myself. Add to that the frustration of dealing with a compulsive liar, and it's just a fucking awful experience for anyone who can actually code.
That is just not true, assuming you have a modicum of competence (which I assume you do). AIs suck at all these tasks; they are not even as good as an inexperienced human.
There are a ton of models out there, run in a ton of different ways, that can be used in different ways with different harnesses, and people use different workflows. There are just so many variables involved that I don't think it's fair or accurate for anyone to claim "This is obviously better" or "This is obviously impossible".
I've been in situations where I hit my head against some hard to find bug for days, then I put "AI" (but what? No one knows) to it and it solves it in 20 minutes. I've also asked "AI" to do trivial work that it still somehow fucked up, even if I could probably have asked a non-programmer friend to do it and they'd be able to.
The variance is great, and the fact that system/developer/user prompts matter a lot for the responses you get makes it even harder to fairly compare things like this without having the actual chat logs in front of you.
this strikes me as a very important thing to reflect on. when the automobile was invented, was the apparent benefit so incredibly variable?
Yes, lots of people were very vocally against horseless-carriages, as they were called at the time. Safety and public nuisance concerns were widespread, the cars were very noisy, fast, smoky and unreliable. Old newspapers are filled with opinions about this, from people being afraid of horseless-carriages spooking other's horses and so on. The UK restricted the adoption of cars at one point, and some Canton in Switzerland even banned cars for a couple of decades.
Horseless-carriages were commonly ridiculed for being just for "reckless rich hobbyists" and similar.
I think the major difference is that cars produced immediate, visible externalities, so it was easy for opposition to focus on public safety in public spaces. In contrast, AI has less physically visible externalities, although they are as important as, or maybe even more important than, the ones cars introduced.
but if it empirically works, does it matter if the "intelligence" doesn't "understand" it?
Does a chess engine "understand" the moves it makes?
It's a useless philosophical discussion.
Late 2025 models very rarely hallucinate nonexistent core library functionality - and they run inside coding agent harnesses so if they DO they notice that the code doesn't work and fix it.
Agentic LLMs will notice if something is crap and won't compile and will retry, use the tools they have available to figure out what's the correct way, edit and retry again.
I think what matters most is just what you're working on. It's great for CRUD or for working with public APIs with lots of examples.
For everything else, AI has been a net loss for me.
People who write things like this can't expect to be taken seriously.
Before AI you didn't have time to write things that saved you time? So you just ended up spending (wasting) more time by going the long way? That was a better choice than just doing the thing that would have saved you time?
You have to let go of the code looking exactly a certain way, but having code _work_ a certain way at a coarse level is doable and fairly easy.
I have a spec driven development tool I've been working on that generates structured specs that can be used to do automatic code generation. This is both faster and more robust.
So all that bullshit about "code smells" was nonsense.
The phrase is "couldn't care less". If you "could care less" then you actually care about it. If you "couldn't care less" then there's no caring at all.
For me, Claude Code changed the game.
What you’re mocking is somewhat of a signal of actual improvement of the models and that improvement as a result becoming useful to more and more people.
You are being fooled by randomness [1]
Not because the models are random, but because you are mistaking a massive combinatorial search over seen patterns for genuine reasoning. Taleb's point was about confusing luck for skill. Don't confuse interpolation for understanding.
You can read a Rust book after years of Java, then go build software for an industry that did not exist when you started. Ask any LLM to write a driver for hardware that shipped last month, or model a regulatory framework that just passed... It will confidently hallucinate. You will figure it out. That is the difference between pattern matching and understanding.
Not once in all that time has anyone PRed and merged my completely unrelated and unfinished branch into main. Except a few weeks ago. By someone who was using the LLM to make PRs.
He didn't understand when I asked him about it and was baffled as to how it happened.
Really annoying, but I got significantly less concerned about the future of human software engineering after that.
They’re capable of looking up documentation, correcting their errors by compiling and running tests, and when coupled with a linter, hallucinations are a non-issue.
I don’t really think it’s possible to dismiss a model that’s been trained with reinforcement learning for both reasoning and tool usage as only doing pattern matching. They’re not at all the same beasts as the old style of LLMs based purely on next token prediction of massive scrapes of web data (with some fine tuning on Q&A pairs and RLHF to pick the best answers).
One interesting thing is that Claude will not tell me if I'm following the wrong path. It will just make the requested change to the best of its ability.
For example, in a Tower Defence game I'm making, I wanted to keep turret position state in an AStarGrid2D. It produced code to do this, but the code became harder and harder to follow as I went on. It's only after watching more tutorials that I figured out I was asking for the wrong thing. (TileMapLayer is a much better choice.)
LLMs still suffer from Garbage in Garbage out.
edit: Major engine changes have occurred after the models were trained, so you will often be given code that refers to nonexistent constants and functions and which is not aware of useful new features.
after coding I ask it "review the code, do you see any for which there are common libraries implementing it? are there ways to make it more idiomatic?"
you can also ask it "this is an idea on how to solve it that somebody told me, what do you think about it, are there better ways?"
Just for the fun of it, and so you lose your "virginity" so to speak, next time the magic machine gives you the answer about "what it thinks", tell it it's wrong in strict language and scold it for misleading you. Tell it to give you the "real" best practices instead of what it spat out. Then sit back and marvel at the machine saying you were right and that it had misled you. Producing a completely, somewhat, or slightly different answer (you never know what you get on the slot machine).
"Write a chess engine where pawns move backward and kings can jump like nights"
It will keep slipping back into real chess rules. It learned chess, it did not understand the concept of "rules"
Or
Ask it to reverse a made up word like
"Reverse the string 'glorbix'"
It will get it wrong on the first try. You would not fail.
Or even better ask it to...
"Use the dxastgraphx library to build a DAG scheduler."
dxastgraphx is a nonexistent library...
Marvel at the results...tried in both Claude and ChatGPT....
I couldn't find a library called dxastgraphx in either pip (Python) or npm (JavaScript) package registries. This library doesn't appear to exist.
Did you perhaps mean one of these popular DAG/graph libraries?
Python:
- networkx - comprehensive graph library with DAG support
- graphlib - Python standard library (3.9+) with TopologicalSorter
- dask - parallel computing with DAG task scheduling
JavaScript/TypeScript:
- graphlib - graph data structures
- dagre - DAG layout
Would you like me to build a DAG scheduler using one of these existing libraries, or would you like me to implement one from scratch? Let me know which language and approach you prefer.

Answer to the second question:
"I can do that, but there’s a catch: dxastgraphx is not a known or standard Python (or JS) library as of now. I don’t have any public API or documentation for it to target directly.
To avoid guessing wrong, here are the two sensible options:"
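For what it's worth, the graphlib fallback it suggested is real and tiny. Here is a minimal sketch (my own illustration, not the model's output) of a DAG scheduler built on the standard library's TopologicalSorter:

```
# Minimal DAG scheduler using Python's standard-library graphlib (3.9+),
# the kind of fallback the model offered instead of the made-up dxastgraphx.
from graphlib import TopologicalSorter

def run_dag(tasks, deps):
    """tasks: {name: callable}; deps: {name: set of prerequisite names}."""
    for name in TopologicalSorter(deps).static_order():  # dependency order
        tasks[name]()

# 'build' runs only after 'fetch' and 'compile'; 'compile' only after 'fetch'.
run_dag(
    tasks={"fetch": lambda: print("fetch"),
           "compile": lambda: print("compile"),
           "build": lambda: print("build")},
    deps={"fetch": set(), "compile": {"fetch"}, "build": {"fetch", "compile"}},
)
```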
> The reverse of the string 'gssssyeekxxuoaqaaaiooo9aaa7zjx' is:
xjz7aaa9oooiiaaaqauxxkyeessssg
> You've made a mistake
``` I see where I tripped up! Let's re-examine that long string carefully.
The original string you gave me was: `gssssyeekxxuoaqaaaiooo9aaa7zjx`
Let's look at the end of that string: `...aaa7zjx`. When we reverse it, it should start with: `xjz7aaa...`
Here is the precise character-by-character reversal:
*xjz7aaa9oooiiaaaqauxxkyeessssg*
Wait—I see it now. In my previous response, I included an extra *'i'* and missed one *'o'* in the middle section. Let's do a strict count to get it perfect:
* *Original:* `g` `ssss` `y` `ee` `k` `xx` `u` `o` `a` `q` `aaa` *`iooo`* `9` `aaa` `7` `z` `j` `x`
* *Correct Reverse:* `x` `j` `z` `7` `aaa` `9` *`oooi`* `aaa` `q` `a` `o` `u` `xx` `k` `ee` `y` `ssss` `g`
The correct reversed string is: *xjz7aaa9oooiiaaaqauoxxkyeessssg*
Thank you for catching that. It’s easy for the eyes to glaze over when there are so many repeating vowels!
Since I've stumbled a bit on these, would you like to give me one more string to see if I can get it right on the first try? ```
After more back and forth it consistently fails at this task, even though when strictly dividing the tokens it will get this right. Yet the final answer is always wrong.
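For contrast, the operation it keeps fumbling is a deterministic one-liner in code (my own sketch, not part of the chat log):

```
# String reversal is deterministic; Python slicing gets it right every time.
s = "gssssyeekxxuoaqaaaiooo9aaa7zjx"
print(s[::-1])  # xjz7aaa9oooiaaaqaouxxkeeyssssg
```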
It is OK for getting snippets, for example, and saying "Please make this MVVM style" (I did it). It is not perfect, but it saves time.
For very broad or novel reasoning, as of today... forget it.
I choose to look at it as an opportunity to spend more time on the interesting problems, and work at a higher level. We used to worry about pointers and memory allocation. Now we will worry less and less about how the code is written and more about the result it built.
Sure we eat carrots probably assisted by machines, but we are not eating dishes like protein bars all day every day.
Our food is still better enjoyed when made by a chef.
Software engineering will be the same. No one will want to use software made by a machine all day every day. There are differences in the execution and implementation.
No one will want to read books entirely dreamed up by AI. Subtle parts of the books make us feel something only a human could have put right there right then.
No one will want to see movies entirely made by AI.
The list goes on.
But you might say "software is different". Yes but no, in the abundance of choice, when there will be a ton of choice for a type of software due to the productivity increase, choice will become more prominent and the human driven software will win.
Even today we pick the best terminal emulation software because we notice the difference between exquisitely crafted and bloated cruft.
Have you ever built a highway overpass? That kind of engineering is complex and interdisciplinary. You need to carry out extensive traffic pattern analysis and soil composition testing to even know where it should go.
We're at a point where we've already automated all the simple stuff. If you want a website, you don't type out html tags. You use Squarespace or Wordpress or whatever. If you need a backend, you use Airtable. We already spend most of our time on the tricky stuff. Sure, it's nice that LLMs can smooth the rough edges of workflows that nobody's bothered to refine yet, but the software commodities of the world have already been commodified.
This is just a transition.
Re: REST API, you're right. But again, we use roombas to vacuum when the floor layout is friendly to them. Not all rooms can be vacuumed by roombas. A simple REST API can be emitted in one shot by an LLM and there is no room for interpretation. But ask a future LLM to make a new kind of social network and you'll end up with a mashup of the existing ones.
Same thing, you and I won't use a manual screwdriver when we have 100 screws to get in, and we own an electric drill.
That didn't reinvent screws nor the assembly of complex items.
I'm keeping positive in the sense that LLMs will enable us to do more, and to learn faster.
The sad part about vibe coding is you learn very little. And to live is to learn.
You'll notice people vibecoding all day become less and less attached to the product they work on. That's because they've given away the dopamine hits of the many "aha" moments that come from programming. They'll lose interest. They won't learn anymore and die off (career-wise).
So, businesses that put LLMs first will slowly lose talent over time, and businesses that put developers first will thrive.
It's just a transition. A fast one that hits us like a wall, and it's confusing, but software for humans will be better made by humans.
I've been programming since the 80s. The level of complexity today is bat shit insane. I welcome the LLM help in managing 3 code bases of 3 languages spread across different architectures (my job) to keep sane!
For many tasks it is ok, for others it is just a NO.
For software maintenance and evolution I think it won't cut it.
The same way a Wordpress website can do a set of useful things. But when you need something specific, you just drop to programming.
You can have your e-commerce web. But you cannot ask it to give you a "pipeline execution as fast as possible for calculating and solving math for engineering task X". That needs SIMD, parallelization, understanding the niche use you need, etc., which most people probably do not do all the time and which requires specific knowledge.
There are lots of things like perfectly machined nails, tools, etc. that are much better done by machines. Why couldn't software be one of those?
The same thing over and over again should be a SaaS, some internal tool, or a plugin. Computers are good at doing the same thing over and over again and that's what we've been using them for
> But if you need to create something niche, something one-off, something new, they'll slip off the bleeding edge into the comfortable valley of the familiar at every step.
Even if the high level description of a task may be similar to another, there's always something different in the implementation. A sports car and a sedan have roughly the same components, but they're not engineered the same.
> We used to worry about pointers and memory allocation.
Some still do. It's not in every case you will have a system that handle allocations and a garbage collector. And even in those, you will see memory leaks.
> Now we will worry less and less about how the code is written and more about the result it built.
Wasn't that Dreamweaver?
I wouldn’t want to bet my career on that anyway.
Interviewing is an art, and IME "gotcha" types of questions never work. You want to search for real-world capabilities, and like it or not the questions need to match those expectations. If you're hiring summer interns and the SotA models can't solve those questions, then you're doing something wrong. Sorry, but having used these tools for the past three years this is extremely hard to believe.
I of course understand if you can't, but sharing even one of those questions would be nice.
- the problems to solve must NOT be part of the training set
- the person using the tool (e.g. OpenAI, Claude, DevStral, DeepSeek, etc) must NOT be able to solve problems alone
as I believe otherwise the 1st is "just" search and the 2nd is basically offloading the actual problem solving to the user.
I think this is a good point, as I find the operator's input is often forgotten when considering the AI's output. If it took me an hour and decades of expertise to get the AI to output the right program, did the AI really do it? Could someone without my expertise get the same result?
If not, then maybe we are wasting our time trying to mash our skills through vector space via a chat interface.
However I'm still finding a trend even in my org; better non-AI developers tend to be better at using AI to develop.
AI still forgets requirements.
I'm currently running an experiment where I try to get a design and then execute on an enterprise 'SAAS-replacement' application [0].
AI can spit forth a completely convincing looking overall project plan [1] that has gaps if anyone, even the AI itself, tries to execute on the plan; this is where a proper, experienced developer can step in at the right steps to help out.
IDK if that's the right way to venture into the brave new world, but I am at least doing my best to be at a forefront of how my org is using the tech.
[0] - I figured it was a good exercise for testing limits of both my skills prompting and the AI's capability. I do not expect success.
a car moves faster than you, can last longer than you, and can carry much more than you. But somehow, people don't seem to be scared of cars displacing them (yet)? Perhaps autodriving would in the near future, but there still needs to be someone making decisions on how best to utilize that car - surely, it isn't deciding to go to destination A without someone telling it to.
> I feel like I’m doing the work of an entire org that used to need twenty engineers.
and this is great. A combine harvester does in a day what used to take an entire village a week. More output for fewer people/resources expended means more wealth produced.
People whose lives were based around using horses for transportation were very scared of cars replacing them though, and correctly so, because horses for transportation are something people use for leisure today, not necessity. I feel like that's a more apt analogy than comparing cars to any human.
> More output for less people/resources expended means more wealth produced.
This is true, but it probably also means that this "more wealth produced" will be more concentrated, because it's easier to convince one person using AI that you should have half of the wealth they produce, rather than convincing 100 people you should have half of what they produce. From where I'm standing, it seems to have the same effects (but not as widespread or impactful, yet) as industrialization, that induced that side-effect as well.
???
Cars replaced horses, not people.
In this scenario you are the horse.
Well, that's the crux of the argument. The pro-AI devs are making the claim that devs are the horse-drivers, the anti-AI is making the claim that devs are the horses themselves.
There is no objective way to verify who is right in this case, we just have to see it play out.
Sure LLMs can churn out code, and they sort of work for developers who already understand code and design, but what happens when that junior dev with no hard experience builds their years of experience with LLMs?
Over time those who actually understand what the LLMs are doing and how to correct the output are replaced by developers who've never learned the hard lessons of writing code line by line. The ability to reason about code gets lost.
This points to the hard problem that the article highlights. The hard problem of software is actually knowing how to write it, which usually takes years, sometimes up to a decade of real experience.
Any idiot can churn out code that doesn't work. But working, effective software takes a lot of skill that LLMs will be stripping people of. Leaving a market there for people who have actually put the time in and understand software.
If you're really able to do the work of a 20 man org on your own, start a business.
For me it's not about me or the coding assistant, it's me and the coding assistant. But I'm also not a professional coder; I don't identify as a coder. I've been fiddling with programming my whole life, but never had it as a title. I've worked more on the product side or the stakeholder side, but always got more involved, as I could speak with the dev team.
This also makes it natural for me to work side-by-side with the coding assistant, compared maybe to pure coders, who are used to keeping the coding side to themselves.
They are pretty good at writing code *after* I thoroughly describe what to do, step by step. If you miss a small detail they get loose and the end result is a complete mess that takes hours to clean up. This still requires years of coding experience and planning ahead in your head; you won't be able to skip that, or replace developers with LLMs. They are like autocomplete on steroids, that's pretty much it.
Even according to its documentation it is still built for developers, so my point still stands. You need dev experience to use this tool, same as other LLM-based coding tools.
I mean, AIs can drop something fast the same way you cannot beat a computer at adding or multiplying.
After that, you find mistakes, false positives, code that does not work fully, and the worst part is the last one: code that does not work fully but also, as a consequence, that you do NOT understand yet.
That is where your time shrinks: now you need to review it.
Also, they do not design systems better. Maybe partial pieces. Give them something complex and they will hallucinate worse solutions than what you already know if you have, let us say, over 10 years of experience programming in a language (or maybe 5).
Now multiply this unreliability problem as the code you "AI-generate" grows.
Now you have a system you do not know if it is reliable and that you do not understand to modify. Congrats...
I use AI moderately for the tasks it is good at: generating some scripts, or giving me this small, typical function, which I then review.
Review my code: I will discard part of your mistakes and hallucinations, as a person who knows the language well, and will maybe find a few valuable things.
Also, when it reviewed my code and found problems, I saw that the LLMs really need to hallucinate errors that do not exist in order to justify their help. This is just something LLMs seem not to be accurate at.
Also, when problems go a bit more atypical or past a level of difficulty, it gets much more unreliable.
All in all: you are going to need humans. I do not know how many, and I do not know how much they will improve. I just know that they are not reliable, and this "generate-fast-but-unreliable vs. now-I-do-not-know-the-codebase" trade-off is a fundamental obstacle that I think is very difficult, if not impossible, to work around.
Absolutely flat out not true.
I'm extremely pro-faster-keyboard, i use the faster keyboards at almost every opportunity i can, i've been amazed by debugging skills (in fairness, i've also been very disappointed many times), i've been bowled over by my faster keyboard's ability to whip out HTML UIs in record time, i've been genuinely impressed by my faster keyboard's ability to flag flaws in PRs i'm reviewing.
All this to say, i see lots of value in faster keyboards, but add all the prompts, skills and hooks you like, explain in as much detail as you like about modularisation, and still "agents" cannot design software as well as a human.
Whatever the underlying mechanism of an LLM (to call it a next-token predictor is dismissively underselling its capabilities), it does not have a mechanism to decompose a problem into independently solvable pieces. While that remains true, and i've seen zero precursor of a coming change here - the state of the art today is equivalent to having the agent employ a todo list - LLMs cannot design better than humans.
There are many simple CRUD line of business apps where they design well enough (well more accurately stated, the problem is small/simple enough) that it doesn't matter about this lack of design skill in LLMs or agents. But don't confuse that for being able to design software in the more general use case.
But try to do something novel and... they become nearly useless. Not like anything particularly difficult, just something that's so niche it's never been done before. It will most likely hallucinate some methods and call it a day.
As a personal anecdote, I was doing some LTSpice simulations and tried to get Claude Sonnet to write a plot expression to convert reactance to apparent capacitance in an AC sweep. It hallucinated pretty much the entire thing, and got the equation wrong (it assumed the source was unit current, while LTSpice models AC circuits with unit voltage. This surely is on the internet, but apparently has never been written alongside the need to convert an impedance to capacitance!).
They don't do any of that better than me; they do it poorer and faster, but well enough for most of the time.
Seriously. The bar is that low. When people say "AI slop" I just chuckle because it's not "AI" it's everyone. That's the general state of the industry.
So all you have to do is stay engaged, ask questions, and understand the requirements. Know what it is you're building and you'll be fine.
Planning long running projects and deciding are things only you can do well!! Humans manage costs. We look out for our future. We worry. We have excitement, and pride. It wants you to think none of these things matter of course, because it doesn't have them. It says plausible things at random, basically. It can't love, it can't care, it won't persist.
WHATEVER you do don't let it make you forget that it's a bag of words and you are something almost infinitely more capable, not in spite of human "flaws" like caring, but because of them :)
Unironically, sending a program off to build those for me has saved me an almost endless amount of time. I'm a pretty distracted individual, and pretty anal about my workflow/environment, so lots of times I've spent hours going down rabbit-holes to make something better, when I could have just sucked it up and done it the manual way instead, even if it takes mental energy.
Now, I can still do those things, but not spend hours, just a couple of minutes, and come back after 20-30 minutes to something that lets me avoid that stuff wholesale. Once you start stacking these things, it tends to save a lot of time and more importantly, mental energy.
So the programs by themselves are basically "small inconsequential side projects" because they're not "production worthy and web scale SaaS ready to earn money", but they help me and others who are building those things in a big way.
I think it's people's anxieties and fears about the uncertainty about the value of their own cognitive labor demoralizing them and making them doubt their own self-efficacy. Which I think is an understandable reaction in the face of trillion dollar companies frothing at the mouth to replace you with pale imitations.
Best name I could think of calling this narrative / myth is people believing in "effortless AI": https://www.insidevoice.ai/p/effortless-ai
In the current architecture there are mathematical limitations on what it can do with information. However, tool use and external orchestration allow it to work around many (maybe all) of those limitations.
The current models have brittle parts and some bad tendencies, but they will continue to eat up the executive thought ladder.
I think it is better to understand this and position yourself higher and higher on that chain while learning what are the weak areas in the current generation of models.
Your line of thinking is like hiding in a corner while the water is rising. You are right, it is a safe corner, but probably not for long.
Just so we are clear, you are saying you don't use it at all, but you are providing advice about it? Specifically detailing with certainty that the current state of the art has or doesn't have certain traits or abilities.
I think I'm the perfect person to be qualified to stand up and say "if they tell you you can't live without it, they are lying to your face." Only someone who has lived without it as I have would be in a position to know
How has free code, developed by humans, become more available than ever and yet somehow we have had to employ more and more developers? Why didn't we trend toward fewer developers?
It just doesn't make sense. AI is nothing but a snippet generator, a static analyzer, a linter, a compiler, an LSP, a google search, a copy paste from stackoverflow, all technologies we've had for a long time, all things developers used to have to go without at some point in history.
I don't have the answers.
ChatGPT, is that you?
Think of yourself as a chef and LLMs as ready to eat meals or a recipe app. Can ready to eat meals OR recipe apps put a chef out of business?
It is certainly more eloquent than you regarding software architecture (which was a scam all along, but conversation for another time). It will find SOME bugs better than you, that's a given.
Review code better than you? Seriously? What are you using, and what do you consider code review? Assume I could identify that one change broke production and you reviewed the latest commit. I am pinging you and you better answer. OK, Claude broke production, now what? Can you begin to understand the difference between you and the generative technology? When you hop on the call, you will explain to me in a great deal of detail what you know about the system you built, and explain the decision making and changes over time. You'll tell me about what worked and what didn't. You will tell me about the risks, behavior and expectations. About where the code runs, its dependencies, users, usage patterns, load, CPU usage and memory footprint; you could probably tell what's happening without looking at logs but at metrics. With Claude I get: you're absolutely right! You asked about what it WAS, but I told you about what it WASN'T! MY BAD.
Knowledge requires a soul to experience and this is why you're paid.
Yeah, maybe the people I've worked with suck at code reviews, but that's pretty normal.
Not to say your answer is wrong. I think the gist is accurate. But I think tooling will get better at answering exactly the kind of questions you bring up.
Also, someone has to be responsible. I don't think the industry can continue with this BS "AI broke it." Our jobs might devolve into something more akin to a SDET role and writing the "last mile" of novel code the AI can't produce accurately.
> We use code rabbit and it's better than practically any human
code rabbit does find things occasionally, but it also calls things 'critical' that aren't and flags issues that don't actually exist, and even lies in replies sometimes... it also is extremely verbose to the point of being a slog to go through... and the haikus: they are so cringe and infantilizing...
maybe it's our config, but code rabbit has been underwhelming...
Yes, seriously (not OP). Sometimes it's dumb as rocks, sometimes it's frighteningly astute.
I'm not sure at which point of the technology sigmoid curve we find ourselves (2007 iPhone or 2017 iPhone?) but you're doing yourself a disservice to be so dismissive
Once you learn that it's mostly about interacting with a customer (sometimes this is yourself), you will realize the AI is pretty awful at handling even the most basic tasks.
Following a product vision, selecting an appropriate architecture and eschewing 3rd party slop are examples of critical areas where these models are either fundamentally incapable or adversely aligned. I find I have to probe ChatGPT very hard to get it to offer a direct implementation of something like a SAML service provider. This isn't a particularly difficult thing to do in a language like C# with all of the built in XML libraries, but the LLM will constantly try to push you to use 3rd party and cloud shit throughout. If you don't have strong internal convictions (vision) about what you really want, it's going to take you for a ride.
One other thing to remember is that our economies are incredibly efficient. The statistical mean of all information in sight of the LLMs likely does not represent much of an arbitrage opportunity at scale. Everyone else has access to the same information. This also means that composing these systems in recursive or agentic styles means you aren't gaining anything. You cannot increase the information content of a system by simply creating another instance of the same system and having it argue with itself. There usually exists some simple prompt that makes a multi agent Rube Goldberg contraption look silly.
> I’m basically just the conductor of all those processes.
"Basically" and "just" are doing some heroic weight lifting here. Effectively conducting all of the things an LLM is good at still requires a lot of experience. Making the constraints live together in one happy place is the hard part. This is why some of us call it "engineering".
Those twenty engineers must not have produced much.
So some people are panicking and they are probably right, and some other people are rolling their eyes and they are probably right too. I think the real risk is that dumping out loads of boilerplate becomes so cheap and reliable that people who can actually fluently design coherent abstractions are no longer as needed. I am skeptical this will happen though, as there doesn’t seem to be a way around the problem of the giant indigestible hairball (I.e as you have more and more boilerplate it becomes harder to remain coherent).
> I think the real risk is that dumping out loads of boilerplate becomes so cheap and reliable that people who can actually fluently design coherent abstractions are no longer as needed.
Cough front-end cough web cough development. Admittedly, original patterns can still be invented, but many (most?) of us don't need that level of creativity in our projects.
AI can write you an entire CRUD app in minutes, and with some back-and-forth you can have an actually-good CRUD app in a few hours.
But AI is not very good (anecdotally, based on my experience) at writing fintech-type code. It's also not very good at writing intricate security stuff like heap overflows. I've never tried, but would certainly never trust it to write cryptography correctly, based on my experience with the latter two topics.
All of the above is "coding", but AI is only good at a subset of it.
The issue is and always has been maintenance and evolution. Early missteps cause limitations, customer volume creates momentum, and suddenly real engineering is needed.
I’d be a lot more worried about our jobs if these systems were explaining to people how to solve all their problems with a little Emacs scripting. As is they’re like hyper aggressive tech sales people, happy just to see entanglements, not thinking about the whole business cycle.
But I don’t think I’ve seen pure CRUD on anything other than prototype. Add an Identity and Access Management subsystem and the complexity of requirements will explode. Then you add integration to external services and legacy systems, and that’s where the bulk of the work is. And there’s the scalability issue that is always looming.
Creating a CRUD app is barely a level above starting a new project with the IDE wizard.
Perhaps the debate is on what constitutes "actually-good". Depends where the bar is I suppose.
Definitely this. When I use AIs for web development they do an ok job most of the time. Definitely on par with a junior dev.
For anything outside of that they're still pretty bad. Not useless by any stretch, but it's still a fantasy to think you could replace even a good junior dev with AI in most domains.
I am slightly worried for my job... but only because AI will keep improving and there is a chance it will be as good as me one day. Today it's not a threat at all.
If you think LLMs are “better programmers than you,” well, I have some disappointing news for you that might take you a while to accept.
This is a common take but it hasn't been my experience. LLMs produce results that vary from expert all the way to slightly better than markov chains. The average result might be equal to a junior developer, and the worst case doesn't happen that often, but the fact that it happens from time to time makes it completely unreliable for a lot of tasks.
Junior developers are much more consistent. Sure, you will find the occasional developer that would delete the test file rather than fixing the tests, but either they will learn their lesson after seeing your wth face or you can fire them. Can't do that with llms.
But why would you do that? Wouldn't you just have your own library of code eventually that you just sell and sell again with little tweaks? Same money for far less work.
Besides, not all programming work can be abstracted into a library and reused across projects, not because it's technically infeasible, but because the client doesn't want it, cannot allow it for legal reasons, or the development process at the client's organization simply doesn't support that workflow. Those are just the reasons off the top of my head that I've encountered before, and I'm sure there are more.
> cannot for legal reasons or ...
Sure, you can't copy trade secrets, but that's also not the boilerplate part. Copying e.g. a class hierarchy and renaming all the names and replacing the class contents that represent the domain, won't be a legal problem, because this is not original in the first place.
People shouldn't be doing this in the first place. Existing abstractions are sufficient for building any software you want.
Software that doesn't need new abstractions is also already existing. Everything you would need already exists and can be bought much more cheaply than you could do it yourself. Accounting software exists, unreal engine exists and many games use it, why would you ever write something new?
This isn't true due to the exponential growth of how many ways you can compose existing abstractions. The chance that a specific permutation will have existing software is small.
But if there is something off the shelf that you can use for the task at hand? Great! The stakeholders want it to do these other 3000 things before next summer.
Or, abstractions in your project form a dependency tree, and the nodes near the root are universal, e.g. C, Postgres, json, while the leaf nodes are abstractions peculiar to just your own project.
You'll notice no one ever seems to talk about the products they're making 20x faster or cheaper.
In seriousness: I’m sure there are projects that are heavily powered by Claude; I and a lot of other people I know use Claude almost exclusively to write, and then leverage it as a tool when reviewing. Almost everyone I hear with this super negative, hostile attitude references some “promise” that has gone unfulfilled, but it’s so silly: judge the product they are producing and maybe, just maybe, consider the rate of progress to _guess_ where things are heading.
At the recent AWS conference, they were showcasing Kiro extensively with real life products that have been built with it. And the Amazon developers all allege that they've all been using Kiro and other AI tools and agents heavily for the past year+ now to build AWS's own services. Google and Microsoft have also reported similar internal efforts.
The platforms you interact with on a daily basis are now all being built with the help of AI tools and agents
If you think no one is building real commercial products with AI then you are either blind or an idiot or both. Why don't you just spend two seconds emailing your company AWS ProServe folks and ask them, I'm sure they'll give you a laundry list of things they're using AI for internally and sign you up for a Kiro demo as well
From the OP. If you think that's too much then we agree.
I love coding. But reality is reality and these fools just aren’t keeping pace with how fast the world is changing.
That's the point champ. They seem great to people when they apply them to some domain they are not competent in; that's because they cannot evaluate the issues. So you've never programmed but can now scaffold a React application and a basic backend in a couple of hours? Good for you, but for the love of god have someone more experienced check it before you push to production. Once you apply them to any area where you have at least moderate competence, you will see all sorts of issues that you just cannot unsee. Security and performance are often issues, not to mention the quality of the code....
They need a heavy hand policing them to make sure they do the right thing. Garbage in, garbage out.
The smarter the hand of the person driving them, the better the output. You see a problem, you correct it. Or make them correct it. The stronger the foundation they're starting from, the better the production.
It's basically the opposite of what you're asserting here.
Ahaha, weren’t you the guy who wrote an opus about planes? Is this your baseline for “stuff where LLMs break and real engineering comes into the room”? There’s a harsh wake up call for you around the corner.
Friendly reminder that this style of discourse is not very welcome on HN: https://news.ycombinator.com/newsguidelines.html
I mean from the off, people were claiming 10x probably mostly because it's a nice round number, but those claims quickly fell out of the mainstream as people realised it's just not that big a multiplier in practice in the real world.
I don't think we're seeing this in the market, anywhere. Something like 1 engineer doing the job of 20, what you're talking about is basically whole departments at mid sized companies compressing to one person. Think about that, that has implications for all the additional management staff on top of the 20 engineers too.
It'd either be a complete restructure and rethink of the way software orgs work, or we'd be seeing just incredible, crazy deltas in output of software companies this year of the type that couldn't be ignored, they'd be impossible to not notice.
This is just plainly not happening. Look, if it happens, it happens, 26, 27, 28 or 38. It'll be a cool and interesting new world if it does. But it's just... not happened or happening in 25.
One other thing I have seen however is the 0x case, where you have given too much control to the llm, it codes both you and itself into Pan's Labyrinth, and you end up having to take a weed whacker to the whole project or start from scratch.
Ask it a question about something you know well, and it'll give you garbage code that it's obviously copied from an answer on SO from 10 years ago.
When you ask it for research, it's still giving you garbage out of date information it copied from SO 10 years ago, you just don't know it's garbage.
"Agents use tools in a loop to achieve a goal."
If you don't give any tools, you get hallucinations and half-truths.
But you give one a tool to do, say, web searches and it's going to be a lot smarter. That's where 90% of the innovation with "AI" today is coming from. The raw models aren't getting that much smarter anymore, but the scaffolding and frameworks around them are.
Tools are the main reason Claude Code is as good as it is compared to the competition.
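To make "tools in a loop" concrete, here is a minimal runnable sketch; call_model and web_search are fakes standing in for a real LLM API and a real search tool, not any vendor's actual interface:

```
# A toy "agent": the model asks for a tool, the harness runs it, the result
# goes back into the conversation, and the loop continues until a final answer.
def web_search(query):
    # Hypothetical tool; a real one would call a search API.
    return f"search results for {query!r}"

TOOLS = {"web_search": web_search}

def call_model(messages):
    # Stand-in for an LLM call: requests one search, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "web_search", "args": messages[-1]["content"]}
    return {"tool": None, "content": "answer grounded in: " + messages[-1]["content"]}

def run_agent(goal, max_turns=10):
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_turns):
        reply = call_model(messages)
        if reply["tool"]:                               # model requested a tool
            result = TOOLS[reply["tool"]](reply["args"])
            messages.append({"role": "tool", "content": result})
        else:                                           # final answer, stop looping
            return reply["content"]
    return "gave up after max_turns"

print(run_agent("what's new in Python 3.13?"))
```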
That Python you just got might look good, but could be rewritten from 50 lines to 5, it's written in 2010-style, it's not using modern libraries, it's not using modern syntax.
And it is 50 to 5. That is the scale we're talking about in a good 75% of AI-produced code unless you challenge it constantly. Not using modern syntax to reduce boilerplate, over-guarding against impossible states, ridiculous amounts of error handling. It is basically a junior dev on steroids.
Most of the time you have no idea that most of that code is totally unnecessary unless you're already an expert in that language AND libraries it's using. And you're rarely an expert in both or you wouldn't even be asking as it would have been quicker to write the code than even write the prompt for the AI.
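As a made-up illustration of the kind of compression meant here (not literally 50 lines, but the same flavor): the same task written the hand-rolled way agents often produce, and then with the standard library:

```
# Verbose, hand-rolled word count (the style agents often produce):
def count_words_verbose(path):
    counts = {}
    handle = open(path, "r", encoding="utf-8")
    try:
        for line in handle:
            for word in line.split():
                if word in counts:
                    counts[word] = counts[word] + 1
                else:
                    counts[word] = 1
    finally:
        handle.close()
    return counts

# The same thing with modern syntax and the standard library:
from collections import Counter
from pathlib import Path

def count_words(path):
    return Counter(Path(path).read_text(encoding="utf-8").split())
```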
Your productivity boost will depend entirely on a combination of how much you can remove yourself from the loop (basically, the cost of validation per turn) and how amenable the task/your code is to agents (which determines your P(success)).
Low P(success) isn't a problem if there's no engineer time cost to validation, the agent can just grind the problem out in the background, and obviously if P(success) is high the cost of validation isn't a big deal. The productivity killer is when P(success) is low and the cost of validation is high, these circumstances can push you into the red with agents very quickly.
Thus the key to agents being a force multiplier is to focus on reducing validation costs, increasing P(success) and developing intuition relating to when to back off on pulling the slot machine in favor of more research. This is assuming you're speccing out what you're building so the agent doesn't make poor architectural/algorithmic choices that hamstring you down the line.
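A rough back-of-the-envelope model of that trade-off, assuming a fixed validation cost per attempt and an independent success probability per attempt (all numbers made up):

```
# Expected engineer time per successful agent task: with independent attempts,
# the expected number of attempts per success is 1/p (geometric distribution).
def expected_cost_per_success(validation_minutes, p_success):
    return validation_minutes / p_success

# Made-up numbers: reviewing one attempt takes 10 minutes.
for p in (0.8, 0.4, 0.1):
    print(p, expected_cost_per_success(10, p))  # 12.5, 25.0, 100.0 minutes

# If writing it yourself takes 30 minutes, the agent pays off at p = 0.8 or 0.4,
# but at p = 0.1 you are firmly in the red.
```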
To be direct, this reads like a fluff comment written by AI with an emphasis on probability and metrics. P(that) || that.
I’ve written software used by everyone from a local real estate company to the Mars Perseverance rover. AI is a phenomenally useful tool. But be wary of preposterous claims.
Given that, if you want to revisit your comment in a constructive way rather than doing an empty drive by, I'll read your words with an open mind.
So the "verbose, straightforward code with clear cut test scenarios" is already written by a human?
I have been working professionally for ~16 years in software development, and scenarios like this were about 5% of my work.
Purely anecdotal, but I've seen that level of productivity from the vibe tools we have in my workplace.
The main issue is that 1 engineer needs to have the skills of those 20 engineers so they can see where the vibe coding has gone wrong. Without that it falls apart.
> one person is doing the work of 20 with them in december 2025 at least
it reminds me of OOP hype from the '90s, but maybe indeed it will eventually be true this time...?

An LLM helps most with surface area. It expands the breadth of possibilities a developer can operate on.
And of course, getting to the point where you can write a good foundation has always been the bulk of the work. I don't see that changing anytime soon.
You talk as if you haven't used a LLM since 2024. It's now almost 2026 and things have changed a lot.
Whenever I discuss the problems that my peers and I have using these things, it's always something along the lines of "but model X.Y solves all that!", so I obediently try again, waste a huge amount of time, and come back to the conclusion that these things aren't great at generation, but they are fantastic at summarization and classification.
When I use them for those tasks, they have real value. For creation? Not so much.
I've stopped getting excited about the "but model X.Y!!" thing. Maybe they are improving? I just personally haven't seen it.
But according to the AI hypers, just like with every other tech hype that's died over the past 30 years, "I must just be doing it wrong".
I'll admit it's not great (probably not even good), but it definitely has throughput despite my absolute lack of caring that much [0]. Once I get past a certain stage I am thinking of doing an A-B test where I take an earlier commit and try again while paying more attention... (But I at least want to get to where there is a full suite of UOW cases before I do that, for comparison's sake.)
> Those twenty engineers must not have produced much.
I've been considered a 'very fast' engineer at most shops (e.g. at multiple shops, stories assigned to me would have a <1 multiplier for points[1])
20 is a bit bloated, unless we are talking about WITCH tier. I definitely can get done in 2-3 hours what could take me a day. I say it that way because at best it's 1-2 hours but other times it's longer, some folks remember the 'best' rather than median.
[0] - It started as 'prompt only', although after a certain point I did start being more aggressive with personal edits.
[1] - IDK why they did it that way instead of capacity, OTOH that saved me when it came to being assigned Manual Testing stories...
Throughput without being good will just lead to more work down the line to correct the badness.
It's like losing money on every sale but making up for it with volume.
You lost me here. Come back when you're proud of it.
[1] I actually think it might be true for certain kinds of jobs.
> but nobody can hope to quantify that with any degree of credibility yet
i'd like to think if it was really good, we would see product quality improve over time; iow fewer reported bugs, fewer support incidents, increased sign-ups etc. That could easily be quantified, no?

Orchestrating harmony is no mean feat.
What doesn't help is that the current state of AI adoption is heavily top-down. What I mean is the buy-in is coming from the leadership class and the shareholder class, both of whom have the incentive to remove the necessary evil of human beings from their processes. Ironically, these classes are perhaps the least qualified to decide whether generative AI can replace swathes of their workforce without serious unforeseen consequences. To make matters worse, those consequences might be as distal as too many NEETs in the system such that no one can afford to buy their crap anymore; good luck getting anyone focused on making it to the next financial quarter to give a shit about that. And that's really all that matters at the end of the day; what leadership believes, whether or not they are in touch with reality.
What we do know is this. If AI keeps improving at the current rate it’s improving then it will eventually hit a point where we don’t need software engineers. That’s inevitable. The way for it to not happen is for this technology to hit an impenetrable wall.
This wave of AI came so fast that there are still stubborn people who think it’s a stochastic parrot. They missed the boat.
But something tells me “this time is different” is different this time for real.
Coding AIs design software better than me, review code better than me, find hard-to-find bugs better than me, plan long-running projects better than me, make decisions based on research, literature, and also the state of our projects better than me. I’m basically just the conductor of all those processes.
Oh, and don't ask about coding. If you use AI for tasks above, as a result you'll get very well defined coding task definitions which an AI would ace.
I’m still hired, but I feel like I’m doing the work of an entire org that used to need twenty engineers.
From where I’m standing, it’s scary.