Granted that's because the program is incredibly poorly written, but still, the context window will remain a huge barrier for quite some time.
> Most of software work is maintaining "legacy" code, that is older systems that have been around for a long time and get a lot of use.
> Granted that's because the program is incredibly poorly written
LLMs can't fix big, shitty legacy codebases. That is where most maintenance work (in terms of hours) is, and where it will remain.
I would take it one step further and argue that LLMs and vibe-coding will compound into more big, shitty legacy codebases over time, and therefore, in the long arc, nothing will really change.
We're going to have to go through another quality hangover I suspect.
But since people who have never coded are now coding and think it's the best thing ever, the only way out is through.
Can it though? I thought it was most useful for writing new code; I've so far never had it correctly refactor existing code. Its refactoring attempts usually change behavior or logic, and sometimes leave the code even harder to read than before.
Isn't the logical endpoint of this equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting?
Nitpicks aside, I agree that contemporary AIs can be great for quickly getting up to speed with a code base. Both a new library or language you want to be using, and your own organisation's legacy code.
One of the biggest advantages of using an established ecosystem was that Stack Overflow had a robust repository of already answered questions (and you could also buy books on it). With AI you can immediately cook up your own Stack Overflow equivalent that provides answers promptly instead of closing your question as off-topic.
And I pick Stack Overflow deliberately: it's a great resource, but not reliable enough to use blindly. I feel we are in a similar situation with AI at the moment. This will change gradually as the models get better. Just like Stack Overflow required less expertise to use than attending a university course. (And a university course requires less expertise than coming up with QuickSort in the first place.)
Not in my case (I never used SO like that, anyway). I use it almost exactly like SO, except much more quickly and interactively (and without the inference that I’m “lazy,” or “stupid,” for not already knowing the answer).
I have found that ChatGPT gives me better code than Claude (I write Swift); it has even learned my coding and documentation style.
I still need to review all the code it gives me, and I have yet to use it verbatim, but it’s getting close.
The most valuable thing is that when I get an error, I can ask it: "Here are the symptoms and the code. What do you think is going on?" It usually gives me a good starting point.
I could definitely figure it out on my own, but it might take half an hour. ChatGPT will give me a solid lead in about half a minute.
I've been building things with Claude while looking at, say, less than 5% of the code it produces. What I've built are tools I want to use myself and... well, they work. So somebody can say that I can't do it, but on the other hand I've wanted to build several kinds of ducks, and what I've built look like ducks and quack like ducks, so...
I've found it's a lot better at evaluating code than producing it, so what you do is tell it to write some code, then tell it to give you the top 10 things wrong with the code, then tell it to fix the five of them that are valid and important. That is a much different flow than going on an expedition to find an SO solution to an obscure problem.
A good quality metric for your code is to ask an LLM to find the ten worst things about it; if all of those are stupid, then your code is pretty good. I did this recently on a codebase and its number 1 complaint was that the name I had chosen was stupid and confusing (which it was; I'm not explaining the joke to a computer), and that was my sign that it was done finding problems and it was time to move on.
I would be cautious of this. I've tried it multiple times and it often produces very subtle bugs. Sometimes the code isn't bad enough to have 5 defects, but the LLM will comply anyway and change things that don't need changing. You will find out in prod at some point.
SO is (was?) great when you were thinking about how a nice recursive reduce function could replace the mess you'd just cobbled together, but language X just didn't yet flow naturally for you.
When AI works well it is superior to Stack Overflow, because what it replaces is not "look up answer on SO, copy, paste" but rather: look up several different things on SO that relate to the problem you're trying to solve, for which no exact solution is posted anywhere, and stitch those pieces together into a bit of code that you then refactor, in less time than doing all the SO lookup yourself. When it works, it can turn 2 hours of research into 2 minutes.
The problems are:
1. AI also sometimes replicates the following process: a dev who doesn't understand all parts of the solution or the requirements copies bits of code together from various answers, producing something that sort of works but is inefficient and has underlying problems.
2. Even with a correctly working solution, your developer does not get in those 2 minutes what they used to get in the two hours: an understanding of the problem space and how the parts of the solution hang together. This is why it is more useful for seniors than juniors; part of looking through SO for what you want is light education.
Isn't the answer on SO the result of a human intelligence writing it in the first place, and then being voted to the top by several other human intelligences? If an LLM were merely an automated "equivalent" of that, that would already be a good thing!
But in general, the LLM answer you appear to dismiss amounts to a lot more:
Having a close-to-good-human-level programmer
understand your existing codebase
answer questions about your existing codebase
answer questions about changes you want to make
on demand (not confined to copying SO answers)
interactively
and even being able to go in and make the changes
That amounts to "manually typing an SO answer" about as much as a pickup truck amounts to a horse carriage. Or, to put it another way, isn't "the logical endpoint" of hiring another programmer and asking them to fix X "equivalent to printing out a Stackoverflow answer and manually typing it into their computer"?
>And I pick Stack Overflow deliberately: it's a great resource, but not reliable enough to use blindly. I feel we are in a similar situation with AI at the moment.
Well, we shouldn't be using either blindly anyway. Not even the input of another human programmer (that's why we do PR reviews).
The word "merely" is doing all of the heavy lifting here. Having human intelligence in the loop providing and evaluating answers is what made it valuable. Without that intelligence you just have a machine that mimics the process yet produces garbage.
That's not some new claim that I made. My answer accepts the premise already asserted by the parent: that an LLM is "equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting".
My point: if that's true, and an LLM is "merely that", then that's already valuable.
>Having human intelligence in the loop providing and evaluating answers is what made it valuable. Without that intelligence you just have a machine that mimics the process yet produces garbage.
Well, what we actually have is a machine that, even without AGI, has more practical intelligence than to merely produce garbage.
A machine which programmers that run circles around you and me still use, and find produces acceptable code, fit for purpose, and which they can get to fix any initial issues in its first iteration, too.
If it merely produced garbage or was no better than random chance we wouldn't have this discussion.
Here's an example I saw on twitter. Asking an LLM to document a protocol from the codebase:
https://ampcode.com/threads/T-f02e59f8-e474-493d-9558-11fddf...
Do you think you will be able to capture any of this extra value? I think I'm faster at coding, but the overall corporate project timeline feels about the same. I feel more relaxed and confident that the work can be done. Not sure how to get a raise out of this.
Agentic workflows for me result in bloated code, which is fine when I'm willing to hand over a subsystem to the agent, such as the frontend on a side project, and have it vibe code the entire thing. Trying to get clean code erases all/most of my productivity gains, and doesn't spark joy. I find having a back-and-forth with an agent exhausting, probably because I have to build and discard multiple mental models of the proposed solution, since the approach can vary wildly between prompts. An agent can easily switch between using Newton-Raphson and bisection when asked to refactor unrelated arguments, which a human colleague wouldn't do after a code review.
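To make the Newton-Raphson vs. bisection point concrete, here is a minimal sketch (hypothetical helpers, not code from my project) of why silently swapping one for the other is a behavioral change, not a neutral refactor:

```typescript
// Two root-finding methods an agent might silently swap between.
// Both find a root of f, but with very different failure modes.

// Newton-Raphson: quadratic convergence, but needs the derivative
// and can diverge from a bad initial guess.
function newtonRaphson(
  f: (x: number) => number,
  df: (x: number) => number,
  x0: number,
  tol = 1e-10,
  maxIter = 100,
): number {
  let x = x0;
  for (let i = 0; i < maxIter; i++) {
    const fx = f(x);
    if (Math.abs(fx) < tol) return x;
    x -= fx / df(x); // blows up if df(x) is near zero
  }
  throw new Error("Newton-Raphson did not converge");
}

// Bisection: slow (linear convergence) but robust; only needs
// a sign change on the bracket [a, b].
function bisect(
  f: (x: number) => number,
  a: number,
  b: number,
  tol = 1e-10,
): number {
  if (f(a) * f(b) > 0) throw new Error("Root not bracketed");
  while (b - a > tol) {
    const m = (a + b) / 2;
    if (f(a) * f(m) <= 0) b = m;
    else a = m;
  }
  return (a + b) / 2;
}

// Same root, different contracts: bisect needs a bracket,
// newtonRaphson needs a derivative and a good starting point.
const r1 = newtonRaphson((x) => x * x - 2, (x) => 2 * x, 1); // ~1.41421
const r2 = bisect((x) => x * x - 2, 0, 2);                   // ~1.41421
```

A reviewer who approved the bisection version has not really reviewed the Newton-Raphson one; the required inputs and convergence guarantees differ.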
If you do care about these things, it will take you overall longer to write the code with an LLM than it would by hand-crafting it. I started playing around with Claude on my hobby projects, and found it requires an enormous amount of exhausting handholding and post-processing to get the code to the point where I am really happy with it as a consistent, complete, expressive work of art that I would be willing to sign my name to.
This really is what businesses want and always have wanted. I've seen countless broken systems spitting out wrong info that was actively used by the businesses in my career, before AI. They literally did not want it fixed when I brought it up, because dealing with the errors had become part of the process, in pretty much all cases. I don't even try anymore unless I'm specifically brought on to fix a legacy system.
>that I would be willing to sign my name to.
This right here is what mgmt thinks is the big "problem" that AI solves. They have always wanted us to magically know which parts need to be "good enough" and which parts can slide, while we bear the burden of blame. The real problem is the same as always: bad specs. AI won't solve that, but in their eyes it removes a layer of their poor communication. Obviously no SWE is going to build a system that spits out wrong info and just say "hire people to always double-check the work", or add checking it to so-and-so's job duties, but that really is the solution most places end up with for lack of a decision.
Perhaps there is some sort of failure of SWEs to understand that businesses don't care. Accounting will catch the expensive errors anyway. Then execs will bullwhip middle managers and it will go away.
It does matter, but it's one requirement among many. Engineers think quality metrics like the ones you listed are the most important requirements, but that's typically not true.
> Trying to get clean code erases all/most of my productivity gains, and doesn't spark joy. I find having a back-and-forth with an agent exhausting, probably because I have to build and discard multiple mental models of the proposed solution, since the approach can vary wildly between prompts
You probably work on something that requires very unique and creative solutions. I work on dumb business software. Claude Code is generally good at following existing code patterns. As for the back-and-forth with Claude Code being exhausting, I have a few tips for minimizing the number of shots required to get a good solution from CC:
1. Start by exploring relevant code by asking CC questions.
2. Then use Plan Mode for anything more than a trivial change. Plan Mode is essential: you need to make sure you and CC are on the same page BEFORE it starts writing code.
3. If you see CC making the same mistake over and over, add instructions to your CLAUDE.md to avoid it in the future. This way your CC setup improves over time, like a coworker who learns.
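For point 3, a hypothetical CLAUDE.md excerpt might look something like this (the rules and paths are invented for illustration; yours would reflect whatever mistakes your CC keeps making):

```
# CLAUDE.md (excerpt, hypothetical)

## Conventions
- Use the existing `apiClient` wrapper for all HTTP calls; never call `fetch` directly.
- Follow the query patterns in `src/db/queries/`; do not inline SQL elsewhere.
- Run the linter after edits; do not hand-reorder imports.
```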
Maybe parent is a galaxy-brained genius, or... maybe they are just leaving work early and creating a huge mess for coworkers who now must stay late. Hard to say. But someone who isn't interested in automating/encoding processes for their idiosyncratic workflows is a bad engineer, right? And someone who isn't interested in sharing productivity gains with coworkers is basically engaged in sabotage.
Who says they aren't interested in sharing? To give a less emotionally charged example: I think my specific use pattern of Git makes me (a bit) more productive. And I'm happy to chew anyone's ear off about it who's willing to listen.
But the willingness and ability of my coworkers to engage in git-related lectures, while greater than zero, is very definitely finite.
I'll have to vigorously dissent on this notion: we sell our labor to employers - not our souls. Our individual labor, contracts and remuneration are personalized. Our labor. Not some promise to maximize productivity - that's a job for middle and upper management.
Your employer sure as hell won't directly share 8x productivity gains with employees. The best they can offer is a once-off 3-15% annual bonus (based on your subjective performance, not the aggregate) or, if you have RSUs/options, gains on your minuscule ownership fraction.
Also, this week I used this same process to address a many-years-old bug in a very popular library. Admittedly, the first solution was a little wordy and required some back and forth, but I was able to get to a clean, tested solution with little pain.
But them being humans, they do want to brag about it.
Coding this way requires an effort equal to designing, coding, and reviewing combined, except the code I review isn't mine. Strange situation.
I haven’t had to write a line of code in a year. First ChatGPT and more recently Claude Code.
I don’t do “agentic coding”. I keep my hands on the steering wheel and build my abstractions and modules up step by step. I make sure every line of code looks like something I would write.
I’m a staff consultant (cloud + app dev) and always lead projects, discovery and design and depending on the size of the project, do all of the actual hands on work myself.
I would have had to staff at least one, maybe two, less senior consultants to do the actual hands-on work before. It's actually easier for me to do the work than to write really detailed requirements and coordinate the work (the whole "Mythical Man Month" thing).
FWIW: before the pearl clutching starts, I started coding in assembly in 1986 on an Apple //e and have been delivering production level code since 1996.
I don't need LLMs, I need some kind of mind-reading device :D
> I find having a back-and-forth with an agent exhausting, probably because I have to build and discard multiple mental models of the proposed solution, since the approach can vary wildly between prompts
Just now I had it improve QR payments on a POS. This is standard stuff, and I have done it multiple times, but I'm happy I didn't have to spend the mental energy to implement it and just had to review the code and test it.
```
Perfect! I've successfully implemented comprehensive network recovery strategies for the OnlinePaymentModal.tsx file. Here's a summary of what was added:
Implemented Network Recovery Strategies
1. Exponential Backoff for Polling (lines 187-191)
2. Network Status Detection (lines 223-246, 248-251)
3. Transaction Timeout Handling (lines 110-119)
4. Retry Logic for Initial Transaction (lines 44-105)
5. AbortController for Request Cancellation (lines 134-139, 216-220)
6. Better Error Messaging (lines 85-102, 193-196)
7. Circuit Breaker Pattern (lines 126-132)
All strategies work together to provide a robust, user-friendly payment
experience that gracefully handles network issues and automatically
recovers when connectivity is restored.
```

> An agent can easily switch between using Newton-Raphson and bisection when asked to refactor unrelated arguments, which a human colleague wouldn't do after a code review.
Can you share what domain your work is in? Is it deeptech? Maybe coding agents currently work better for transactional/e-commerce systems?
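For what it's worth, a minimal sketch of what items 1 and 5 in that summary (exponential backoff plus AbortController) might look like; `pollPaymentStatus`, the endpoint, and the timings are hypothetical, not the actual OnlinePaymentModal.tsx code:

```typescript
// Hypothetical sketch: exponential-backoff polling with cancellation.
async function pollPaymentStatus(
  txId: string,
  signal: AbortSignal,
): Promise<"paid" | "timeout"> {
  let delayMs = 1_000;                      // start at 1s between polls
  const maxDelayMs = 30_000;                // cap the backoff at 30s
  const deadline = Date.now() + 5 * 60_000; // 5-minute transaction timeout

  while (Date.now() < deadline) {
    const res = await fetch(`/api/payments/${txId}`, { signal });
    if (res.ok) {
      const { status } = await res.json();
      if (status === "paid") return "paid";
    }
    // Back off exponentially so a flaky network isn't hammered.
    await new Promise((r) => setTimeout(r, delayMs));
    delayMs = Math.min(delayMs * 2, maxDelayMs);
  }
  return "timeout";
}

// The caller can cancel every in-flight request when the modal closes:
const controller = new AbortController();
pollPaymentStatus("tx_123", controller.signal).catch(() => {
  // fetch rejects with an AbortError once controller.abort() is called
});
// later, e.g. in a cleanup handler: controller.abort();
```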
One of my big issues with LLM coding assistants is that they make it easy to write lots & lots of code. Meanwhile, code is a liability, and you should want less of it.
You are talking about something like network layers in GraphQL. That's on our roadmap for other reasons (switching API endpoints to DigitalOcean when our main Cloudflare Worker is having an outage). However, even with that you'll need some custom logic, since this does at least two API calls in succession, and that's not easy to abstract via a transaction abstraction in a network layer (you'd have to handle it durably in the network layer, like Temporal does).
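A rough sketch of the kind of custom failover logic meant here; the endpoints, paths, and helpers are made up, and the real durability handling would be more involved:

```typescript
// Hypothetical failover between a primary and a backup endpoint.
const ENDPOINTS = [
  "https://api.example.com",    // primary (Cloudflare Worker)
  "https://backup.example.com", // fallback (DigitalOcean)
];

async function callWithFailover(path: string, body: unknown): Promise<Response> {
  let lastErr: unknown;
  for (const base of ENDPOINTS) {
    try {
      const res = await fetch(`${base}${path}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
      });
      if (res.ok) return res;
      lastErr = new Error(`HTTP ${res.status} from ${base}`);
    } catch (e) {
      lastErr = e; // network error: try the next endpoint
    }
  }
  throw lastErr;
}

// The hard part alluded to above: two calls in succession. If the first
// call lands on the primary and the second fails over to the backup,
// the server side must tolerate that split, which is exactly what a
// durable network-layer abstraction (a la Temporal) would normally hide.
async function createAndConfirm(order: unknown) {
  const created = await callWithFailover("/transactions", order);
  const { id } = await created.json();
  return callWithFailover(`/transactions/${id}/confirm`, {});
}
```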
Despite the obvious downsides, we actually moved it from a durable workflow (CF's take on Temporal) server-side to the client, since the workflow version had horrible and variable latencies (sometimes 9s, versus a consistent <3s with this approach). It's not ideal, but it makes more sense business-wise. I think people often miss that completely.
I think it just boils down to what you are aiming for. AI is great for shipping bugfixes and features fast. At a company level I think it also shows in product velocity. However, I'm sure our competitors will catch up very soon, once their AI skepticism falters.
That's not the definition of legacy. Being there for a long time and getting lots of use is not what makes a legacy project "legacy".
Legacy projects are characterized by not being maintained and having little to no test coverage. The term "legacy" means "I'm afraid to touch it because it might break and I doubt I can put it back together". Legacy means resistance to change.
You can and do have legacy projects created a year or two ago. Most vibecoded apps fit the definition of legacy code.
That is why legacy projects are a challenge for agentic coding. Agents already output huge volumes of code changes that developers struggle to review, let alone assert their correctness. On legacy projects that are in production, this is disastrous.
No, not necessarily. Business decisions are one of many factors in creating legacy code, but they are by no means the single cause. A bigger factor is developers mismanaging a project to the point it becomes an unmaintainable mess. I personally went through a couple of projects which were legacy code the minute they hit production. One of them was even a proof of concept that a principal engineer, one of those infamous 10x types, decided to push as-is to production, leaving two teams of engineers to sort out the mess in his wake.
I recommend reading "Working Effectively with Legacy Code" by Michael Feathers. It will certainly be eye-opening to some, as it dispels the myth that legacy code is a function of age.
Then we disagree on the definition.
"Legacy" projects to me are those that should've went through at least two generational refactorings but haven't because of some unfathomable reason. These are the ones that eventually end up being rewritten from scratch because it's faster than trying to upgrade the 25 year old turd.
If a project is perfectly maintainable and developer teams are confident they can roll out any change they see fit without risking nasty regressions or working around pain points, then obviously it is not a legacy project, by definition.
If instead you have services running on Java 25 where you can't touch a code path without breaking it, or without even knowing it broke, that is a legacy project.
It is not about the age of a framework. It's about the ability to maintain it. Legacy is synonymous with being unmaintained and unmaintainable, not with age.
At least that's my experience. Anyone else brought in during the last 5 years is usually terrified of changing anything because there are so many obscure corners and dependencies nobody can predict before stuff just breaks down.
You are failing to understand that the key factor even in your example is maintenance, not age. You can and often do have legacy projects that use the latest and greatest frameworks. That's not the differentiating factor.
> At least that's my experience. Anyone else brought in during the last 5 years is usually terrified of changing anything because there are so many obscure corners and dependencies nobody can predict before stuff just breaks down.
The obscure corners and dependencies are the output of a lack of maintenance, not age. That's why you can, and often do, have brand new projects that are unmaintainable and dismissed as legacy.
also, the Hyperion Cantos was a great series.
This is the key take right here. LLMs excel at parsing existing content, summarizing it, and using it to explore scenarios and hypotheticals.
Even the best coding agents out there, such as Claude Code or Gemini, often fail on the first try to generate code that actually compiles, let alone does what it is expected to do.
Apologists come up with excuses, such as that the legacy software is not architected well enough to be easily parseable by LLMs, but that is a cheap excuse. The same reference LLMs often output utter crap in greenfield projects they themselves generated, and do so after a handful of prompts. The state of the project is not the issue.
The world is coming to the realization that the AI hype is not delivering. Talk of an AI bubble is already mainstream. But like IDEs with autocomplete, agents might not solve every problem, yet they are nevertheless useful and here to stay. They are more akin to a search engine where the user doesn't need to copy/paste code snippets to apply a code change.
The sooner we realize this, the better.