LLMs remove the easy work from a junior dev's task pile. That will make it a lot more difficult for them to do the actual hard work required of a dev; they skip the stepping-stones and critical-thinking phase of their careers.
Senior devs are senior because they’ve done the easy things so often it’s second nature.
Tools can't replace human understanding of a problem, and that understanding is the foundation for effective growth and maintenance of code.
Maybe an AI would be better on the easy cases: slightly faster and cheaper. But it would mean that she would never develop the skills to tackle the problems that AI has no idea how to handle.
But that’s the point. The feedback loop is faster; AI is much worse at coping with poor code than humans are, so you quickly learn to keep the codebase in top shape so the AI will keep working. Since you saved a lot of time while coding, you’re able to do that.
That doesn’t work for developers who don’t know what good code is, of course.
I disagree. I expect that companies will try to overcome AI-generated technical debt by throwing more AI at the problem.
"If the code doesn't work just throw it away and vibe code new code to replace it"
It's something that is... sort of possible, I guess, but it feels so shortsighted to me.
Maybe I just need to try and adjust to a shortsighted world
They could iterate with their LLM and ask it to be more concise, to give alternative solutions, and use their judgement to choose the one they end up sending to you for review. Assuming of course that the LLM can come up with a solution similar to yours.
Still, in this case, it sounds like you were able to tell within 20 seconds that their solution was too verbose. Maybe declining the PR, mentioning the extra field, and leaving it up to them to implement the two functions (or equivalent) that you implemented yourself would have been fine? Meaning it wasn't really such a big waste of time? And in the process, your dev might have learned to use this tool better.
These tools are still new and keep evolving such that we don't have best practices yet in how to use them, but I'm sure we'll get there.
> Assuming of course that the LLM can come up with a solution similar to yours.
I have idle speculations as to why these things happen, but I think in many cases they actually can't. They also can't tell the junior devs that such a solution might exist if they just dig further. Both of these seem solvable, but the fix looks like "more, bigger models, probed more deeply", and that's an expensive fix that dings the margins of LLM providers. I think LLM providers will keep their margins, ship models with notable gaps and flaws, and let software companies and junior devs sort it out on their own.
In part this is because the process of development leans less hard on the discipline of the devs, the humans; the code itself becomes more formal.
I regularly have a piece of vibe-coded code in a strongly typed language, and it does not compile! (Would that count as a hallucination?) I have thought many times: in Python/JS/Ruby this would just run, and only produce a runtime error in some weird case that likely only our customers in production would find...
I'm a proponent of functional programming in general, but I don't think either types (of any "strength") or functional programming makes it easier or harder to write bad code. Sure, types might help avoid easy syntax errors, but they can also give the developer false confidence: "if it compiles it works :shrug:". Instruct the LLM to keep reworking the solution until it compiles, and you'll get the same false confidence if there is nothing else asserting the correct behavior, not just the syntax.
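To make the false-confidence point concrete, here's a tiny invented example (not from anyone's actual codebase, just a sketch): the annotations satisfy a type checker, the code "compiles", and the behaviour is still wrong; only an assertion on behaviour catches it.

    def apply_discount(price: float, percent: float) -> float:
        # Bug: adds the discount instead of subtracting it, yet every type checks out.
        return price * (1 + percent / 100)

    def test_apply_discount() -> None:
        # A behavioural assertion is what actually catches the bug.
        assert apply_discount(100.0, 10.0) == 90.0  # fails: the function returns 110.0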
> in Python/JS/Ruby this would just run
I'm not sure how well versed you are with dynamic languages, especially when writing code for others, but in 99% of cases you'll cover at the very least all the happy paths with unit tests, and if you're planning on putting it in a production environment, you'll also cover the "sad" paths. Using LLMs or not shouldn't change that very basic requirement.
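As a rough sketch of what that looks like (parse_age is an invented helper, and this assumes pytest): one test for the happy path, one for a sad path. In a dynamic language these tests are what surfaces the errors a compiler would otherwise have flagged.

    import pytest

    def parse_age(value):
        age = int(value)  # raises ValueError on non-numeric input
        if age < 0:
            raise ValueError("age must be non-negative")
        return age

    def test_parse_age_happy_path():
        assert parse_age("42") == 42

    def test_parse_age_sad_path():
        with pytest.raises(ValueError):
            parse_age("not a number")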
Imagine if wat (https://www.destroyallsoftware.com/talks/wat) appeared on the internet, and execs took it seriously and suddenly asked people to actually, explicitly make everything into JS.
This is how it sounds when I hear executives pushing for things like "vibe-coding".
> More often than not the LLM responses just start degrading massively to the point where they become useless and you need to start over
Yeah, this is true. The trick is to never go beyond one response from the LLM. If it gets it wrong, start over immediately with a rewritten prompt so it gets it right on the first try. I treat "the LLM got it wrong" as "I didn't make the initial user/system prompt good enough", not as "now I'm gonna add extra context to try to steer it right".
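Roughly, the loop I mean looks like this (a loose sketch; ask_llm, is_acceptable and rewrite_prompt are hypothetical stand-ins, not real APIs): each attempt is a single fresh call, and a failure feeds back into a better prompt, not a longer conversation.

    def ask_llm(prompt: str) -> str:
        # Hypothetical stand-in; wire up whatever LLM client you actually use.
        raise NotImplementedError

    def solve(task: str, is_acceptable, rewrite_prompt, max_attempts: int = 3) -> str:
        prompt = task
        for _ in range(max_attempts):
            answer = ask_llm(prompt)               # exactly one response per attempt
            if is_acceptable(answer):              # your own review, not the model's
                return answer
            prompt = rewrite_prompt(task, answer)  # rewrite the prompt, not the thread
        raise RuntimeError("the prompt itself needs rethinking")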
It’s hard to predict how this plays out IMO. Especially since this industry (broadly speaking) doesn’t believe in training juniors anymore.
And a third AI to review the pseudocode, I guess.
More seriously, I think that this is generally the correct approach: create a script that the AIs can follow one step at a time; update the script when necessary.
As an example, I asked one of my devs to implement a batching process to reduce the number of database operations. He presented extremely robust, high-quality code and unit tests. The problem was that it was MASSIVE overkill.
The AI generated a new service class, a background worker, several hundred lines of code in the main file, and entire unit test suites.
I rejected the PR and implemented the same functionality by adding two new methods and one extra field.
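For what it's worth, the shape of that minimal version is usually something like the sketch below (hypothetical, not my actual diff; OrderWriter and bulk_insert are made-up names standing in for the existing class and whatever the data layer offers): one extra field buffering pending records, one method that appends to it, one method that writes the whole batch in a single round trip.

    class OrderWriter:
        def __init__(self, db, batch_size: int = 100):
            self.db = db
            self.batch_size = batch_size
            self._pending = []                  # the one extra field

        def add(self, record) -> None:          # new method 1: buffer instead of writing
            self._pending.append(record)
            if len(self._pending) >= self.batch_size:
                self.flush()

        def flush(self) -> None:                # new method 2: one bulk write
            if self._pending:
                self.db.bulk_insert(self._pending)
                self._pending.clear()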
Now I often hear comments that AI can generate exactly what I want if I just use the correct prompts. OK, how do I explain that to a junior dev? How do they distinguish between "good" simple and "bad" simple (or complex)? Furthermore, in my own experience, LLMs tend to pick up on key phrases or technologies, then build their own context around what they think you need (e.g. "Batching", "Kafka", "event-driven", etc.). By the time you've refined your questions to the point where the LLM generates something that resembles what you want, you realise that you've basically pseudo-coded the solution in your prompt - if you're lucky. More often than not the LLM responses just start degrading massively to the point where they become useless and you need to start over. This is also something that junior devs don't seem to understand.
I'm still bullish on AI-assisted coding (and AI in general), but I'm not a fan at all of the vibe/agentic coding push by IT execs.