So if you need to avoid GC issues, or need robust type safety, or whatever it is that gives you an edge in a certain industry or scenario, you can't just switch to the vibe tool of choice without either (best case) paying extra to make up for the inefficiency or (worst case) shipping more failures than your customers will tolerate.
But this means the gap between the "hard" work and the "easy" work may become larger - compensation included. Probably most notably at FAANG companies, where people are brought in expected to be able to do "hard" work and then frequently given relatively easy CRUD work on low-ROI ancillary projects, but with higher $$$$ than that work would pay anywhere else.
And the places currently happy to hire disaffected ex-FAANG engineers who realized they were being wasted on polishing widgets may start having more hiring difficulty as the pipeline dries up. Like trying to hire for assembly or COBOL today.
I use some AI tools and sometimes they're fine, but I won't, in my lifetime anyway, hand everything over to an AI. Not out of some fear or anything, but because, even purely as a hobby, I like creating things from scratch and I like working out problems. Why would I let that go?
It's really freeing to say "Well, if the linter and the formatter don't catch it, it doesn't matter". I always update lint settings (writing new rules if needed) based on nit PR feedback, so the codebase becomes easier to review over time.
It's the same principle as any other kind of development - let the machine do what the machine does well.
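To make the "turn nit feedback into lint rules" idea concrete, here is a minimal sketch of what that can look like with ESLint's flat config written in TypeScript. The scenario is hypothetical: the recurring PR nit is "use the project logger, not console.log", and the plugin name ("local") and rule name ("no-console-log") are made up for illustration; it assumes ESLint 9+ with TypeScript config support.

```ts
// eslint.config.ts - hypothetical example of encoding a recurring PR nit
// ("use the project logger instead of console.log") as a custom lint rule.
import type { Rule } from "eslint";

// A small custom rule that flags calls of the form console.log(...).
const noConsoleLog: Rule.RuleModule = {
  meta: {
    type: "suggestion",
    messages: {
      noConsoleLog: "Use the project logger instead of console.log.",
    },
  },
  create(context) {
    return {
      CallExpression(node) {
        const callee = node.callee;
        if (
          callee.type === "MemberExpression" &&
          callee.object.type === "Identifier" &&
          callee.object.name === "console" &&
          callee.property.type === "Identifier" &&
          callee.property.name === "log"
        ) {
          context.report({ node, messageId: "noConsoleLog" });
        }
      },
    };
  },
};

export default [
  {
    // Register the rule under a local plugin namespace and enforce it,
    // so reviewers never have to leave this nit on a PR again.
    plugins: { local: { rules: { "no-console-log": noConsoleLog } } },
    rules: { "local/no-console-log": "error" },
  },
];
```

Once a rule like this is in the config, the feedback loop moves from humans repeating the same comment to CI failing the build, which is exactly the "let the machine do what the machine does well" principle.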
For now, LLMs still suffer from hallucination and a lack of generalizability. The large amount of code they generate is not necessarily a benefit; sometimes it's technical debt.
LLMs are good for quick, open-ended prototype web applications, but if we need a stable, consistent, maintainable, secure framework, or scientific computing, pure LLMs are not enough. One can't vibe-code everything without checking the details.
How do we get the LLM to gain knowledge of this new language when we have no example usage of it?
I hear people say things like, “AI isn’t coming for my job because LLMs suck at [language or tech stack]!”
And I wonder, does that just mean that other stacks have an advantage? If a senior engineer with Claude Code can solve the problem in Python/TypeScript in significantly less time than you can solve it in [tech stack] then are you really safe? Maybe you still stack up well against your coworkers, but how well does your company stack up against the competition?
And then the even more distressing thought accompanies it: I don’t like the code that LLMs produce because it looks nothing like the code I write by hand. But how relevant is my handwritten code becoming in a world where I can move 5x faster with coding agents? Is this… shitty style of LLM-generated code actually easier for coding agents to understand?
Like I said, I don’t endorse either of these ideas. They’re just questions that make me uncomfortable because I can’t definitively answer them right now.