I lowkey disagree. I think good experienced devs will be pressured to write worse software or be bottlenecked by having to deal with bad software. Depends on company and culture of course. But consider that you as an experienced dev now have to explain things that go completely over the head of the junior devs, and most likely the manager/PO, so you become the bottleneck, and all pressure will come down on you. You will hear all kinds of stuff like "80% there is enough" and "don't let perfect be the enemy of good" and "you're blocking the team, we have a deadline" and that will become even worse. Unless you're lucky enough to work in a place with actually good engineering culture.
I love that thread because it clearly shows both the benefits and pitfalls of AI codegen. It saved this expert a ton of time, but the AI also created a bunch of "game over" bugs that a more junior engineer probably would have checked in without a second thought.
Even looking strictly at coding, the hard thing about programming is not writing the code. It is understanding the problem and figuring out an elegant and correct solution, and LLMs can't replace that process. They can help with ideas though.
Not really. This "review" was stretching to find things to criticize in the code, and exaggerated the issues he found. I responded to some of it: https://www.hackerneue.com/item?id=44217254
Unfortunately I think a lot of people commenting on this topic come in with a conclusion they want to reach. It's hard to find people who are objectively looking at the evidence and drawing conclusions with an open mind.
Like his first argument was that you didn't have a test case covering every single MUST and MUST NOT in the spec?? I would like to introduce him to the real world - but more to the point, there was nothing in his comments that specifically dinged the AI, and it was just a couple pages of unwarranted shade that was mostly opinion with 0 actual examples of "this part is broken".
> Unfortunately I think a lot of people commenting on this topic come in with a conclusion they want to reach. It's hard to find people who are objectively looking at the evidence and drawing conclusions with an open mind.
Couldn't agree more, which is why I really appreciated the fact that you went to the trouble to document all of the prompts and make them publicly available.
I won't say that you have converted me, but maybe I'll give LLMs a shot and judge for myself if they can be useful to me. Thanks, and good luck!
https://github.com/cloudflare/workers-oauth-provider/securit...
You can certainly make the argument that this demonstrates risks of AI.
But I kind of feel like the same bug could very easily have been made by a human coder too, and this is why we have code reviews and security reviews. This exact bug was actually on my list of things to check for in review, I even feel like I remember checking for it, and yet, evidently, I did not, which is pretty embarrassing for me.
The promise then was similar: "non-programmers" could use a drag-and-drop, WYSIWYG editor to build applications. And, IMO, VB was actually a good product. The problem is that it attracted "developers" who were poor/inexperienced, and so VB apps developed a reputation for being incredibly janky and bad quality.
The same thing is basically happening with AI now, except it's not constrained to a single platform, but instead it's infecting the entire software ecosystem.
Greed (wanting an enterprise alternative to Java and C++ Builder) killed VB, not the community.
Yes, there were a lot of crappy, barely functioning programs made in it. But they were programs that wouldn't have existed otherwise. E.g. for small businesses automating things, VB was amazing, and even if the program was barely functional it was better than nothing.
Large companies can be a red tape nightmare for getting anything built. The process overload will kill simple non-strategic initiatives. I can understand and appreciate less technical people who grab whatever tool they can to solve their own problems when they run into blockers like that. Even if they don't solve it in the best way possible according to experts in the field. That feels like the hacker spirit to me.
You’d be surprised how little effort it is compared to having to deal with a massive outage. E.g. you do eventually have to think about backup power.
I think we will need to find a way to communicate “this code is the result of serious engineering work and all tradeoffs have been thought about extensively” and “this code has been vibecoded and no one really cares”. Both sides of that spectrum have their place and absolutely will exist. But it’s dangerous to confuse the two
Wrote it initially as a joke, but maybe it's not that dumb? I already do it on LinkedIn. I'm job hunting and post slop from time to time to game LinkedIn algorithms to get better positioning among other potential candidates. And so as not to waste anybody's time, I leave in the emotes at the beginning of sentences, just so people in the know can tell it's slop.
Why do you believe we should "turn our back on AI"? Have you used it enough to realize what a useful tool it can be?
Wouldn't it make more sense to learn to turn our backs on unhelpful uses of AI?
Take your photos example. Sure, the number of photos taken has exploded, but who cares if there are now reams and reams of crappy vacation photos - it's not like anyone is really forced to look at them.
With AI-generated code, I think it's actually awesome for small, individual projects. And in capable hands, it can be a fantastic productivity enhancer in the enterprise. But my heart bleeds for the poor sap who is going to eventually have to debug and clean up the mountains of AI code being checked in by folks with a few months/years of experience.
I have found time and again that enough technological advancement will make previously difficult things easy that when it's time to clean up the old stuff, it's not such a huge issue. Especially so if you do not need to keep a history of everything and can start fresh. This probably would not fly in a huge corp but it's fine for small/medium businesses. After all, whole companies disappear and somehow we live on.
Also if you're organizationally changing the culture to force people to put more effort in writing the code, why are you even organizationally using LLMs...?
Simply hire people who score high on the Conscientiousness, but low on the Agreeableness personality trait. :-)
Yeah, OK, I guess you have to be a bit less unapologetic than Linux kernel maintainers in this case, but you can still shift the culture towards more careful PRs I think.
> why are you even organizationally using LLMs
Many people believe LLMs make coders more productive, and given the rapid progress of gen AI it's probably not wise to just dismiss this view. But there need to be guardrails to ensure the productivity is real and not just creating liability. We could live with weaker guardrails if we can trust that the code was in a trusted colleague's head before appearing in the repo. But if we can't, I guess stronger guardrails are the only way, aren't they?
But when I actually sit down and think it through, I’ve wasted multiple days chasing down subtle bugs that I never would have introduced myself. It could very well be that there’s no productivity gain for me at all. I wouldn’t be at all surprised if the numbers showed that was the case.
But let’s say I am actually getting 20%. If this technology dramatically increases the output of juniors and mid-level technical tornadoes, that’s going to easily erase that 20% gain.
I’ve seen codebases that were dominated by mid-level technical tornadoes and juniors; no amount of guardrails could ever fix them.
Until we are at the point where no human has to interact with code (and I’m skeptical we will ever get there short of AGI) we need automated objective guardrails for “this code is readable and maintainable”, and I’m 99.999% certain that is just impossible.
Usually organizational changes are massive efforts. But I guess hype is a hell of an inertia buster.
I imagine if you have a say in their performance review, you might be able to set "writes code more thoughtfully" as a PIP?
I will have my word in the matter before all is said and done. While everyone is busy pivoting to AI I keep my head down and build the tools that will be needed to clean up the mess...
I'm building a universal DOM for code, so we should see an explosion in code whose purpose is to help clean up other code.
If you want to write code that makes changes to a tree of HTML nodes, you can pretty much write that code once and it will run in any web browser.
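To make that concrete, here's a tiny sketch of the kind of universal transform I mean (the specific transform is just an illustration; nothing in it is browser-specific):

```typescript
// Wrap every <code> element in a <pre>. This uses only the standard
// DOM API, so the same code runs unmodified in any web browser.
function wrapCodeBlocks(root: Element): void {
  for (const code of Array.from(root.querySelectorAll("code"))) {
    const pre = document.createElement("pre");
    code.replaceWith(pre); // swap the wrapper into the tree
    pre.appendChild(code); // move the original node inside it
  }
}

wrapCodeBlocks(document.body);
```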
If you want to write code that makes a new program by changing a tree of syntax nodes, there are an incredible number of different and wholly incompatible environments for that code to run in. Transform authors are likely forced to pick one or two engines to support, and anyone who needs to run a lot of codemods will probably need to install 5-10 different execution engines.
Most people seem not to notice or care about this situation, or to realize that their tools are vastly underserving their potential just because we can't come up with the basic standards necessary to enable universal execution of codemod code. That also means the incentives to write custom codemods and lint rules are drastically lower than they could/should be.
As two nits, https://docs.bablr.org/reference/cstml and https://bablr.org/languages/universe/ruby are both 404, but I suspect the latter one is just falling into the trap many namespaces do of using a URL when they meant it as a URN.
The JSX noise is CSTML, a data format for encoding/storing parse trees. It's our main product. E.g. a simple document might look something like `<*BooleanLiteral> 'true' </>`. It's both the concrete syntax and the semantic metadata offered as a single data stream.
The easiest way to consume a CSTML document is to print the code stored in it, e.g. `printSource(parseCSTML(document))`, which would get you `true` for my example doc. Since we store all the concrete syntax, printing the tree is guaranteed to give you the exact same input program the parser saw. This means you can rearrange trees of source code and then print them over the original, allowing you to implement linters, pretty-printers, or codemod engines.
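A minimal sketch of that round-trip (`parseCSTML` and `printSource` are the functions named above; the module path is just a placeholder):

```typescript
// Hypothetical import path, shown only for illustration.
import { parseCSTML, printSource } from "@bablr/cstml";

const doc = `<*BooleanLiteral> 'true' </>`;

const tree = parseCSTML(doc);   // concrete syntax tree + semantic metadata
console.log(printSource(tree)); // prints the exact original source: true
```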
These CSTML documents also contain all the information necessary to do rich presentation of the code document stored within (syntax highlighting). Hopefully I'm going to release our native syntax highlighter later today!
I think we already are. We're about to be drowning in a cesspit. The support for the broken software is going to be replaced by broken LLM agents.
That's my expectation as well.
The logical outcome of this is that the general public will eventually get fed up, and there will be an industry-wide crash, just like in 1983 and 2000. I suppose this is a requirement for any overly hyped technology to reach the Plateau of Productivity.
No, they won't. It's a race to the bottom.
I can take extra time to produce something that won't fall over on the first feature addition, that won't need to be rewritten with a new approach when the models get upgraded/changed/whatever and will reliably work for years with careful addition of new code.
I will get underbid by a viber who produced a turd in an afternoon, and has already spent the money from the project before the end of the week.
Even better if the accountants are using LLMs.
Or even better, hardware prototyping using LLMs with EEs barely knowing what they are doing.
So far, most software dumbassery with LLMs can at least be fixed. Fixing board layouts, or chip designs, not as easy.
Folks, we already have bad software. Everywhere.
And nobody cares.
https://gs.statcounter.com/os-market-share/desktop/worldwide...
If you want to sell high quality software, then you must be patient. Several decades worth of patient.
Good experienced devs will be able to make better software, but so many inexperienced devs will be regurgitating so much more lousy software at a pace never seen before, it's going to be overwhelming. Or as the original commenter described, they're already being overwhelmed.