For coding, "it seems to work for me" is good enough. For software engineering, it's not.
My new rule: for coding, you can use AI to write your code. For software engineering, you can't. You can 100% use AI for software engineering, just not by itself: you currently need to be quite engaged in the process, checking it and redirecting it.
But AI lowers the barrier to writing code, and thus it brings people with less rigour into the field, and they can do a lot of damage. Then again, it isn't significantly different from how programming languages made coding more accessible than assembly language, and I'm sure that also allowed more people to cause damage.
You can use any tools you want, but you have to be rigorous about it no matter the tool.
This is a pretty common sentiment. I think it equates using AI with vibe-coding, having AI write code without human review. I'd suggest amending your rule to this:
> For coding, you can use AI. For software engineering, you can't.
You can use AI in a process compatible with software engineering. Prompt it carefully to generate a draft, then have a human review and rework it as needed before committing. If the AI-written code is poorly architected or redundant, the human can use the same AI to refactor and shape it.
Now, you can say this negates the productivity gains. It will necessarily negate some. My point is that the result is comparable to human-written software (such as it is).
Just don't expect to get decent code often if you mostly rely on something like Cursor's default model.
You literally get what you pay for.
That's my policy with each of my clients, and it works fine: if AI makes something simpler or faster, good for the author, but there is zero excuse, none, for pushing slop or code you haven't thoroughly reviewed and tested yourself.
If somebody thinks they can offload not just authoring and editing code, but also the responsibility for it and for its impact on the whole codebase and the underlying business problem, they should be jobless ASAP. They are de facto delegating the entirety of their job to a machine, and they are providing not just zero value but negative value.
These discussions are always about tactics and never about operations.
No, not if I have to maintain it.
Code is a liability. LLM-written PRs often bring net negative value: they make the whole system larger, more brittle, and less integrated. They come at the cost of end-user quality and maintainer velocity.
Obviously there’s nuance (I’ll take slop food for starving people over a healthy meal for a limited few if we’re forced to choose), but the perverse incentives in society start to take over if we allow ourselves to be ok with slop. Continuously chasing the bottom of the barrel makes it impossible for high quality to exist for anyone except the rich.
Put another way: if we as a society said "it is illegal to make slop food", both the poor and the rich would have easy access to healthy food. The cost here would be borne by the rich, as they profit off food production and would thus profit less in order to keep quality high.
Even in less desperate teams, as productivity grows with AI (mine does; even when I don't author code with it, it's a tremendous help just navigating repos and connecting the dots, and it saves me so much time...), the reviewing pressure increases too, and with it, fatigue.
It is not a worthwhile use of my time to similarly "coach" LLM slop.
The classic challenge with junior engineers is that helping them ship something is often more work than just doing it yourself. I'm willing to do that extra work for a human.
Vibing and "good enough" is a terrible combination, as unknown elements of the system grow at a faster rate than ever.
Using LLMs while understanding every change and retaining a mental model of the system is fine.
Granted, I see vibe+ignorance way too often, as it is the short-term path of least resistance in the current climate of RTO, bums in seats, grind, and ever more features.
LLMs can make mistakes. Humans can't.
Humans can and do make mistakes all the time. LLMs can automate most of the boring stuff, including unit tests with 100% coverage. They can cover edge cases you ask them to, and they can even come up with edge cases you may not have thought about. This leaves you to do the review.

I think the underlying problem people have is that they don't trust themselves to review code written by others as much as they trust themselves to implement the code from scratch. Realistically, a very small subset of developers do actual "engineering" to the level of NASA / aerospace. Most of us just have inflated egos.
I see no problem modelling the problem, defining the components, interfaces, APIs, data structures, and algorithms, and letting the LLM fill in the implementation and the testing. Well-designed interfaces are easy to test anyway, and you can tell at a glance whether it covered the important cases. It can make mistakes, but so would I. I may overlook something when reviewing, but the same thing often happens when people work together. Personally I'd rather do architecture and review at a significantly improved speed than gloat that I handcrafted each loop and branch, as if that somehow made the result safer or faster (exceptions apply, YMMV).
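To make that concrete, here's a minimal sketch of the split (all names here are mine and purely illustrative, not from any project in this thread): the human fixes the signature and the contract, and the body is the sort of thing the LLM fills in for the reviewer to check against that contract.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Money:
        cents: int      # minor units, avoids float rounding
        currency: str

    def apply_discount(price: Money, percent: int) -> Money:
        """Human-defined contract: 0 <= percent <= 100, result rounds
        down, currency is preserved. Body left to the LLM, then reviewed."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return Money(price.cents * (100 - percent) // 100, price.currency)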
> "LLMs can make mistakes. Humans can't."
No, that's not it. The difference between humans and AI is that AI suffers no embarrassment or shame when it makes mistakes, and the humans enthusiastically using AI don't seem to either. Most humans experience a quick and visceral deterrent when they publish sloppy code and the mistakes are discovered. AI, not at all. It does not immediately learn from its mistakes like most humans do.

In the rare case where there is a human who is consistently, persistently, confidently wrong like AI, a project can identify that person and easily stop wasting time working with them. With masses of people being told by vocal AI shills how amazing AI is, projects can easily be flooded with confidently wrong AI-generated PRs.
In my experience these tests don't test anything useful. You may have 100% test coverage, but it's almost entirely useless, because it isn't testing the actual desired behaviour of the system, just the exact implementation.
Brittle meaningless tests tend to lock bad decisions in, and prevent meaningful refactoring.
Bad tests are simply code debt, and they dramatically increase the future cost of rework and adaptation.
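To illustrate the difference with a hypothetical example (reusing the Money/apply_discount sketch from above, nothing here comes from the thread): the first test re-derives the answer with the same formula as the code under test, so it pins the implementation and passes even if the formula is wrong; the second states the promised behaviour as a fact and survives refactoring.

    # Brittle: mirrors the implementation, locks it in, proves nothing.
    def test_discount_implementation_details():
        price = Money(999, "EUR")
        result = apply_discount(price, 50)
        assert result.cents == price.cents * (100 - 50) // 100

    # Meaningful: asserts the externally promised behaviour.
    def test_discount_rounds_down():
        assert apply_discount(Money(999, "EUR"), 50) == Money(499, "EUR")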
I set boundaries during design, where I choose responsibilities, interfaces, and names. Red-Green-Refactor is very useful for beginners who would otherwise define boundaries that are difficult to test and maintain.
I design components that are small and focused so their APIs are simple and unit tests are incredibly easy to define and implement, usually parametrized. Unit tests don't keep me "sane", they keep me sleeping well at night because designing doesn't drive me mad. They don't define how the "program" is supposed to work, they define how the unit is supposed to work. The smaller the unit the simpler the test. I hope you agree: simple is better than complex. And no, I don't subscribe to "you only need integration tests".
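For a unit that small, the parametrized test reads like a truth table. A sketch with pytest, again reusing the hypothetical apply_discount from above:

    import pytest

    @pytest.mark.parametrize("cents, percent, expected", [
        (1000, 0, 1000),    # no discount
        (1000, 25, 750),    # ordinary case
        (999, 50, 499),     # rounds down, not half-up
        (1000, 100, 0),     # upper boundary
    ])
    def test_apply_discount(cents, percent, expected):
        got = apply_discount(Money(cents, "EUR"), percent)
        assert got == Money(expected, "EUR")

    def test_rejects_out_of_range_percent():
        with pytest.raises(ValueError):
            apply_discount(Money(1000, "EUR"), 101)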
Otherwise, nice battery of ad hominems you managed to slip in: my understanding of quality software is lacking, my problem is my approach to engineering, and I'm an immature developer. All that from "LLMs can automate most of the boring stuff, including unit tests with 100% coverage", because you can't fathom how someone can design quality software without TDD, and you can't steelman my argument (even though it's recommended in the guidelines [1]). I do review and correct the LLM output. I almost always ask it for specific test cases to be implemented. I also enjoy seeing the most basic test cases and most edge cases covered. And no, I don't particularly enjoy writing factories, setups, and asserts. I'm pretty happy to review them.
[1] https://news.ycombinator.com/newsguidelines.html
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
He's got like 50 repos with vibe-coded, non-working Zig and Rust projects. And he clearly manages to confuse people with it:
(Honestly, that's a lot more patience than I'd be able to give what are mostly AI-generated replies, so kudos to these folk.)
I'm a big fan of LLMs, but this guy is just a joke. He understands nothing of the code the LLM generates. He says things like "The LLM knows".
He is not going to convince anybody to merge his PRs, since he is not even checking that the tests the LLM generates are correct. It's a joke.
> Beats me. AI decided to do so and I didn't question it.

I find that sort of attitude terrifying.

https://github.com/ocaml/ocaml/pull/14369#issuecomment-35573...
function estimate_method_targets(func_name::Symbol, types::Tuple)
# Conservative estimate
# In a real implementation, we'd query the method table
return 2 # Assume multiple possibilities
end
Hilarious. Was this model trained on XKCD by any chance?

(Snippet source: https://discourse.julialang.org/t/ai-generated-enhancements-...)
Actually, I probably shouldn't make this comment publicly. It could cause another 3-5 programmer-isekai anime series.
If anyone’s answer to “why does your PR do this” is “I don’t know, the AI did it and I didn’t question it” then they need a time out.
I don't know whether to be worried or impressed.
Yes, I made mistakes along the way.
The bottleneck is not coding or creating a PR, the bottleneck is the review.
It could first judge whether the PR is frivolous, then try to review it, then flag a human if necessary.
The problem is that GitHub, or whatever system hosts the process, would have to actively prevent projects from being DDoS-ed with PRs to review, since reviewing with AI costs real money.
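A minimal sketch of that triage order (everything here is hypothetical, stand-in names rather than any real GitHub API):

    from dataclasses import dataclass

    @dataclass
    class Review:
        confident: bool
        verdict: str

    def looks_frivolous(pr: dict) -> bool:
        # cheap heuristic gate that costs nothing: no description,
        # enormous diff, and so on
        return not pr.get("description") or pr.get("changed_lines", 0) > 5000

    def ai_review(pr: dict) -> Review:
        # placeholder for the expensive LLM call worth rationing;
        # this stub always defers to a human
        return Review(confident=False, verdict="")

    def triage_pull_request(pr: dict) -> str:
        if looks_frivolous(pr):
            return "closed: frivolous"
        review = ai_review(pr)
        if review.confident:
            return "auto-review: " + review.verdict
        return "flagged for human review"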
As a troll job for the lulz it is some amazing work. Hats off.
The breezy "challenge me on this" and "it's just a proof of concept" remarks are infuriating. Pull requests are not conversation starters. They aren't for promoting something you think people should think about. The self-absorption and self-indulgence beggar belief.
Your homepage repeatedly says you're open to work and want someone to hire you. I can't imagine anybody looking at those PRs or your behavior in the discussions and concluding that you'd be a good addition to a team.
The cluelessness is mind-boggling.
It's so bad that I'm inclined to wonder whether you really are human -- or whether you're someone's stealthy, dishonest LLM experiment.
> Claude discovered a bug in the Zig compiler and is in the process of fixing it!
...a few minutes later...
https://github.com/ziglang/zig/pull/25974
I can see a future job interview scenario:
- "What would you say is your biggest professional accomplishment, Joel?"
- "Well, I almost single-highhandedly drove Zig away from Github"
If you think about it, Joel is net positive to Zig and its community!
the bootlicking behavior must be like crack for wannabes. jfc
> I did not write a single line of code but carefully shepherded AI over the course of several days and kept it on the straight and narrow.
> AI: I need to keep track of variables moving across registers. This is too hard, let's go shopping… Me: Hey, don't take any shortcuts!
> My work was just directing, shaping, cajoling and reviewing.
How people can say that without the slightest bit of reflection on whether they're right or just spitting BS is beyond me.
I don't know enough about the project to know if it makes any sense, but the Zig contributor seemed confused (at least about the title).
I made the mistake of poorly documenting that PR.
But yeah, hard to say.
I would offer this one instead.
I will look into renaming myself, although I don't think HN allows this.
When I was a kid, every year I'd get so obsessed with Christmas toys that the hype would fill my thoughts to the point that I'd feel dizzy and throw up. I genuinely think you're going through the adult version of that: your guts might be OK, but your mind is so filled with hype that you're losing self-awareness.
This is not my first interaction with him outside HN, I already talked to him privately when this was starting to unfold in the OCaml community. Then I gave up, flagged his ads for the mods and blocked him for a few months, but I've kept encountering his drama on github and HN.
> existing open source projects are not ready for this and likely won't ever be.
i.e. he is enlightened and these projects are just not seeing The Way™.
"Thanks X Person"
You're the direct cause of open source burnout.
In either case, I'd argue it is no longer good faith if you're asked to stop and you continue without learning from your peers.
Hilarious how the offender in "exhibit A" [1] is the same one from the other post that made the front page a couple of days ago [2].
[1] https://github.com/ziglang/zig/issues/25974
[2] https://www.hackerneue.com/item?id=46039274