As a maintainer, it used to be I could merge code that “looked good”, and if it did something subtly goofy later I could look in the blame, ping the guy who wrote it, and get a “oh yeah, I did that to flobberate the bazzle. Didn’t think about when the bazzle comes from the shintlerator and is already flobbed” response.
People who wrote plausible looking code were usually decent software people.
Now, I would get “You’re absolutely right! I implemented this incorrectly. Here’s a completely different set of changes I should have sent instead. Hope this helps!”
The submitter could also bail just as easily. Having an AI write the PR or not makes zero difference to this accountability. Ultimately, the maintainer pressing the merge button is accountable.
What else would your value be as a maintainer, if all you did was a surface look, press merge, then find blame later when shit hits the fan?
> "if all you did was a surface look, press merge"
As per the old joke, surface look: $5
Years of experience learning what to look for: $995
In the past, a block of code with jarring flaws suggested the author was low-skilled or careless. People can fake competence, but it's a low-return move: ugly, inconsistent code with no comments and no error checking which (barely) works will keep someone employed and paid longer than pretty code which doesn't work at all. Writing pretty code which also works implies knowledge, care, an eye for detail, effort, and tooling, which implies the author will have put some of that into solving the problem. AI can fake all the quick indicators of competence without the competence itself, meaning the surface look is less useful.
> "What else would your value be as a maintainer"
Is the maintainer paid or unpaid? If they are paid, the value is to make sure the software works and meets the business standards. If they are unpaid, what is the discussion about "value" at all? Maybe to keep it from becoming wildly broken, or maybe yes to literally be the person who presses merge because somebody has to.
One path continues on the track it has always been on, human written and maintained.
The other is fully on the AI track. Massive PRs with reviewers rubber stamping them.
I’d love to see which track comes out ahead.
Edit: in fact, perhaps there are open source projects already fully embracing AI authored contributions?
Vibecoding reminds me sharply of the height of the Rails hype: products rushed to market on the back of a slurry of gems and autoimports dropped into generated code, the original authors dipping out, and the teams of maintainers left behind screeching to a halt.
Here the bots will pigheadedly heap one 9000-line PR onto another, shredding the code base to bits while making it look like a lot of work in the process.
- Copyright issues. Even among LLM-generated code, this PR is particularly suspicious, because some files begin with the comment “created by [someone’s name]”
- No proposal. Maybe the feature isn't useful enough to justify the maintenance burden; maybe the design doesn't follow the project's conventions and/or adds too much tech debt
- Not enough tests
- The PR is overwhelmingly big, too big for the small core team that maintains OCaml
- People are already working on this. They’ve brainstormed the design, they’re breaking the task into smaller reviewable parts, and the code they write is trusted more than LLM-generated code
Later, @bluddy mentions a design issue: https://github.com/ocaml/ocaml/pull/14369#issuecomment-35568...
They did: the main point being made is "I'm not reading 13k LOCs when there's been no proposal and discussion that this is something we might want, and how we might want to have it implemented". Which is an absolutely fair point (there's no other possible answer really, unless you have days to waste) whether the code is AI-written or human-written.
It reminds me of a PR I once saw (don't remember which project) in which a first-time contributor opened a PR rewriting the project's entire website in their favourite new framework. The maintainers calmly replied to the effect of, before putting in the work, it might have been best to quickly check if we even want this. The contributor liked the framework so much that I'm sure they believed it was an improvement. But it's the same tone-deafness I now see in many vibe coders who don't seem to understand that OSS projects involve other people and demand some level of consensus and respect.
But still the crux of the matter is: Massive changes require buy-in from other maintainers BEFORE the changes even start.
[1] https://github.com/aio-libs/aiosmtpd [2] https://github.com/aio-libs/aiosmtpd/pull/202
https://github.com/rerun-io/rerun/pull/11900#issuecomment-35...
https://github.com/ocaml/dune/issues/12731
https://github.com/tshort/StaticCompiler.jl/pull/180
Seems he's just on a rampage of "fixing" issues for trendy packages to get some attention.
Thanks for the comedy material.
Discussions that treat AI/LLM code as a problem solely because it is AI/LLM-generated are not generally productive.
Better is to critique the actual PR itself. For example, needs more tests, needs to be broken up, doesn't follow our protocols for merging/docs, etc.
Additionally, if a project has no code of conduct, no AI policy, or, perhaps most importantly, no policy on how to submit PRs and which kinds are acceptable, that's a huge weakness in the project.
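As a sketch of what such a policy could look like, here's a hypothetical section for a project's CONTRIBUTING.md (the wording and thresholds are invented for illustration, not taken from OCaml or any project mentioned here):

```markdown
## AI-assisted contributions (hypothetical example)

- Disclose any substantial use of AI/LLM tools in the PR description.
- You must understand, and be able to explain, every line you submit;
  "the model wrote it" is not an acceptable answer to review feedback.
- Large changes (say, over a few hundred lines) require a discussed and
  accepted proposal issue *before* any code is written.
- PRs must include tests and pass CI before review is requested.
```

Something this short would have given the maintainers in the linked threads a rule to point at instead of having to litigate the AI question from scratch in each PR.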
In this case, clearly some feathers were ruffled, but cool heads prevailed. Well done in the end.