Libraries are maintained by other humans, who stake their reputation on the quality of the library. If a library gets a reputation for having a lax maintainer, the community will react.
Essentially, it's a chain of responsibility, where each link has an incentive to behave well or risk being replaced.
Who is accountable for the code that AI writes?
Doesn't matter; I'm not responsible for maintaining that particular code.
The code in my PRs has my name attached, and I'm not trusting any LLM with my name.
If you consider AI code to be code no human needs to read or later modify by hand (AI code is modified by AI), then all you need to do is test it thoroughly. If it all works, it's good, and you can call into it from your own code.
I'm ultimately still responsible for the code. And unlike AI, library authors put their own and their libraries' reputations on the line.
"A computer can never be held accountable therefore a computer should never make a management decision"
I think we need to go back to this: a computer cannot be held accountable, so a computer should never make any decision with real-world impact.
The distinction isn't whether code comes from AI or humans, but how we integrate and take responsibility for it. If you're encapsulating AI-generated code behind a well-defined interface and treating it like any third-party dependency, then testing that interface for correctness is a reasonable approach.
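To make that concrete, here's a minimal sketch of what "testing the interface" of AI-generated code can look like. The module name `ai_generated` and the function `slugify` are hypothetical, chosen just for illustration; the point is that the tests exercise only the documented contract at the boundary, never the internals.

```python
# Sketch: black-box tests at the interface boundary of an AI-written
# module. "ai_generated" and "slugify" are hypothetical names; the
# tests verify the contract, not the implementation.
import string

from ai_generated import slugify  # hypothetical AI-written module


def test_basic_slug():
    # Contract: punctuation stripped, words lowercased and hyphenated.
    assert slugify("Hello, World!") == "hello-world"


def test_idempotent():
    # Contract property: slugifying an already-clean slug is a no-op.
    once = slugify("Some Article Title")
    assert slugify(once) == once


def test_output_alphabet():
    # Contract property: output contains only lowercase letters,
    # digits, and hyphens, whatever the input looks like.
    allowed = set(string.ascii_lowercase + string.digits + "-")
    assert set(slugify("Weird input / path? 123")) <= allowed
```

If this suite passes, you can call into the module from your own code the same way you'd call a third-party library, without ever reading its source.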
The real complexity arises when you have AI help write code you'll commit under your name. In this scenario, code review absolutely matters because you're assuming direct responsibility.
I'm also questioning whether AI truly increases productivity or just reduces cognitive load. Sometimes "easier" feels faster but doesn't translate to actual time savings. And when we do move quicker with AI, we should ask if it's because we've unconsciously lowered our quality bar. Are we accepting verbose, oddly structured code from AI that we'd reject from colleagues? Are we giving AI-generated code a pass on the same rigorous review process we expect for human-written code? If so, would we see the same velocity increases from relaxing our code review process amongst ourselves (between human reviewers)?