EGreg
Why don’t you automate this checking with AI? You can then cover hundreds of PRs a day.
> You can then cover hundreds of PRs a day.
I would argue you haven't covered any.
Why not just skip the reviews then? If you can trust the models to have the necessary intelligence and context to properly review, they should be able to properly code in the first place. Obviously not where models are at today.
Not necessarily. It's like a Generative Adversarial Network (GAN). You don't just trust the generator; it's a back-and-forth between the generator and the discriminator.
The discriminator is trained on a different objective than the generator: it's specifically trained to be good at discriminating, so it is complementary.
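To make the "different objectives" point concrete, here's a toy numeric sketch of the two GAN losses. The numbers are made up for illustration; this is not a real training loop, just the standard adversarial objectives written out:

```python
import math

# Toy numbers, not a real GAN: the point is that the generator and the
# discriminator optimize different objectives, so the checker is not just
# the generator looking at its own output.

def d_loss(d_real, d_fake):
    # Discriminator objective: score real samples high, generated ones low.
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss(d_fake):
    # Generator objective: fool the discriminator into scoring fakes high.
    return -math.log(d_fake)

# Hypothetical discriminator scores for a real and a generated sample:
d_real, d_fake = 0.9, 0.2
print(f"discriminator loss: {d_loss(d_real, d_fake):.3f}")  # low: D is doing well
print(f"generator loss:     {g_loss(d_fake):.3f}")          # high: G must improve
```

Each network's gradient pushes against the other's, which is exactly the tension missing when one model both writes and reviews the code.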
Here we are talking about the same model doing the review (even if you use a different model provider, it's still trained on essentially the same data, with the same objective and very similar performance).
We have had agentic systems where one agent checks the work of another for 2+ years now. This isn't a paradigm pushed by AI coding model providers because it doesn't really work that well; human review is still needed.
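The generate-then-review pattern being described can be sketched as a simple loop. Everything below is a hypothetical stub standing in for model calls, not any provider's actual API:

```python
# Minimal sketch of an agentic generate/review loop. `generate` and `review`
# are hypothetical stand-ins for calls to a coding model and a reviewer model.

def generate(task, feedback=None):
    # Stand-in for the coding model producing (or revising) a patch.
    if feedback:
        return f"patch for {task} (revised: {feedback})"
    return f"patch for {task}"

def review(patch):
    # Stand-in for the reviewer model. In practice it shares training data
    # and blind spots with the generator, which is the weakness noted above.
    return None if "revised" in patch else "add tests"

def agent_loop(task, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        patch = generate(task, feedback)
        feedback = review(patch)
        if feedback is None:
            return patch  # reviewer approved; a human should still look
    return patch

print(agent_loop("fix-null-check"))
```

The loop terminates when the reviewer stub approves, but approval here only means the two stand-ins agree, which is the whole problem with models checking models.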
Turtles all the way down. We seem to be marching towards a future like that, but are we there today? Some of the AI-generated PRs I’ve seen teammates put out “work” (because sometimes two wrongs make a right) but convince me we still need a human in the loop.
But that was two weeks ago; maybe it’s different today