athorax
I think the big difference is that these aren't AI generated bug reports. They are bugs found with the assistance of AI tools that were then properly vetted and reported in a responsible way by a real person.
Basically using AI the way we have used linters and other static analysis tools, rather than thinking it's magic and blindly accepting its output.
In defense of the language models, the bugs were written by humans in the first place. Human vetting is not much of a defense.
From what I understand, some of the bugs were in code the AI made up on the spot, and other bug reports had example code that didn't even interact with curl. These things should be relatively easy for a human to verify: just do a text search in the curl source to see whether the AI output matches anything.
Hard-to-compute, easy-to-verify problems should be exactly where AI excels. So why do so many AI users insist on skipping the verify step?
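To make that concrete, here's a minimal sketch of the verify step, assuming you've extracted the function names the report claims exist into a set (the helper name, the demo tree, and the `curl_magic_backdoor` symbol are all hypothetical; with a real checkout you'd point it at curl's `lib/` and `src/`):

```python
import os

def find_symbols(symbols, root):
    """Return the subset of `symbols` that appear anywhere under `root`.
    A crude stand-in for `grep -r` over a source checkout."""
    found = set()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                with open(os.path.join(dirpath, name), errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue
            found.update(s for s in symbols if s in text)
    return found

# Toy demonstration with a fake source tree instead of a real curl checkout.
os.makedirs("demo_src", exist_ok=True)
with open("demo_src/easy.c", "w") as f:
    f.write("CURLcode curl_easy_perform(CURL *curl);\n")

claimed = {"curl_easy_perform", "curl_magic_backdoor"}  # second name is made up
found = find_symbols(claimed, "demo_src")
print(sorted(found))            # symbols that really exist in the tree
print(sorted(claimed - found))  # likely hallucinated
```

If anything the report names lands in the second bucket, that's a strong hint the "bug" was invented before a maintainer ever sees it.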
> Human vetting is not much of a defense.
The issue I keep seeing with curl and other projects is that people are using AI tools to generate bug reports and submitting them without understanding the report (that's the vetting). Because it's so easy to do this, and it takes time to filter out bug-report slop from analyzed and verified reports, it's pissing people off. There's a significant asymmetry involved.
Until all AI used to generate security reports on other people's projects can do so while wasting vanishingly little of maintainers' time, it's pretty assholeish to do it without vetting.