Here are 55 closed PRs in the curl repo which credit "sarif data" - I think those are the ones Daniel is talking about here https://github.com/curl/curl/pulls?q=is%3Apr+sarif+is%3Aclos...

This is notable given Daniel Stenberg's reports of being bombarded by total slop AI-generated false security issues in the past: https://www.linkedin.com/posts/danielstenberg_hackerone-curl...

Concerning HackerOne: "We now ban every reporter INSTANTLY who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time"

Also this from January 2024: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-f...


tomjakubowski
Some of those bugs, like using the wrong printf-specifier for a size_t, would be flagged by the compiler with the right warning flags set. An AI oracle which tells me, "your project is missing these important bug-catching compiler warning flags," would be quite useful.

A few of these PRs are dependabot PRs which match on "sarif", I am guessing because the string shows up somewhere in the project's dependency list. "Joshua sarif data" returns a more specific set of closed PRs. https://github.com/curl/curl/pulls?q=is%3Apr+Joshua+sarif+da...

octocop
The models have improved considerably since then; I guess his change of opinion reflects that.
Twirrim
No, he's still dealing with a flood of crap, even in the last few weeks, from more modern models.

It's primarily from people just throwing source code at an LLM, asking it to find a vulnerability, and reporting the output as-is, without any actual understanding of whether it is or isn't a vulnerability.

The difference in this particular case is that it's someone who is: 1) using tools specifically designed for security audits and investigations, and 2) taking the time to read and understand the reported vulnerability, verifying that it is actually a vulnerability before reporting it.

Point 2 is the bar that people are woefully failing to meet, wasting a terrific amount of his time. The report that got shared a couple of weeks ago, https://hackerone.com/reports/3340109, didn't even call curl. It was straight-up hallucination.

simonw OP
I think it's more about how people are using it. An amateur who spams him with GPT-5-Codex-produced bug reports is still a waste of his time. Here, a professional ran the tools and then applied their own judgement before sending the results to the curl maintainers.
tptacek
I keep irritating people with this observation, but this was already the status quo before AI, and at least an AI slop report shows clear intent: you can ban those submitters without even a glance at anything else they send.
davidcbc
The current scale of poor reports was absolutely not the status quo before AI
tptacek
The last time I was staffed on a project that had to do this, we were looking at many dozens of reports per day, virtually all of them bogus, many attached to grifters hoping to jawbone the triage person into paying a nominal fee to make them go away. It would be weird if new tooling like LLMs didn't accelerate that, but accelerating it is all I'd expect it to do.
whizzter
It's probably also the difference between idiots hoping to cash out or get credit for vulnerabilities by just throwing ChatGPT at the wall, and this, where a somewhat seasoned researcher seems to be trialing more customized tools.