refulgentis
*Finally.* After dozens of exchanges, you've stated a clear, testable technical claim: quality is capped by the static analyzer, not the LLM; better models won't help much, better analyzers will. That's actually interesting and worth discussing.

But let's be clear about how we got here. You opened with "Something sounds fishy...I don't think they were [found by AI]." When challenged, you moved to "I concede LLM involvement but want to specify its role." Now you're at a specific testable hypothesis about quality caps. That's a lot of ground to cover while insisting everyone else has been evasive.

On your technical claim:

You might be wrong. Here's why the LLM could matter more than you think:

Static analyzers produce massive amounts of potential findings. The problem has never been "they can't detect anything"; it's that they detect too much, with too many false positives, requiring expert judgment to separate signal from noise. That triage step requires understanding code context across files, project architecture and conventions, whether a potential issue is reachable, whether existing mitigations make it irrelevant, and how severe it actually is.

If LLMs can do that context synthesis effectively—and early evidence suggests they can—then the bottleneck shifts. Your prediction assumes the analyzer's initial detection is the limiting factor. The opposing view is that contextualized triage is the limiting factor, and LLMs are good at exactly that kind of synthesis.

That's testable. Run the same analyzer three ways: with human triage, with basic filtering, and with LLM triage. If you're right, all three will surface roughly the same real bugs. If others are right, LLM triage will surface meaningful issues the other two approaches miss.
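To make that concrete, here's a minimal sketch of what the LLM-triage pipeline might look like, assuming the analyzer can emit SARIF. The OpenAI Python SDK and the `gpt-4o` model name are stand-ins for whatever model you'd actually use; `analyzer_output.sarif`, the rule names in `basic_filter`, and the deliberately naive `gather_context` helper are all hypothetical.

```python
"""Hypothetical harness for comparing triage strategies on the same set
of static-analysis findings (assumes the analyzer can emit SARIF)."""
import json
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def load_findings(sarif_path: str) -> list[dict]:
    """Flatten SARIF results into (rule, message, file, line) records."""
    sarif = json.loads(Path(sarif_path).read_text())
    findings = []
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            loc = result["locations"][0]["physicalLocation"]
            findings.append({
                "rule": result.get("ruleId", "unknown"),
                "message": result["message"]["text"],
                "file": loc["artifactLocation"]["uri"],
                "line": loc["region"]["startLine"],
            })
    return findings


def gather_context(finding: dict, window: int = 40) -> str:
    """Naive context: just the surrounding source lines. Real triage
    would also pull in callers, reachability, and project conventions."""
    lines = Path(finding["file"]).read_text().splitlines()
    start = max(0, finding["line"] - window)
    return "\n".join(lines[start:finding["line"] + window])


def llm_triage(finding: dict) -> bool:
    """Ask the model whether a finding is a real, reachable issue."""
    prompt = (
        f"Static analyzer rule {finding['rule']} flagged "
        f"{finding['file']}:{finding['line']}: {finding['message']}\n\n"
        f"Surrounding code:\n{gather_context(finding)}\n\n"
        "Is this a real, reachable bug in this context, or a false "
        "positive? Answer REAL or FALSE_POSITIVE, then explain."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().startswith("REAL")


def basic_filter(finding: dict) -> bool:
    """Strawman baseline: keep only an allowlist of high-severity rules."""
    return finding["rule"] in {"use-after-free", "double-free"}


if __name__ == "__main__":
    findings = load_findings("analyzer_output.sarif")
    kept_by_filter = [f for f in findings if basic_filter(f)]
    kept_by_llm = [f for f in findings if llm_triage(f)]
    # The experiment: which pipeline's survivors overlap best with the
    # set of bugs a human expert independently confirms?
    print(f"{len(findings)} raw findings, {len(kept_by_filter)} after "
          f"basic filter, {len(kept_by_llm)} after LLM triage")
```

The interesting engineering is everything `gather_context` doesn't do here: reachability, cross-file data flow, knowledge of existing mitigations. Whether the model can exploit that context better than a rules-based filter can is exactly the disputed question.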

On "there was simply no technical discussion":

This is flatly false. tptacek explained that SAST tools have been commercially ineffective for decades despite hundreds of millions in investment, that the triage bottleneck was the problem, and that LLM orchestration is the new variable. That's technical substance. You dismissed it, but it was there.

I described using GPT-3 to port color science code across multiple languages, explaining direct experience with AI-assisted development. That's concrete technical detail.

You can disagree with these points. But claiming they don't exist means you're either being dishonest or not actually reading what people write.

On sanitizers:

You're using this as evidence that static analysis didn't fail, but sanitizers (AddressSanitizer, MemorySanitizer, etc.) are dynamic analysis: runtime instrumentation, not static analysis. They report a bug only when an instrumented code path actually executes, which is a different tool with a different failure mode than SAST over-reporting. They're not counterexamples to claims about SAST tools. The conversation moved on because your example was off-topic.

On "let's not use credentials":

Show me where someone did. Find me one comment where someone said "this is true because I worked at Google, full stop" without also providing a technical explanation.

You can't, because it didn't happen. Every time credentials came up, they were context for a substantive technical point. I mentioned my background while explaining my direct experience. tptacek identified as a security professional while explaining the SAST triage problem. You've been fighting a phantom so you could righteously reject authority instead of engaging with the actual arguments being made.

On the pattern:

You've consistently reframed disagreement as suppression. People are "trying to make me stop describing how to achieve a similar quality system." They're "upset" their authority isn't working. They're being "evasive" without you ever specifying what's being evaded.

This isn't skepticism. It's reflexive defensiveness that treats every substantive response as an attack. It's made this conversation take 10x longer than necessary and turned it into arguments about the arguments instead of the actual technical question.

The bottom line:

You have a testable hypothesis about whether LLM triage is transformative or marginal. That's worth discussing. But you've been needlessly unpleasant, demonstrably wrong about what's in this thread, and you've burned a lot of goodwill from people who tried to engage you seriously.

If you want to talk about the technical question, I'm here. But stop pretending you've been stonewalled when multiple people have given you detailed responses you simply didn't like.


alganet
> Finally.

I presented that statement in my first comment. It is still there, unedited. I also pointed to it several times.

There was a choice between focusing on the opinion-based stuff and the technical stuff. Other users pointed out that this thread was about learning how it works (they got it right), but you ignored them too.

> sanitizers (AddressSanitizer, MemorySanitizer, etc.) are dynamic analysis—runtime instrumentation, not static analysis.

Fair enough. They are traditional non-AI tools, though. It's like you're trying to catch me on a technicality (I mislabelled something, but it doesn't really change the point I was trying to get across).

---

Of course things escalated, and I was harsh on purpose. When someone starts with too much "I am this and that, I worked on this and that", they close doors, not open them.

> pretending you've been stonewalled

I don't think I was stonewalled or anything. As I previously mentioned, this is on a subthread of a comment that got flagged. Absolutely no one is reading this; it's an irrelevant conversation. If anything, I'm extending a courtesy here by answering the guy who got his comment removed.

> If you want to talk about the technical question, I'm here.

I have zero need to talk about this.
