Several people have written lengthy, detailed responses to your questions. They've provided technical context, domain experience, and specific explanations. Your response pattern has been to suggest they're being evasive, that they're trying to suppress your ideas, or that they're protecting commercial interests. And now "it's really not my problem" when someone asks you to be more courteous.
Technical forums work because people volunteer their time and expertise. When that gets met with dismissiveness and assumptions of bad faith, it breaks down. Whether your technical argument is right or wrong, the "not my problem" response to a simple request for courtesy says a lot about how you're approaching this conversation.
You're partly right that credentials alone don't prove anything. "I worked at Google" or "I'm a security engineer" shouldn't automatically win an argument.
But that's not what happened here. When tptacek mentioned his background, he also explained that static analysis tools have failed commercially for decades despite massive investment, and that LLM orchestration is the new variable. That's someone providing context for their technical claim.
You rejected the credentials and the explanation together, then labeled it all as "argument from authority." That's using a logic 101 concept to avoid engaging with what was actually said.
This part of your response is the most telling: "Maybe it works a lot and you folks are upset it's not working here."
You've decided that people disagreeing with you are trying to manipulate you with authority, and they're frustrated it's not landing. But there's a simpler explanation: they might just think you're wrong and are trying to explain why based on relevant experience.
Once you've framed every response as attempted manipulation rather than genuine disagreement, productive conversation becomes impossible. You're not really asking questions anymore. You're defending your initial position and treating help as opposition.
If you actually want to understand this rather than win the argument, try engaging with the core claim: SAST tools with human triage have been ineffective for 20+ years despite enormous investment. Now SAST with LLM orchestration appears to work. What does that tell you about what the LLM is contributing beyond simple filtering?
That's a real question that might lead somewhere interesting. It also acknowledges that people have spent their time trying to help you understand something, even when you've been prickly about it. "Not my problem" just shuts everything down. And yeah, in a volunteer discussion forum, that actually is your problem if you want people to keep engaging with you.
No, they haven't. Just read the thread.
> You're partly right that credentials alone don't prove anything.
I am totally right. Saying "believe me, I work on this" is lazy and a bad argument. There was simply no technical discussion to back that up.
> When tptacek mentioned his background, he also explained that static analysis tools have failed commercially for decades despite massive investment
I am not convinced that static analysis tools failed that hard. When I mentioned sanitizers, for example, he simply disappeared from the conversation and left that subject.
Also, suddenly, 22 bugs are found and there's a new holy grail in security analysis? You must understand that this is not enough.
> You've decided that people disagreeing with you are trying to manipulate you with authority
That's not a decision. An attempt to use credentials happened, and it's there for anyone to see. It's blatant; I don't need to frame it.
> SAST tools with human triage have been ineffective for 20+ years despite enormous investment
I am not convinced that this is true. As I mentioned, sanitizers work really well at mitigating lots of security issues. They're an industry standard.
Also, I am not totally convinced that the LLM solution is that much better. It's fairly recent, has only found a couple of bugs, and still has much to prove before it becomes a valuable tool. Promising, but far from the holy grail you folks are implying it to be.
> if you want people to keep engaging with you
I want reasonable, no-nonsense people engaging with me. You seem to imply that your time is somehow more valuable than mine, and that it's me who owes you something. That is simply not true.
On the evidence: these LLM-assisted tools are quite new. Curl finding 22 potential issues is interesting, but you're right that it's early days. Declaring this definitively transformative based on limited public evidence is probably premature.
But let's be clear about something else:
You've been consistently rude to people trying to engage with you. Multiple people wrote lengthy, substantive responses. You can scroll up and count the paragraphs. Saying "No, they haven't. Just read the thread" is one of the nicest ways you've engaged. Either you assume we're dishonest, or you genuinely can't recognize when someone's making an effort.
When someone asks you to be more courteous, "it's really not my problem" is a dick move. Nobody said you owe anyone deference. The ask was simpler: don't call good-faith engagement manipulation or suppression or ranking. That's basic forum etiquette, not hierarchy.
And this: "You seem to imply that your time somehow is more valuable than mine, that's me who owe you somehow." Nobody implied that. Several people spent time explaining things. You've spent time questioning them. That's symmetrical. What's asymmetrical is that you keep framing their explanations as evasion or authority-wielding while treating your skepticism as pure rationality. That's exhausting to deal with.
Fuck, man. I haven't gotten a paycheck in two years. Your time is objectively worth more than mine. You make up reasons to infer we're actually saying "you're worthless", which then forces your interlocutor to point out that they couldn't have meant that, since they're objectively worse off than you on whatever metric you mind-read them comparing you on. Really sick behavior, even though I'm sure it's unintentional and you genuinely think you're being put down, like we're at the fifth-grade lunch table. I'd never before had to roll over, show my belly, and do the "I'm unemployed!!11!" thing just to get someone to stop being a dick. In this thread I've had to do it twice.
On the actual technical question:
The narrower claim (which might be more defensible) is that SAST tools generate enormous amounts of output that requires expert triage, and that triage step has been the bottleneck. Humans don't scale to it; it's tedious and expensive. If LLMs can effectively automate that triage—not find new classes of bugs, but filter and prioritize what existing analyzers already flag—that could be valuable even if the underlying analysis is traditional.
Your architectural model (verbose analyzer → LLM triage) might be basically correct. The disagreement may just be about how significant that triage step is. You think it's 1% of the value. Others think the triage bottleneck was the whole reason these tools didn't work at scale.
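To make that concrete, here's a rough sketch of the two-stage shape being discussed. Every name below is a made-up stub, not any real tool's API; it's the architecture, not an implementation:

```python
# Sketch of the "verbose analyzer -> LLM triage" architecture.
# All names are illustrative stubs, not any real tool's API.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    line: int
    rule: str      # analyzer rule that fired, e.g. "possible-null-deref"
    snippet: str   # flagged code plus surrounding context

def run_static_analyzer(repo: str) -> list[Finding]:
    # Stage 1: traditional SAST. Deliberately verbose and noisy;
    # stubbed here with one canned finding standing in for hundreds.
    return [Finding("lib/url.c", 123, "possible-null-deref", "...")]

def llm_triage(finding: Finding) -> bool:
    # Stage 2: ask a model "is this reachable and real, given the
    # surrounding context?" Stubbed; a real version would prompt an LLM.
    return True

def pipeline(repo: str) -> list[Finding]:
    # The whole product, on this model: analyzer output, filtered
    # and prioritized by the model.
    return [f for f in run_static_analyzer(repo) if llm_triage(f)]

print(pipeline("curl/"))
```

If the triage-bottleneck view is right, nearly all the value lives in llm_triage; on your model, it lives in run_static_analyzer.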
That's a real technical question worth discussing. But it requires assuming people disagree with you because they actually disagree, not because they're trying to bamboozle you with credentials.
Whether it is important or not is highly subjective.
My statement is that _the quality is capped by the non-AI portion of the solution_, which is an objective statement. It means the solution should get better with a better static analyzer, but it probably won't get much better with a better model. That is a testable prediction that might reveal itself to be true or not. It's right there in the first comments I made.
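To spell out how that could be tested (all component names are hypothetical and the harness is a stub):

```python
# Testing "quality is capped by the non-AI portion": vary the analyzer
# and the model independently, measure recall on known bugs.
# All names below are hypothetical placeholders.

def evaluate(analyzer: str, model: str, corpus: str) -> float:
    # Run analyzer -> model triage over a corpus with known bugs and
    # return the fraction of those bugs surfaced. Stubbed out here.
    return 0.0

for analyzer in ("weak_sast", "strong_sast"):
    for model in ("small_llm", "frontier_llm"):
        print(analyzer, model, evaluate(analyzer, model, "known-bugs"))

# My prediction: scores move along the analyzer axis and stay
# roughly flat along the model axis.
```

If the scores only move when the analyzer changes, my claim holds.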
> not because they're trying to bamboozle you with credentials
Let's not use credentials then!
But let's be clear about how we got here. You opened with "Something sounds fishy...I don't think they were [found by AI]." When challenged, you moved to "I concede LLM involvement but want to specify its role." Now you're at a specific testable hypothesis about quality caps. That's a lot of ground to cover while insisting everyone else has been evasive.
On your technical claim:
You might be wrong. Here's why the LLM could matter more than you think:
Static analyzers produce massive amounts of potential findings. The problem has never been "they can't detect anything"; it's that they detect too much, with too many false positives, requiring expert judgment to separate signal from noise. That triage step requires understanding code context across files, project architecture and conventions, whether a potential issue is reachable, whether existing mitigations make it irrelevant, and how severe it actually is.
If LLMs can do that context synthesis effectively—and early evidence suggests they can—then the bottleneck shifts. Your prediction assumes the analyzer's initial detection is the limiting factor. The opposing view is that contextualized triage is the limiting factor, and LLMs are good at exactly that kind of synthesis.
That's testable. Run the same analyzer with human triage, basic filtering, and LLM triage. If you're right, they'll find the same bugs. If others are right, LLM triage will surface meaningful issues the other approaches miss.
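Sketched as an experiment (the data is fake and all three triage arms are stubs; only the shape of the comparison matters):

```python
# Same analyzer output, three triage arms. Everything below is a
# placeholder; only the experimental design is the point.

raw = [{"id": 1, "severity": 9}, {"id": 2, "severity": 4}]

def basic_filter(findings):
    # Mechanical triage: severity threshold and nothing else.
    return [f for f in findings if f["severity"] >= 7]

def human_triage(findings):
    return findings  # stand-in for expert review

def llm_triage(findings):
    return findings  # stand-in for model-based review

arms = {"filter": basic_filter, "human": human_triage, "llm": llm_triage}
results = {name: {f["id"] for f in fn(raw)} for name, fn in arms.items()}
print(results)

# If LLM triage is "1% of the value", the three sets should largely
# coincide; if it's what broke the bottleneck, they should diverge.
```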
On "there was simply no technical discussion":
This is flatly false. tptacek explained that SAST tools have been commercially ineffective for decades despite hundreds of millions in investment, that the triage bottleneck was the problem, and that LLM orchestration is the new variable. That's technical substance. You dismissed it, but it was there.
I described using GPT-3 to port color science code across multiple languages, drawing on direct experience with AI-assisted development. That's concrete technical detail.
You can disagree with these points. But claiming they don't exist is either dishonest or you're not actually reading what people write.
On sanitizers:
You're using this as evidence that static analysis didn't fail, but sanitizers (AddressSanitizer, MemorySanitizer, etc.) are dynamic analysis—runtime instrumentation, not static analysis. They're not counterexamples to claims about SAST tools. The conversation moved on because your example was off-topic.
On "let's not use credentials":
Show me where someone did. Find me one comment where someone said "this is true because I worked at Google, full stop" without also providing technical explanation.
You can't, because it didn't happen. Every time credentials came up, they were context for a substantive technical point. I mentioned my background while explaining my direct experience. tptacek identified as a security professional while explaining the SAST triage problem. You've been fighting a phantom so you could righteously reject authority instead of engaging with the actual arguments being made.
On the pattern:
You've consistently reframed disagreement as suppression. People are "trying to make me stop describing how to achieve a similar quality system." They're "upset" their authority isn't working. They're being "evasive" without you ever specifying what's being evaded.
This isn't skepticism. It is a reflexive defensiveness that treats every substantive response as an attack. It's made this conversation take 10x longer than necessary and turned it into arguments about the arguments instead of the actual technical question.
The bottom line:
You have a testable hypothesis about whether LLM triage is transformative or marginal. That's worth discussing. But you've been needlessly unpleasant, demonstrably wrong about what's in this thread, and you've burned a lot of goodwill from people who tried to engage you seriously.
If you want to talk about the technical question, I'm here. But stop pretending you've been stonewalled when multiple people have given you detailed responses you simply didn't like.
I presented that statement in my first comment. It is still there, unedited. I also pointed to it several times.
There was a choice to focus on the opinion-based stuff or the technical stuff. Other users pointed out that this thread was about learning how it works (they got it right), but you also ignored them.
> sanitizers (AddressSanitizer, MemorySanitizer, etc.) are dynamic analysis—runtime instrumentation, not static analysis.
Fair enough. They are traditional non-AI tools, though. It's like you're trying to catch me on a technicality (I mislabelled something, but it doesn't really undermine the point I was trying to get across).
---
Of course things escalated, and I was harsh on purpose. When someone leads with too much "I am this and that, I worked on this and that", that closes doors rather than opening them.
> pretending you've been stonewalled
I don't think I was stonewalled or anything. As I previously mentioned, this is a subthread of a comment that got flagged. Absolutely no one is reading this; it's an irrelevant conversation. If anything, I am extending a courtesy here by answering the guy whose comment got removed.
> If you want to talk about the technical question, I'm here.
I have zero need to talk about this.
I am not responsible for anyone who gets offended when I say that something is not as AI-driven as it seems to be. It's obviously not a personal offense. If you took it as such, it's really not my problem.
Rejecting an argument from authority like "I'm an ex-Googler!" or "I'm a security engineer" is also common sense. Maybe it works a lot and you folks are upset it's not working here, but that's just the way it is.