tptacek
I'm a software security person! This is not irrelevant to me.

In summary: the existing program analysis tooling in this space has been ineffective for decades, despite hundreds of millions of dollars of investment. If it is effective now, that strongly indicates that the LLM component of it isn't irrelevant; nothing else in the field has changed.

Note that everybody in this story concedes the LLM involvement. The only person who doesn't is you, and you're not actually involved. (I'm not either, but I'm agreeing with --- checks again --- everybody involved in the story).

alganet
I concede the LLM involvement. But I want to be more specific in the description of the role it plays in the solution.

If it is a central role, then there is nothing to lose from describing it better. That's why this feels so strange. You disagree with me, but you don't present an arrangement in which the LLM plays a role different from what I described. In fact, no one here did. It's like you're not disagreeing with me, but trying to make me stop describing how to achieve a similar-quality system out of free pieces.

refulgentis
Motte/bailey[1].

Also, somehow, you keep coming back to this uninteresting conversation where no one offers you anything new.

I recommend being kinder to people who offer their time. Even when we disagree or are having a rollicking discussion, there's a fundamental respect we should have for each other, if begrudging.

[1] Where you are: "[it seems you are] trying to make me stop describing how to achieve a similar quality system out of free pieces."

Where you started: "Do you believe AI is at the core of these security analyzers? If so, why the personal story blogpost? You can just explain me in technical terms why is that so.

Claiming to work for Google does not work as an authority card for me, you still have to deliver a solid argument.

Look, AI is great for many things, but to me these products sounds like chocolate that is actually just 1% real chocolate. Delicious, but 99% not chocolate."

alganet
Irrelevant. I actually started by describing the system, which was my first comment on the post.

I am not responsible for anyone who gets offended if I say that something is not as AI as it seems to be. It's obviously not a personal offense. If you took it as such, it's really not my problem.

Rejecting an argument from authority like "I'm an ex-Googler!" or "I'm a security engineer" is also common sense. Maybe it works a lot and you folks are upset it's not working here, but that's just the way it is.

refulgentis
Nobody claimed to be "offended" by your technical skepticism. The idea is simpler: be kind to people who are taking the time to engage with you, and do it for your own sake, not for ours.

Several people have written lengthy, detailed responses to your questions. They've provided technical context, domain experience, and specific explanations. Your response pattern has been to suggest they're being evasive, that they're trying to suppress your ideas, or that they're protecting commercial interests. And now "it's really not my problem" when someone asks you to be more courteous.

Technical forums work because people volunteer their time and expertise. When that gets met with dismissiveness and assumptions of bad faith, it breaks down. Whether your technical argument is right or wrong, the "not my problem" response to a simple request for courtesy says a lot about how you're approaching this conversation.

You're partly right that credentials alone don't prove anything. "I worked at Google" or "I'm a security engineer" shouldn't automatically win an argument.

But that's not what happened here. When tptacek mentioned his background, he also explained that static analysis tools have failed commercially for decades despite massive investment, and that LLM orchestration is the new variable. That's someone providing context for their technical claim.

You rejected the credentials and the explanation together, then labeled it all as "argument from authority." That's using a logic 101 concept to avoid engaging with what was actually said.

This part of your response is the most telling: "Maybe it works a lot and you folks are upset it's not working here."

You've decided that people disagreeing with you are trying to manipulate you with authority, and they're frustrated it's not landing. But there's a simpler explanation: they might just think you're wrong and are trying to explain why based on relevant experience.

Once you've framed every response as attempted manipulation rather than genuine disagreement, productive conversation becomes impossible. You're not really asking questions anymore. You're defending your initial position and treating help as opposition.

If you actually want to understand this rather than win the argument, try engaging with the core claim: SAST tools with human triage have been ineffective for 20+ years despite enormous investment. Now SAST with LLM orchestration appears to work. What does that tell you about what the LLM is contributing beyond simple filtering?
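To make that concrete, here is a minimal sketch of what "SAST with LLM orchestration" could look like. Everything in it is an assumption for illustration: semgrep stands in for any scanner, call_llm is a placeholder for whatever model client you'd use, and the prompt wording is invented, not what any of these products actually do. The point is that the model reads the flagged code in context and argues about reachability, which is more than filtering:

    import json
    import subprocess

    def run_sast(target_dir):
        # Stand-in scanner: any SAST tool that emits JSON findings works.
        # semgrep's "scan --json" reports findings under a "results" key.
        out = subprocess.run(
            ["semgrep", "scan", "--json", target_dir],
            capture_output=True, text=True,
        )
        return json.loads(out.stdout).get("results", [])

    def call_llm(prompt):
        # Placeholder: plug in whatever model API you actually use.
        raise NotImplementedError

    def triage(finding, source):
        # The LLM gets the raw finding plus the surrounding code and must
        # argue about reachability, not just emit a keep/drop label.
        prompt = (
            "Static-analysis finding:\n" + json.dumps(finding, indent=2) +
            "\n\nSource file:\n" + source +
            "\n\nIs this reachable and exploitable? Give a verdict "
            "(true positive / false positive) and your reasoning."
        )
        return call_llm(prompt)

    def orchestrate(target_dir):
        reports = []
        for finding in run_sast(target_dir):
            with open(finding["path"]) as f:
                reports.append(triage(finding, f.read()))
        return reports

If a loop like that surfaces real bugs where the raw scanner output was mostly noise, the triage step is doing work humans used to do, and that's hard to call peripheral.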

That's a real question that might lead somewhere interesting. It also acknowledges that people have spent their time trying to help you understand something, even when you've been prickly about it. "Not my problem" just shuts everything down. And yeah, in a volunteer discussion forum, that actually is your problem if you want people to keep engaging with you.

alganet
> Several people have written lengthy, detailed responses to your questions.

No, they haven't. Just read the thread.

> You're partly right that credentials alone don't prove anything.

I am totally right. Saying "believe me, I work on this" is lazy and a bad argument. There was simply no technical discussion to back that up.

> When tptacek mentioned his background, he also explained that static analysis tools have failed commercially for decades despite massive investment

I am not convinced that static analysis tools failed that hard. When I mentioned sanitizers, for example, he simply disappeared from the conversation and dropped the subject.

Also, suddenly, 22 bugs are found and there's a new holy grail in security analysis? You must understand that this is not enough.

> You've decided that people disagreeing with you are trying to manipulate you with authority

That's not a decision. An attempt to use credentials happened and it's there for anyone to see. It's blatant; I don't need to frame it.

> SAST tools with human triage have been ineffective for 20+ years despite enormous investment

I am not convinced that this is true. As I mentioned, sanitizers work really well at mitigating lots of security issues. They're an industry standard.

Also, I am not totally convinced that the LLM solution is that much better. It's fairly recent, has only found a couple of bugs, and it still has much to prove before it becomes a valuable tool. Promising, but far from the holy grail you folks are implying it to be.

> if you want people to keep engaging with you

I want reasonable, no-nonsense people engaging with me. You seem to imply that your time is somehow more valuable than mine, that it's me who owes you something. That is simply not true.
