As a software security person, I don't think the security objections to LLMs are going to pan out. I think LLMs are going to be a strong net positive for security:

* The tooling and integration stuff people complain about now ("the S in MCP") isn't really load-bearing yet, and a cottage industry of professional services and product work will go into giving it the same overcomplicated IAM guardrails everything else has; today, though, you just do security at a higher or lower level.

* LLM code generation is better at implementing rote best practices and isn't incentivized to take shortcuts (in fact, it has some of the opposite incentives, to the consternation of programmers like me who prize DRY-ness). Those shortcuts are where most security bugs live; there's a concrete sketch of the pattern right after this list.

* LLMs can analyze code far faster than any human can, and vulnerabilities that can be discovered through pure pattern matching --- which is most vulnerabilities --- will be easy pickings. We've already had a post here with someone using o4 to find new remote kernel vulnerabilities, and that's a level of vuln research that is way, way more hardcore than what line-of-business software ordinarily sees.

* LLMs enable instrumentation and tooling that were previously cost-prohibitive: model checking, semantic grepping, static analysis. These tools all exist and work today, but very few projects use them seriously, because keeping all the specs and definitions up to date and resolving all the warnings costs too much time for not enough payoff. LLMs don't have that problem. (A toy version of this kind of check also follows the list.)
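
To make the "shortcuts" point concrete, here's a minimal sketch (Python and sqlite3; the function names are made up for illustration) of the gap between the lazy version and the rote best practice:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_shortcut(name):
    # The shortcut: splicing user input straight into the query string.
    # name = "' OR '1'='1" matches every row (a classic injection).
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_best_practice(name):
    # The rote best practice: a parameterized query. Slightly more
    # ceremony, which is exactly why humans skip it under deadline
    # pressure and code generators generally don't.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(lookup_shortcut("' OR '1'='1"))       # leaks all rows
print(lookup_best_practice("' OR '1'='1"))  # returns []
```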
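
And here's a toy version of the pure-pattern-matching check the last two bullets describe: a few lines of `ast` walking that flag the shortcut above. Real tools like Semgrep and CodeQL do this far better, and the class name and single rule here are illustrative; the point is that the patterns are mechanical, and an LLM can write and maintain piles of them cheaply:

```python
import ast

# Toy "semantic grep": flag any cursor.execute(...) call whose first
# argument is built dynamically (an f-string, or a binary op like
# % formatting or + concatenation).
class SqlShortcutFinder(ast.NodeVisitor):
    def __init__(self):
        self.findings = []

    def visit_Call(self, node):
        is_execute = (
            isinstance(node.func, ast.Attribute)
            and node.func.attr == "execute"
        )
        if is_execute and node.args and isinstance(
            node.args[0], (ast.JoinedStr, ast.BinOp)
        ):
            self.findings.append(node.lineno)
        self.generic_visit(node)

snippet = "conn.execute(f\"SELECT role FROM users WHERE name = '{name}'\")"
finder = SqlShortcutFinder()
finder.visit(ast.parse(snippet))
print(finder.findings)  # [1]
```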

LLM-generated code (and LLM tooling) will inevitably create security vulnerabilities. We have not invented a way to create bug-free code; would have been big if true! Opponents of industry LLM use will point to these vulnerabilities and go "see, told you so". But each year we continue using these tools, I think the security argument is going to look weaker and weaker. If I had to make a bet, I'd say it ceases being colorable within 3 years.


threetonesun
I'm not terribly worried about code-generated security vulnerabilities, but point 3 feels like a cat-and-mouse game that most companies won't have the resources to stay on top of, so they'll have to outsource it to one of the existing cloud or AI providers. Maybe that's a reality even without AI, but it feels like we're heading toward full-on extortion by about four major companies.

Also, I don't think you covered my biggest concern with LLM security: a company making an Amazon Basics version of your business model and claiming "AI did it". I'm 50/50 on that one, though; it's also possible that with AI everyone goes full NIH syndrome and takes back all the software we've handed off to various SaaS providers.

whatshisface
There are also non-LLM advances in testing related to AI, like RL-based fuzzers.
