Well, AI is part of the field now, so... no, we don't anymore.
There's nothing "careless" about AI. The fact that there's no foolproof way to distinguish instruction tokens from data tokens is not careless, it's a fundamental epistemological constraint that human communication suffers from as well.
Saying that "software engineers figured out these things decades ago" is deep hubris based on false assumptions.
Repeat that over to yourself again, slowly.
> it's a fundamental epistemological constraint that human communication suffers from as well
Which is why reliability and security in many areas increased when those areas used computers to automate previously-human processes. The benefit of computer automation isn’t just in speed: the fact that computer behavior can easily be made deterministically repeatable and predictable is huge as well. AI fundamentally does not have that property.
Sure, cosmic rays and network errors can compromise non-AI computer determinism. But if you think that means AI and non-AI systems are qualitatively the same, I have a bridge to sell you.
> Saying that "software engineers figured out these things decades ago" is deep hubris
They did, though. We know how to both increase the likelihood of secure outcomes (best practices and such) and how to guarantee secure behavior. For example: using a SQL driver to distinguish between instruction and data tokens is, indeed, a foolproof process (I'm not talking about injection during query-string construction here, but about how queries are sent with bound parameters).
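The bind-parameter point can be shown concretely. A minimal sketch using Python's built-in `sqlite3` (chosen here just for illustration): with a parameterized query, the data travels in a separate channel from the query text, so no value in `user_input` can ever be parsed as SQL tokens.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# Hostile "data" that would be an injection if string-concatenated:
user_input = "alice' OR '1'='1"

# Unsafe: instructions and data share one channel (string interpolation);
# the embedded quote breaks out of the literal and matches every row.
# conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Safe: the driver sends the query text and the bind value separately,
# so the input can only ever be treated as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- nobody is literally named "alice' OR '1'='1"
```

That separation is structural, not heuristic, which is exactly the property current LLM prompts lack.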
People don’t always do security well, yes, but they don’t always put out their campfires either. That doesn’t mean we’re unsure whether putting out a campfire prevents it from burning the forest down. We know how to prevent this stuff, fully, in most non-AI computation.
> Repeat that over to yourself again, slowly.
Try using less snark.
And if you have a fundamental breakthrough in AI that gets around this, and demonstrates how "careless" AI researchers have been in overlooking it, then please share.
My point is that precisely because it is not solved, using AI tools is a careless choice in situations that benefit from non-AI systems which can distinguish instructions from data, behave deterministically, and so on.
It's true: when engineers fail at this, it's called a mistake, and mistakes have consequences, unfortunately. If you want to avoid responsibility for mistakes, then LLMs are the way to go.
Well, this is what happens when a new industry reinvents standards poorly and ignores security best practices just to rush out "AI products" for the sake of it.
We have already seen how (flawed) standards like MCP were exploited right from the start, and how developers tried to "secure" them with somewhat "better prompting", which is just laughable. The worst part is that almost no one in the AI industry questioned the security ramifications of MCP servers having direct access to databases, which is a disaster waiting to happen.
Just because you can doesn't mean you should, and we are seeing hundreds of AI products get breached because of this carelessness about security, even before considering whether the product was "vibe coded" or not.
Uhhh, no, we actually don't. Not when it comes to people, anyway. The industry spends countless millions on training that increasingly seems useless.
We've even had extremely competent and highly trained people fall for basic phishing (some in just the last few weeks). There was even a highly credentialed security researcher who fell for one on YouTube.
Also, there’s a difference between “know how to be secure” and “actually practice what is known”. You’re right that non-AI security often fails at the latter, but the industry has a pretty good grasp on how to secure computer systems.
AI systems do not have a practical answer to “how to be secure” yet.
Software engineers figured out these things decades ago. As a field, we already know how to do security. It's just difficult and incompatible with the careless mindset of AI products.