The humans missed the security issues you point to as well, though. I don't think that's on the AI; ultimately, we humans are accountable for the work.
If you have a developer who can code and isn't just vibe coding blindly, that's an extra layer of security. Sure, it isn't dramatically more secure, but anyone who codes has at least some sense not to write the kind of wildly insecure code an LLM would, regardless of whether it was tricked by the things mentioned in the article.
> Cloudflare apparently did something similar recently.
Sure, LLMs don't magically remove your ability to audit code. But the way they're currently being used, do they make the average dev more or less likely to introduce vulnerabilities?
By the way, a cursory look [0] revealed a number of security issues with that Cloudflare OAuth library. None of them directly exploitable, but not something you want in your most security-critical code either.
[0] https://neilmadden.blog/2025/06/06/a-look-at-cloudflares-ai-...