In the same way we build "muscle memory" to delegate simple tasks to autonomous processes, a superintelligence might be able to dynamically delegate to human-level (or greater) sub-intelligences that vigilantly watch everything it needs watched.
One of the most intuitive pathways to ASI is that AGI eventually gets incredibly good at improving AGI. A system like that would be able to craft and direct stripped-down AI subsystems.
On average, today's systems are much more secure than those from 2005: the known vulns of that era got patched, and methodologies improved enough that they weren't replaced 1:1 by new ones.
This is what lets defenders keep up with attackers over the long term. My concern is that AGI is the kind of thing that may leave us with no "long term".
The defender needs to get everything right; the attacker needs to get only one thing right.
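A crude way to see the asymmetry (my own simplification, assuming N independent defenses that each fail with probability p):

P(breach) = 1 - (1 - p)^N

Even with a 99% success rate per defense (p = 0.01), a system with N = 300 things to get right gives P(breach) ≈ 1 - 0.99^300 ≈ 0.95. The attacker's odds climb toward certainty as the attack surface grows, even when each individual defense is very good.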