This analogy assumes there exists some relatively straightforward, secure, practical defense against superintelligence that I am arguing against, like protecting a website against SQL injection. I am not arguing against such a defense if one exists - by all means, research it and present proposals. My comment is about the plausibility of AI domination. I don't think it is that plausible, so I have different views from others on how important it is that we restrict the development of AI.
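For concreteness, this is the flavor of "straightforward, practical defense" the SQL injection analogy points at - a minimal sketch using Python's built-in sqlite3 module, with an illustrative table and input of my own choosing:

```python
import sqlite3

# Parameterized queries neutralize SQL injection because user input is
# passed as data and never spliced into the SQL text itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "alice@example.com"))

user_input = "alice'; DROP TABLE users; --"  # hostile input stays inert
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] - the injection attempt matches nothing and executes nothing
```

The point of the analogy is that a defense of this kind is simple, well understood, and provably adequate against its threat; whether anything comparable exists against a superintelligent adversary is exactly what's in question.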
If you have an intelligent adversary and the stakes of their success are high, it is the defender's job to prove the system secure. Systems don't start off secure and become vulnerable - they start off vulnerable and remain so until proven secure.
So yes, it's okay to say "the things we're doing to 'contain' AIs are almost certainly inadequate" until shown otherwise.