As for the bit about how limited it is, do you remember the Rowhammer attack? https://en.m.wikipedia.org/wiki/Row_hammer
This is exactly the kind of thing I’d worry about a superintelligence being able to discover about the hardware it’s running on. If we’re dealing with something vastly more intelligent than us, then I don’t think we’re capable of building a cell that can hold it.
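For anyone unfamiliar with how Rowhammer works: repeatedly activating the same DRAM rows leaks charge from physically adjacent rows, and past some threshold a bit flips in memory you never wrote to. Here’s a toy model of that disturbance effect (the row layout, thresholds, and numbers are all invented for illustration; real hardware is messier):

```python
# Toy model of the Rowhammer disturbance effect: repeatedly
# "activating" a DRAM row leaks a little charge from physically
# adjacent rows, and past a threshold a victim bit flips.
# All constants here are made up for the sake of the example.

DISTURB_PER_ACTIVATION = 1      # disturbance added to neighbors per activation
FLIP_THRESHOLD = 50_000         # accumulated disturbance that flips a bit

class ToyDRAM:
    def __init__(self, num_rows):
        self.bits = [1] * num_rows          # one "bit" per row, all set to 1
        self.disturbance = [0] * num_rows   # accumulated disturbance per row

    def activate(self, row):
        """Read a row; physically adjacent rows accumulate disturbance."""
        for neighbor in (row - 1, row + 1):
            if 0 <= neighbor < len(self.bits):
                self.disturbance[neighbor] += DISTURB_PER_ACTIVATION
                if self.disturbance[neighbor] >= FLIP_THRESHOLD:
                    self.bits[neighbor] = 0   # flip in a row we never wrote

def hammer(dram, row_a, row_b, iterations):
    # The classic double-sided pattern: hammer the two rows
    # surrounding a victim row, over and over.
    for _ in range(iterations):
        dram.activate(row_a)
        dram.activate(row_b)

dram = ToyDRAM(8)
hammer(dram, row_a=2, row_b=4, iterations=30_000)
print(dram.bits[3])   # victim row between the hammered rows: prints 0
```

The unsettling part is exactly what the toy model shows: the "attack" is nothing but ordinary memory reads, yet it changes data the attacker has no permission to touch.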
I think you also have to consider that AI with superpowers is not going to materialize overnight. If superintelligent AI is on the horizon, the first such AI will be comparable to very capable humans, and very capable humans cannot talk their way into nuclear launch codes, or out of decades-long prison sentences, at will. Energy costs will still be tremendous, and just keeping the system running will require enormous levels of human cooperation. The world will change a lot in that kind of scenario, and I don't know how reasonable it is to claim much beyond "there are potential risks" about a world so different from the one we know.
Is it possible that search ends up doing as much for persuasion as it did for chess, that superintelligent AI happens relatively soon, and that energy costs aren't so prohibitive that escape becomes a realistic scenario? I suppose. Is any of that obvious, or even likely? I wouldn't say so.
In terms of crossing an air gap, that really depends. For example, are you aware that researchers can pretty reliably figure out what someone typed just by the sound of the keys being pressed?
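The idea behind those acoustic attacks is that each key has a slightly different sound signature, so you can classify keystrokes from audio alone. Here’s a minimal sketch of the approach using synthetic audio in place of real recordings (the frequencies, sample rate, and nearest-centroid classifier are stand-ins I picked for illustration; real attacks use messier features and better models):

```python
import numpy as np

SAMPLE_RATE = 8_000
rng = np.random.default_rng(0)

def key_click(freq_hz, n=256):
    """Synthetic stand-in for a recorded keystroke: a short tone burst
    at a key-specific frequency, plus a little noise."""
    t = np.arange(n) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t) + 0.05 * rng.standard_normal(n)

def features(click):
    # Spectral magnitude is the classic feature for this attack:
    # different keys excite slightly different resonances.
    return np.abs(np.fft.rfft(click))

# "Train": record a few labeled presses per key, average their spectra.
key_freqs = {"a": 900.0, "s": 1400.0, "d": 2100.0}
centroids = {
    key: np.mean([features(key_click(f)) for _ in range(5)], axis=0)
    for key, f in key_freqs.items()
}

def classify(click):
    # Nearest-centroid classification in spectral-feature space.
    feat = features(click)
    return min(centroids, key=lambda k: np.linalg.norm(feat - centroids[k]))

# "Attack": recover keystrokes from sound alone.
recovered = [classify(key_click(key_freqs[k])) for k in ["d", "a", "s", "a"]]
print("".join(recovered))   # prints "dasa"
```

In the published work the recordings come from real keyboards and the classifiers are fancier, but the pipeline is the same shape: audio in, spectral features out, typed text recovered.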
Or how about the team that developed a code analysis tool to detect errors, ran TimSort through it, and had the tool report that certain pathological inputs would make TimSort crash? The researchers assumed their tool was wrong, because TimSort is so widely used (it’s the default sort algorithm in Python and on Android, for example). But they constructed such an input to see what would happen, and sure enough, the sort really did blow up. No one had realized the bug had been there the whole time.
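For context, the bug the tool found was in TimSort’s run-stack invariant: the lengths of pending runs are supposed to satisfy a size condition at every depth of the stack, but the original merge-collapse logic only re-checked it near the top after a merge, so violations could survive deeper down and eventually overflow the fixed-size stack. Here’s a sketch of the invariant itself (the function name and example run lengths are mine):

```python
def timsort_stack_invariant_ok(run_lens):
    """Check the invariant TimSort's merge logic is supposed to maintain
    over its stack of pending run lengths:
        run[i-1] > run[i]                  (strictly decreasing)
        run[i-2] > run[i-1] + run[i]       (Fibonacci-like growth)
    The 2015 bug: the original merge_collapse only re-established this
    near the top of the stack, so a violation could hide deeper down."""
    for i in range(1, len(run_lens)):
        if run_lens[i - 1] <= run_lens[i]:
            return False
    for i in range(2, len(run_lens)):
        if run_lens[i - 2] <= run_lens[i - 1] + run_lens[i]:
            return False
    return True

print(timsort_stack_invariant_ok([120, 80, 25, 20]))       # prints True
print(timsort_stack_invariant_ok([120, 80, 25, 20, 30]))   # prints False
```

The invariant is what bounds how deep the stack can get; once inputs could sneak a violation past the check, the bound no longer held, and the fixed-size stack could overflow.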
Or the various image codec bugs over the years that have allowed a device to be compromised just by viewing an image?
There are some weird bugs out there. Are we certain there’s no way a computer could manipulate the timings or voltages happening inside it to act as a makeshift WiFi antenna, or something like that? We’ve found some weird shit over the years! And a superintelligence that’s vastly smarter than us is far more likely to find the next instance than we are.
Basically, no, I don’t trust the air gap with a sufficiently advanced superintelligence. There are things we don’t know that we don’t know, and a superintelligence would spot them long before we would. There are probably a hundred more Rowhammer attacks out there waiting to be discovered. Are we sure none of them could exchange data with a nearby device? I’m not.
AI that's as good at persuasion as a persuasive human is clearly impactful. But I certainly don't see it as self-evident that you can just keep drawing the line out until you reach a 200-IQ AI so easily able to manipulate its environment that it's not even worth elaborating how, exactly, a chatbot is supposed to manipulate the world through its extremely limited interfaces with the outside.