I’m also wondering if the opposite will be true: that people might claim something is AI-generated to discredit it?
I’m more bullish on cryptographic receipts than on AI detectors. Capture signing (C2PA) plus an identity bind could give verifiable origin. The hard parts, in my view, are adoption and platform plumbing.
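To make the "cryptographic receipt" idea concrete, here's a toy stand-in in Python. Real C2PA uses X.509 certificates and COSE public-key signatures embedded in a manifest, not a shared-secret HMAC; this sketch (the `DEVICE_KEY` and function names are hypothetical) only illustrates the tamper-evident-receipt property: any modification to the bytes breaks verification.

```python
import hashlib
import hmac

# Hypothetical secret provisioned into the capture device.
# (Real C2PA binds to a per-device certificate instead.)
DEVICE_KEY = b"secret-key-burned-into-camera"

def sign_capture(image_bytes: bytes) -> str:
    """Produce a receipt for the exact captured bytes."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, receipt: str) -> bool:
    """Receipt only verifies if the bytes are unmodified."""
    return hmac.compare_digest(sign_capture(image_bytes), receipt)

photo = b"\x89PNG...raw pixels..."
receipt = sign_capture(photo)

print(verify_capture(photo, receipt))         # untouched image verifies
print(verify_capture(photo + b"x", receipt))  # any edit fails verification
```

The adoption problem the parent mentions is visible even here: the scheme only helps once verifiers agree on who holds the keys and platforms preserve the receipt through re-encoding.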
If we have a trustworthy way to verify proof of human-made content, then anything missing those creds would be a red flag.
https://iptc.org/news/googles-pixel-10-phone-supports-c2pa-u...
SynthID claims to be designed to persist through several kinds of modification. I suspect the attacks you mention will happen, but by those with deep pockets, like a nation-state actor with access to models that don't produce watermarks.
But these new amazing AI image generators let you just say "It wasn't me, it's an AI fake." Long term, they will seriously devalue blackmail material.
I read a scifi novel where they invented a wormhole that only light could pass through, but it could be used as a camera that could go anywhere (and eventually anytime), and there was absolutely no way to block it. So some people adapted to this fact by not wearing clothes anymore.
Don't know why you're being downvoted. That is the logical conclusion.
Although, there's also a chance that those "blackmail gangs" never materialize. After all, you could already ten years ago pay cheap labor to create reasonably good fake images using Photoshop.
And further, I can imagine a person who actually has such footage of themselves being threatened with its release, then using the former narrative as a cover story if it were released. Is there anything preventing AI-generated images, video, etc. from always being detectable by software that can intuit whether something is AI? If random noise is added, would the "is AI" signal persist as well as the indication to a human that the footage seems real?
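On the noise question: statistical watermarks are often built so the detector correlates the signal against a secret pseudorandom pattern, and since added noise is uncorrelated with that pattern, moderate noise mostly averages out. Here's a minimal spread-spectrum sketch (the embedding strength and sizes are made up for illustration; this is not SynthID's actual scheme):

```python
import random

N = 10_000  # number of "pixel" samples

# Detector-side secret: a pseudorandom +/-1 pattern.
key = random.Random(42)
pattern = [key.choice((-1.0, 1.0)) for _ in range(N)]

rng = random.Random(0)
clean = [rng.gauss(0.0, 1.0) for _ in range(N)]            # unwatermarked content
marked = [c + 0.2 * p for c, p in zip(clean, pattern)]     # faint embedded mark
noisy = [m + rng.gauss(0.0, 1.0) for m in marked]          # attacker adds noise

def correlate(signal):
    # Average agreement with the secret pattern:
    # ~0.2 if the mark is present, ~0 otherwise; noise averages out.
    return sum(s * p for s, p in zip(signal, pattern)) / N

print(f"clean: {correlate(clean):.3f}")   # near zero
print(f"noisy: {correlate(noisy):.3f}")   # still near 0.2 despite the noise
```

The flip side is that heavier transformations (cropping, rescaling, re-generation) shrink or misalign the correlation, which is why the deep-pockets attacks mentioned upthread remain plausible.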