No, but model training technology is out in the open, so it will continue to be possible to train models and build toolchains that simply don't incorporate watermarking at all, which is exactly what any motivated actor seeking to mislead will do. The main thing watermarking will accomplish is training people to treat the absence of a watermark as a sign of authenticity, which makes fakes from motivated bad actors more effective, not less.
Always has been so far. You add noise until the signal gets swamped. To remain imperceptible the watermark has to be a tiny signal, so it's easy to swamp.
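For intuition, here's a toy sketch (plain numpy, and a deliberately fragile LSB-style mark rather than any real deployed scheme) of how little noise it takes to erase a tiny embedded signal:

```python
# Toy example: the payload lives in the least significant bit of each pixel.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
payload = rng.integers(0, 2, size=image.shape, dtype=np.uint8)

watermarked = (image & 0xFE) | payload         # overwrite the LSB plane

def bit_agreement(img):
    """Fraction of LSBs that still match the embedded payload."""
    return float(np.mean((img & 1) == payload))

print(bit_agreement(watermarked))              # 1.0: payload fully intact

# Mild Gaussian noise, roughly 2 gray levels out of 255, is enough.
noise = rng.normal(0, 2, size=image.shape)
noisy = np.clip(np.rint(watermarked + noise), 0, 255).astype(np.uint8)
print(bit_agreement(noisy))                    # ~0.5: no better than chance
```

Real schemes spread the signal across many pixels or frequency bands to resist exactly this, but the tradeoff doesn't go away: the more robust you make the mark, the less imperceptible it is.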
Removal may be even easier if you have an oracle on your end that says "yes, this image does / does not have the watermark," which could be the case for some proposed implementations of an AI watermark. (Often the use case for digital watermarks assumes that the watermarker keeps the detection tool secret; this lets them find, e.g., people who leak early screenings of movies.)
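Roughly what that oracle buys an attacker, as a sketch (here `has_watermark` is a hypothetical detector callable standing in for whatever public check exists, and additive noise stands in for any cheap perturbation):

```python
import numpy as np

def strip_watermark(image, has_watermark, max_strength=32.0):
    """Escalate a cheap perturbation only until the oracle stops detecting."""
    rng = np.random.default_rng(0)
    strength = 0.5
    while strength <= max_strength:
        candidate = np.clip(
            image + rng.normal(0, strength, size=image.shape), 0, 255
        )
        if not has_watermark(candidate):
            return candidate, strength     # smallest tried strength that defeats it
        strength *= 2                      # otherwise escalate and retry
    return None, None                      # detector survived everything we tried
```

A real attacker would use smarter perturbations (crops, re-encodes, slight rescales) and binary-search the strength, but the point is the same: the oracle tells them exactly when to stop, so they never degrade the image more than necessary.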
Plus, any service that's good at reverse-image search (like Google) can basically apply that same capability to its own outputs to determine whether it generated a given image.
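The shape of that, as a sketch (a toy average-hash stands in here for a real reverse-image-search index, and the function names are made up for illustration):

```python
import numpy as np

def average_hash(image, size=8):
    """Toy perceptual hash: downsample to size x size, threshold at the mean."""
    h, w = image.shape
    small = image[: h - h % size, : w - w % size].astype(float)
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

generated_hashes = []                      # provider side: index everything generated

def record_generated(image):
    generated_hashes.append(average_hash(image))

def probably_ours(image, max_bit_diff=5):
    """Later: did we generate something that looks like this image?"""
    query = average_hash(image)
    return any(np.sum(query != stored) <= max_bit_diff for stored in generated_hashes)
```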
There will always be a way to defeat anything, but I don't see why this won't work for like 90% of cases.