Generally it's probably wise to be skeptical of anything that appears to get around the diffraction limit.
You’re right that it might fail on noise with detail fine enough to violate the assumptions baked into the NN's training set. But that’s not a common application for cameras, and traditional cameras have their own failure modes.
Not saying we shouldn’t be skeptical, just that there is a plausible mechanism here.
Multilevel fractal noise specifically would give an indication of how fine a scale the system can actually resolve, since it has structure at every spatial frequency.
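A minimal sketch of how one might generate such a test target, assuming a 1/f^beta power-law spectrum with random phases (a common way to synthesize multi-scale fractal noise); the function name and parameters here are illustrative, not from any camera test standard:

```python
import numpy as np

def fractal_noise(n=256, beta=2.0, seed=0):
    """Generate an n x n 1/f^beta fractal-noise image.

    A power-law spectrum has structure at every spatial scale, so
    comparing the recovered image to the ground truth band by band
    reveals the finest scale a pipeline actually resolves (rather
    than hallucinates).
    """
    rng = np.random.default_rng(seed)
    # Build a power-law amplitude spectrum with random phases.
    fx = np.fft.fftfreq(n)
    fy = np.fft.fftfreq(n)
    f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    f[0, 0] = 1.0                      # avoid divide-by-zero at DC
    amplitude = f ** (-beta / 2)
    amplitude[0, 0] = 0.0              # zero out DC -> zero-mean image
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    spectrum = amplitude * np.exp(1j * phase)
    img = np.fft.ifft2(spectrum).real
    # Normalize to [0, 1] so it can be printed/displayed as a target.
    img = (img - img.min()) / (img.max() - img.min())
    return img

target = fractal_noise()
print(target.shape)  # (256, 256)
```

Lowering `beta` pushes more energy into fine scales, so sweeping it gives a rough curve of where reconstruction quality falls off.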
I agree that measuring against such a test would be interesting, but I'm not sure it's possible or desirable for any camera tech to produce an objectively "true" pixel-by-pixel value. This new approach may fail or cheat in different ways than traditional optics do, which is interesting but not disqualifying to me.
> Then, a physics-based neural network was used to process the images captured by the meta-optics camera. Because the neural network was trained on metasurface physics, it can remove aberrations produced by the camera.