No idea if this can happen with what modern smartphone cameras do to photos. If "AI" is involved then I would expect such issues to be possible, because generative models are essentially probabilistic at their core, just like how LLMs hallucinate stuff all the time. Other "enhancement" approaches might not produce issues like this.
https://www.reddit.com/r/iphone/comments/1m5zsj7/ai_photo_ga...
https://www.reddit.com/r/iphone/comments/1jbcl1l/iphone_16_p...
https://www.reddit.com/r/iphone/comments/17bxcm8/iphone_15_n...
Yes it is. I've seen that happen in real-time with the built-in camera viewfinder (not even taking a photo) on my mid-range Samsung phone, when I zoomed in on a sign.
It only changed one letter, and it was more like a strange optical warping from one letter to a different one when I pointed the camera at a particular sign in a shop, but it was very surprising to see.
Not that that helps anyone who's affected, but that situation is more like having a hypothetical .aip file, an "AI Photo" storage format that invents details when you zoom in, rather than a sensor (pipeline) issue.
I had, probably due to phrasing ambiguity in an old TheRegister article on the matter, mistakenly remembered the temporary storage between scan and print in copy mode as also having been affected.
As there were many situations where one would scan a document and destroy the original once the offsite backup had run, while making a physical copy would/should usually not entail destroying the original, most of the overall damage/impact would have been due to scanning anyway, not copying.
See https://www.runpulse.com/blog/why-llms-suck-at-ocr and its related HN discussion https://www.hackerneue.com/item?id=42966958
Pre-LLM approaches handle unintelligible source data differently: you'll more commonly see nonsense output for the unintelligible bits, and in some cases the tool can recognize its low confidence and return an error or some other indicator of a possible miss.
IMO, that's a feature. The LLM approach makes up something that looks right but may not actually match the source data. These errors are far harder to detect and more likely to make it past human review.
The LLM approach does mean that you can often get a more "complete" output from a low quality data source vs pre-LLM approaches. And sometimes it might even be correct! But it will get it wrong other times.
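To make the "flag it rather than guess" behaviour concrete, here is a rough sketch of what that looks like with a classic OCR engine. It assumes Tesseract via pytesseract (the 60% threshold, the [low-conf: ...] marker, and the file name are just illustrative, not anyone's standard); Tesseract reports a per-word confidence, so low-confidence spans can be surfaced instead of silently replaced with something plausible:

    # Rough sketch: confidence-aware OCR with Tesseract via pytesseract.
    # Assumes Tesseract is installed locally; threshold and marker are arbitrary.
    import pytesseract
    from pytesseract import Output
    from PIL import Image

    def ocr_with_flags(path, min_conf=60):
        data = pytesseract.image_to_data(Image.open(path), output_type=Output.DICT)
        words = []
        for text, conf in zip(data["text"], data["conf"]):
            if not text.strip():
                continue  # skip empty boxes
            if int(float(conf)) < min_conf:  # conf is -1 for non-text boxes
                words.append(f"[low-conf: {text}]")  # flag it, don't guess
            else:
                words.append(text)
        return " ".join(words)

    print(ocr_with_flags("scan.png"))

The output is uglier than what an LLM gives you, but the uncertainty stays visible to a human reviewer instead of being papered over.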
Another failure condition I've experienced with LLM-based voice transcription that I didn't have pre-LLM: running down the wrong fork in the road. Sometimes the LLM approaches will get a word or two wrong, typically words with similar phonetics or multiple meanings, that kind of thing. It may then continue down the path this mistaken context has created, outputting additional words that don't align with the source data at all.