I'm scanning my parents' photos at the moment.
Both Silverfast and Nikon Scan methods look great when zoomed out. I never tried Vuescan's infrared option; I just felt the positive colors it produced looked wrong/"dead".
I think we will eventually have AI-based tools that just do what a skilled human user would do in Photoshop, via tool use. That would make sense to me. But having AI generate a new image with imagined details just seems like a waste of time.
So yeah, if I'm gonna then upscale them or "repair" them using generative AI, then it's a bit pointless to take them in the first place.
If you leave it to the imagination, it's likely they'll each imagine something different.
In my eyes, one specific example they show ("Prompt: Restore photo") deeply AI-ifies the woman's face. Sure, it'll improve over time, of course.
This is the first image I tried:
https://i.imgur.com/MXgthty.jpeg (before)
https://i.imgur.com/Y5lGcnx.png (after)
Sure, I could manually correct that quite easily and would do a better job, but that image is not important to us; it would just be nicer to have than not.
I'll probably wait for the next version of this model before committing to doing it, but it's exciting that we're almost there.
On the free tier, I'd essentially assume that to be the default behavior. In reality they might simply use your feedback and your text prompts instead. I certainly know that free Google/OpenAI LLM usage entails prompts being used for research.
Edit: there's a decent chance it would NOT directly integrate grandma into its training, but I would try hard to use an offline model for any privacy concerns.
I've been waiting for that, too. But I'm also not interested in feeding my entire extended family's visual history into Google for it to monetize. It's wrong for me to violate their privacy that way, and it's also creepy to me.
Am I correct to worry that any pictures I send into this system will be used for "training"? Is my concern overblown, or should I keep waiting for AI on local hardware to get better?