
notsylver
I digitised our family photos, but a lot of them have damage (shifted colours, spills, fingerprints on film, spots) that is difficult to correct across so many images. I've been waiting for image gen to catch up enough to be able to repair them all in bulk without changing details, especially faces. This looks very good at restoring images without altering details or adding them where they are missing, so it might finally be time.

Almondsetat
All of the defects you have listed can be automatically fixed by using a film scanner with ICE and software like VueScan that automatically performs the scan and the restoration. Feeding hundreds (thousands?) of photos to an experimental proprietary cloud AI that will give you back subpar compressed pictures with who knows how many strange artifacts seems unnecessary.
notsylver OP
I scanned everything into 48-bit RAW and treat those as the originals, including the IR scan for ICE and a lower-quality scan of the metadata. The problem is sharing them - important images I manually repair and export as JPEG, which is time-consuming (15-30 minutes per image, and there are about 14,000 total), so if it's "generic family gathering picture #8228" I would rather let AI repair it, assuming it doesn't butcher faces and other important details. Until then I made a script that exports the raws with basic cropping and colour correction, but it can't fix the colours, which is the biggest issue.
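
A minimal sketch of what a batch export pass like that might look like, assuming the scans are 16-bit RGB TIFFs and using a simple per-channel levels stretch as a crude colour fix; the folder names, percentiles, and the tifffile/Pillow combination are illustrative placeholders, not the actual script described above:

```python
# Rough sketch: batch-export 16-bit TIFF scans to 8-bit JPEG with a crude
# per-channel auto-levels stretch. Paths and percentiles are placeholders.
from pathlib import Path

import numpy as np
import tifffile
from PIL import Image

SRC = Path("scans_raw")    # hypothetical folder of 16-bit RGB TIFF scans
DST = Path("scans_jpeg")   # hypothetical output folder
DST.mkdir(exist_ok=True)

def auto_levels(channel: np.ndarray, lo_pct: float = 0.5, hi_pct: float = 99.5) -> np.ndarray:
    """Map the lo/hi percentiles of one channel to 0..255 (very rough colour correction)."""
    lo, hi = np.percentile(channel, [lo_pct, hi_pct])
    scaled = (channel.astype(np.float32) - lo) / max(hi - lo, 1.0)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)

for tif in sorted(SRC.glob("*.tif")):
    img = tifffile.imread(tif)                                    # H x W x 3, uint16
    rgb8 = np.dstack([auto_levels(img[..., c]) for c in range(3)])
    Image.fromarray(rgb8).save(DST / (tif.stem + ".jpg"), quality=92)
```
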
wingworks
How did you get the 48-bit and ICE data separately? Did you double-scan everything?

I'm scanning my parents' photos at the moment.

exe34
this reminds me of a joke we used to tell as kids when there was a new Photoshop version coming out - "this one will remove the cow from the picture and we'll finally see what great-grandpa looked like!"
wingworks
VueScan is terrible. SilverFast has better defaults. But nothing beats the original Nikon Scan software when using ICE. It does a great job of removing dust, fingerprints etc., even when you zoom in, versus what iSRD does in SilverFast. If you zoom in and compare the two, iSRD kind of smudges/blurs the infrared defects, whereas Nikon Scan clones from the surrounding parts, which usually looks very good when zoomed in.

Both the SilverFast and Nikon Scan methods look great when zoomed out. I never tried VueScan's infrared option; I just felt the positive colors it produced looked wrong/"dead".

bjackman
I don't really understand the point of this use case. Like, can't you also imagine what the photos might look like without the damage? Same with AI upscaling in phone cameras... if I want a hypothetical idea of what something in the distance might look like, I can just... imagine it?

I think we will eventually have AI-based tools that are just doing what a skilled human user would do in Photoshop, via tool use. That would make sense to me. But just having AI generate a new image with imagined details seems like a waste of time.

bibabaloo
Why take photos at all if you can just imagine them?
bjackman
Well, that goes to the heart of my point. I take pictures because I value how literal they are. I enjoy the fact that they directly capture the arrangement of light in the moment I took them.

So yeah, if I'm gonna then upscale them or "repair" them using generative AI, then it's a bit pointless to take them in the first place.

gretch
If you want 2 people to look at the same photo and share the same experience, you have to fix the photo.

If you leave it to the imagination, it's likely they each imagine something different.

w4yai
Not everyone has a great imagination.
Filligree
Read up on aphantasia.

Do you happen to know of any software to repair/improve video files? I'm in the process of digitizing a couple of Video 2000 and VHS cassettes of childhood memories for my mom, who is starting to suffer from dementia. I have a pretty streamlined setup for digitizing the videos, but I'd like to improve the quality a bit.
nycdatasci
I've used products from topazlabs.com for the same problem and have generally been happy with them.
qingcharles
Topaz is probably the SOTA in video restoration, but it can definitely fuck shit up. Use carefully and sparingly and check all the output for weird AI glitches.
notsylver OP
I didn't do any videos, just pictures, but considering how little I found for pictures, I doubt you'll find much.
actionfromafar
VHSdecode if you want a rabbit hole.
Barbing
Hope it works well for you!

In my eyes, one specific example they show (“Prompt: Restore photo”) deeply AI-ifies the woman’s face. Sure, it’ll improve over time, of course.

notsylver OP
I tried a dozen or so images. For some it definitely failed (altering details, leaving damage behind, needing a second attempt to get a better result), but on others it did great. With a human in the loop approving the AI version or marking it for manual correction, I think it would save a lot of time (a rough sketch of that kind of review pass is below).

This is the first image I tried:

https://i.imgur.com/MXgthty.jpeg (before)

https://i.imgur.com/Y5lGcnx.png (after)

Sure, I could manually correct that quite easily and would do a better job, but that image is not important to us, it would just be nicer to have it than not.

I'll probably wait for the next version of this model before committing to doing it, but it's exciting that we're almost there.
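
A rough sketch of that human-in-the-loop review pass, assuming the originals and the AI-restored versions sit in two parallel folders; the layout, filenames, and keypress scheme are invented for illustration:

```python
# Sketch: show original and AI-restored versions side by side, then record
# approve / redo-by-hand decisions to a CSV. Folder layout is an assumption.
import csv
from pathlib import Path

from PIL import Image

ORIGINALS = Path("originals")   # hypothetical folder of exported scans
RESTORED = Path("restored")     # hypothetical folder of AI-restored versions

with open("review.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["file", "decision"])
    for orig_path in sorted(ORIGINALS.glob("*.jpg")):
        before = Image.open(orig_path)
        after = Image.open(RESTORED / orig_path.name)
        pair = Image.new("RGB", (before.width + after.width, max(before.height, after.height)))
        pair.paste(before, (0, 0))
        pair.paste(after, (before.width, 0))
        pair.show()  # opens the side-by-side comparison in the default viewer
        choice = input(f"{orig_path.name}: [a]pprove AI version / [m]anual fix? ").strip().lower()
        writer.writerow([orig_path.name, "approve" if choice == "a" else "manual"])
```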

qingcharles
Being pragmatic, the after is a good restoration. There is nothing really lost (except some sharpness that could be put back). The main failing of AI is on faces, because our brains are so hardwired to notice any changes or weirdness. This is the sort of image that is perfect for AI because the subject's face is already occluded.
indigodaddy
Another question/concern for me: if I restore an old picture of my Gramma, will my Gramma (or a Gramma that looks strikingly similar) ever pop up on other people's "give me a random Gramma" prompts?
Barbing
It might show her for prompts of “show me the world’s best grandma” :)

On the free tier, I’d essentially believe that to be the default behavior. In reality they might simply use your feedback and your text prompts instead. I certainly know that free Google/OpenAI LLM usage entails prompts being used for research.

Edit: decent chance it would NOT directly integrate grandma into its training, but I would try hard to use an offline model for any privacy concerns.

danielbln
That time already arrived a few months ago with Flux Kontext (https://bfl.ai/models/flux-kontext).
reaperducer
> I've been waiting for image gen to catch up enough to be able to repair them all in bulk without changing details, especially faces.

I've been waiting for that, too. But I'm also not interested in feeding my entire extended family's visual history into Google for it to monetize. It's wrong for me to violate their privacy that way, and it's also creepy to me.

Am I correct to worry that any pictures I send into this system will be used for "training?" Is my concern overblown, or should I keep waiting for AI on local hardware to get better?

Zopieux
You're looking for Flux Kontext, a model you can run yourself offline on a high end consumer GPU. Performance and accuracy are okay, not groundbreaking, but probably enough for many needs.
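
For reference, a minimal sketch of what running it locally could look like, assuming the Hugging Face diffusers FluxKontextPipeline and the gated FLUX.1-Kontext-dev weights (the full bf16 weights are heavy, so quantization or CPU offloading is usually needed on consumer cards); check the model card for the current API before relying on this:

```python
# Sketch: local image-to-image restoration with FLUX.1 Kontext via diffusers.
# The pipeline class, model ID, and parameters reflect my understanding of the
# current diffusers API; verify against the official docs before use.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

damaged = load_image("scan_0042.jpg")  # hypothetical input scan
restored = pipe(
    image=damaged,
    prompt="Restore this photo: remove dust, scratches and colour cast; keep faces unchanged",
    guidance_scale=2.5,
).images[0]
restored.save("scan_0042_restored.png")
```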
