Gaussian Splatting builds an "approximation" of a 3D scene (captured from a video) out of hundreds of thousands (or even millions) of tiny gaussian clouds. Each gaussian may be as small as a couple of pixels, and all of these 3D gaussians get projected onto the 2D image plane (fast on the GPU) to render a single image (i.e. a single pose of the video camera). Because the gaussians live in 3D, they explicitly represent the scene geometry, e.g. real physical surfaces, and an approximation of the physical textures. When a camera blurs an image, a physical surface or object gets smeared across many pixels; but if you can reconstruct the 3D scene accurately, you can re-project the 3D gaussians into 2D images that are no longer blurry. Another way to view the OP is that this technique is a tweak to last year's "sharp images only" Gaussian Splatting work so that it can handle blurry inputs.
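
To make that projection step concrete, here is a minimal NumPy sketch of how a single 3D gaussian (a mean plus a 3x3 covariance) gets "splatted" onto the 2D image plane through a pinhole camera. The helper name and all numeric values below are made up for illustration; a real renderer does this for millions of gaussians in parallel on the GPU and then alpha-blends them by depth:

    import numpy as np

    def project_gaussian(mean, cov, R, t, fx, fy, cx, cy):
        """Project a 3D gaussian to a 2D gaussian on the image plane."""
        # 1. World -> camera coordinates.
        x, y, z = R @ mean + t

        # 2. Pinhole projection of the mean to pixel coordinates.
        u = fx * x / z + cx
        v = fy * y / z + cy

        # 3. Jacobian of the projection: a local linear approximation used to
        #    push the 3x3 covariance through the (nonlinear) perspective divide.
        J = np.array([[fx / z, 0.0,    -fx * x / z**2],
                      [0.0,    fy / z, -fy * y / z**2]])

        # 4. 2D covariance of the splat's footprint on the image: J R Sigma R^T J^T.
        cov2d = J @ R @ cov @ R.T @ J.T
        return np.array([u, v]), cov2d

    # Toy example: a small isotropic gaussian 5 units in front of an identity camera.
    mean2d, cov2d = project_gaussian(
        mean=np.array([0.2, -0.1, 5.0]),
        cov=np.eye(3) * 0.01,
        R=np.eye(3), t=np.zeros(3),
        fx=800.0, fy=800.0, cx=320.0, cy=240.0)
    print(mean2d)  # pixel center of the splat
    print(cov2d)   # 2x2 covariance, i.e. the elliptical footprint in pixels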

The OP paper is cool but isn't alone; here is some concurrent work: https://github.com/SpectacularAI/3dgs-deblur

Also related, from a couple of years ago: using NeRF methods (another area of current 3D research) to denoise night images and recover HDR: https://bmild.github.io/rawnerf/ . NeRF, like Gaussian Splatting, seeks to reconstruct the scene in 3D, and RawNeRF adapts the approach to handle noisy images as well as large exposure variation.

As for Gaussian Splats vs. GenAI: GenAI models have usually been trained on millions of images, and that learned prior lets them impute or infer missing parts of the 3D scene or of the input images. Gaussian Splats (and NeRF), by contrast, lack those priors.

