If you are a photography enthusiast, you know the hassle of shooting in low light: despite hours of post-processing, photographs still end up with distracting noise. Now, Google promises an answer to such woes. Photographers can effectively see in the dark thanks to an innovative new technology from Google Research that employs artificial intelligence to reduce image noise in dim settings.
The best part of this AI denoising feature is that it causes minimal loss of image quality compared to existing tools.
This new tool, known as RawNeRF, is part of the MultiNeRF open source project. Google RawNeRF can specifically help photographers capture darker subjects. In a paper published on arXiv, Google researcher Ben Mildenhall explains that NeRF (Neural Radiance Fields) is a view synthesizer, a technology that combines a collection of images of a scene to reconstruct a precise 3D render.
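At the heart of NeRF-style view synthesis is volume rendering: a trained neural network maps each 3D point to a color and a density, and a pixel is produced by compositing those values along the camera ray. The sketch below illustrates just that compositing step; the `field` function, the toy red-sphere scene, and all names here are illustrative stand-ins, not the MultiNeRF API.

```python
import numpy as np

def render_ray(field, origin, direction, near=0.0, far=4.0, n_samples=64):
    """Alpha-composite colors along one camera ray, the volume-rendering
    step at the heart of NeRF. `field` stands in for the trained network:
    it maps 3D points to (rgb, density)."""
    t = np.linspace(near, far, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]
    rgb, sigma = field(pts)                      # (N, 3) colors, (N,) densities
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))
    alpha = 1.0 - np.exp(-sigma * delta)         # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans                      # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)  # final pixel color

# A toy scene: a red, constant-density sphere of radius 1 at the origin.
def toy_field(pts):
    inside = np.linalg.norm(pts, axis=-1) < 1.0
    rgb = np.where(inside[:, None], [1.0, 0.0, 0.0], 0.0)
    sigma = np.where(inside, 10.0, 0.0)
    return rgb, sigma

# A ray fired straight at the sphere should come back essentially red.
color = render_ray(toy_field, np.array([0.0, 0.0, -3.0]),
                   np.array([0.0, 0.0, 1.0]))
```

In a real NeRF, `toy_field` is replaced by an MLP optimized so that rays rendered this way reproduce the input photographs, which is what lets the trained model synthesize views from new camera positions.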
Like most view synthesis techniques, NeRF takes tone-mapped low dynamic range (LDR) photos as input. These images have passed through a lossy camera pipeline that distorts the simple noise distribution of the raw sensor data, smooths out detail, and clips highlights.
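The point about the lossy pipeline can be made concrete with a toy simulation. The sketch below uses a simplified noise model (zero-mean Gaussian read noise) and a minimal stand-in pipeline (clipping, gamma tone mapping, 8-bit quantization); none of this is the actual camera or RawNeRF pipeline, but it shows why raw-space values average cleanly while tone-mapped LDR values do not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw sensor signal in linear space: a dim scene value
# plus zero-mean Gaussian read noise (a simplified noise model).
scene = 0.02                           # true linear intensity (dim)
raw = scene + rng.normal(0.0, 0.01, size=100_000)

# A toy stand-in for the camera pipeline: clip to [0, 1], apply an
# sRGB-style gamma tone curve, then quantize to 8-bit LDR values.
ldr = np.clip(raw, 0.0, 1.0)
ldr = ldr ** (1.0 / 2.2)               # gamma tone mapping
ldr = np.round(ldr * 255.0)            # 8-bit quantization

# In raw space the noise is symmetric and zero-mean, so averaging
# many samples recovers the true intensity almost exactly.
raw_error = abs(raw.mean() - scene)

# After clipping, gamma, and quantization the noise is no longer
# zero-mean: averaging the LDR values and inverting the tone curve
# yields a biased estimate of the true intensity.
est = (ldr.mean() / 255.0) ** 2.2
ldr_error = abs(est - scene)
```

Because averaging in LDR space is biased, a model trained on processed JPEGs cannot fully exploit multiple noisy observations, while raw-space training, as the article describes, can.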
According to Mildenhall, NeRF was designed for daytime shooting, so it works best with well-lit photos and low noise levels. Nighttime and low-light scenes posed challenges: features were concealed in shadow, and noise became prominent when brightness was increased in post. Mildenhall and his team concluded that while denoising technologies can considerably reduce noise, they do so at the expense of image quality.
Mildenhall reveals that Google RawNeRF combines pictures taken from various camera viewpoints to jointly denoise and reconstruct a scene. Beyond denoising, it can therefore be used to change the camera position and view the picture from different perspectives. Scenes are reconstructed in a linear HDR color space, which makes it possible to adjust subtleties such as exposure, tone mapping, and focus after the fact.
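Why does reconstructing in linear HDR enable post-hoc exposure and tone-mapping control? Because the render preserves the full range of scene intensities, any display transform can be applied afterward. The minimal sketch below assumes a simple gamma tone curve and exposure expressed in stops; RawNeRF's actual rendering pipeline differs in detail.

```python
import numpy as np

def tonemap(linear_hdr, exposure_stops=0.0, gamma=2.2):
    """Apply a chosen exposure and a simple gamma tone curve to a
    linear HDR render. Illustrative only: a stand-in for the post-hoc
    control that a linear-space reconstruction allows."""
    scaled = linear_hdr * (2.0 ** exposure_stops)  # exposure in stops
    return np.clip(scaled, 0.0, 1.0) ** (1.0 / gamma)

# A toy "rendered" linear HDR patch; note the value above display range.
hdr = np.array([[0.01, 0.1],
                [1.0, 4.0]])

dark = tonemap(hdr, exposure_stops=0.0)    # bright highlight clips at 1.0
bright = tonemap(hdr, exposure_stops=2.0)  # +2 stops lifts the shadows
```

Rendering the same HDR patch at two exposures without re-capturing anything is exactly the kind of flexibility the article attributes to RawNeRF's linear-space reconstruction.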
In a video demonstration for NeRF in the Dark, first released in May 2022 but largely overlooked at the time, Mildenhall uses a smartphone photo of a candlelit table to demonstrate the capability of the new AI technology. Although modest post-processing and a brightness boost bring out more detail, the image contains a significant amount of sensor noise. Mildenhall shows that a cutting-edge deep denoiser leaves the image with unappealing distortions, whereas RawNeRF produces astounding results, especially considering the image quality and lack of imperfections. It performs so well because the AI is trained on raw image data rather than JPEGs that have already been processed. As a result, RawNeRF is able to merge pictures from various camera angles to jointly denoise and reconstruct the scene.
Though still at the research stage and not an official Google product (yet), RawNeRF provides an enticing glimpse of how AI could help creative people more accurately capture the reality around them.