Researchers from NVIDIA have developed a novel artificial intelligence (AI)-powered tool that turns 2D images into 3D scenes.
It works in the opposite direction from a conventional camera, which captures 2D images of the 3D real world.
NVIDIA showcased the unique model during its recently held Spring 2022 GPU Technology Conference (GTC).
According to NVIDIA, inverse rendering is a technique that uses artificial intelligence to mimic how light behaves in the real world, allowing researchers to reconstruct a 3D scene from a collection of 2D photographs taken from various angles.
NVIDIA researchers have developed an ultra-fast neural network training and rendering system that can perform inverse rendering in a matter of seconds. NVIDIA applied this approach to neural radiance fields (NeRF), a popular new technology, to produce what it calls Instant NeRF.
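At its core, a NeRF renders an image by sampling density and color along each camera ray and alpha-compositing those samples front to back. The sketch below illustrates that compositing step in NumPy; the function name and toy values are illustrative, not NVIDIA's API, and a real NeRF would obtain the densities and colors from a trained neural network rather than hard-coded arrays.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray, NeRF-style.

    densities: (N,) non-negative volume density at each sample
    colors:    (N, 3) RGB color at each sample
    deltas:    (N,) distance between adjacent samples
    """
    # opacity contributed by each segment of the ray
    alphas = 1.0 - np.exp(-densities * deltas)
    # transmittance: fraction of light reaching sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    # weighted sum of sample colors gives the pixel color
    return (weights[:, None] * colors).sum(axis=0)

# Toy ray: a faint blue sample in front of a nearly opaque red one,
# so the composited pixel should come out mostly red.
densities = np.array([0.5, 50.0])
colors = np.array([[0.0, 0.0, 1.0],   # blue, low density
                   [1.0, 0.0, 0.0]])  # red, high density
deltas = np.array([0.1, 0.1])
rgb = composite_ray(densities, colors, deltas)
```

Training inverts this process: the network's densities and colors are adjusted so that rays composited this way reproduce the input photographs.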
Vice President of Graphics Research at NVIDIA, David Luebke, said, “If traditional 3D representations like polygonal meshes are akin to vector images, NeRFs are like bitmap images: they densely capture the way light radiates from an object or within a scene.”
He further added that Instant NeRF could be as essential to 3D as digital cameras and JPEG compression were to 2D photography, vastly enhancing the speed, convenience, and reach of 3D capture and sharing.
The company claims that this new artificial intelligence-powered solution is the quickest NeRF technology to date, with speedups of up to 1,000x in some cases. After brief training on a few dozen still photographs, the model can render the final 3D scene in just a few seconds.
Apart from this model, multiple other technologies, including new GPUs, CPUs, and autonomous driving tech, were unveiled during the GTC event.