Meta AI is releasing Implicitron, a modular framework within the PyTorch3D library, to advance research on neural implicit 3D representations. Implicitron provides implementations of, and abstractions for, the components used to render 3D scenes.
Neural representations are advancing rapidly, and the growing number of approaches makes it unclear which method to choose. This research builds on a computer vision technique that seamlessly blends real and virtual objects in augmented reality. Since NeRF came into the picture, over 50 variants of the method have been released for synthesizing views of complex scenes, and the field is still in its infancy, with new variants surfacing frequently.
Implicitron makes it possible to evaluate combinations, variations, and modifications of these methods in a single standard codebase, without requiring 3D graphics expertise. Its modular architecture lets people use state-of-the-art methods out of the box while also extending NeRF with new trainable components. Meta has created composable versions of several generic neural reconstruction components to produce real-time photorealistic renderings.
Meta has also curated additional components for experimentation and extensibility, including a plug-in system that allows user-specified implementations and flexible configuration. Implicitron also ships with a training class that uses PyTorch Lightning for launching new experiments.
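To make the plug-in idea concrete, here is a minimal, self-contained sketch of how such a registry might work: user code registers an implementation under a name, and a configuration picks that implementation at runtime. The names (`Registry`, `BaseRenderer`, `MyRenderer`, `samples_per_ray`) are illustrative assumptions for this sketch, not Implicitron's actual API.

```python
# Illustrative sketch of a plug-in registry; NOT Implicitron's real API.

class Registry:
    """Maps string names to user-supplied implementation classes."""

    def __init__(self):
        self._impls = {}

    def register(self, cls):
        # Used as a decorator: @registry.register
        self._impls[cls.__name__] = cls
        return cls

    def get(self, name, **config):
        # Instantiate a registered implementation from configuration values.
        return self._impls[name](**config)


registry = Registry()


class BaseRenderer:
    """Abstract interface that plug-ins implement."""

    def render(self, scene):
        raise NotImplementedError


@registry.register
class MyRenderer(BaseRenderer):
    """A user-specified implementation, selected by name via configuration."""

    def __init__(self, samples_per_ray=64):
        self.samples_per_ray = samples_per_ray

    def render(self, scene):
        return f"rendered {scene} with {self.samples_per_ray} samples"


# Configuration (e.g. loaded from a YAML file) chooses the implementation
# and its parameters by name, with no changes to framework code.
renderer = registry.get("MyRenderer", samples_per_ray=128)
print(renderer.render("chair"))  # rendered chair with 128 samples
```

The key design point this illustrates is inversion of control: the framework never imports user code directly, so swapping one renderer for another is purely a configuration change.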
Implicitron aspires to become a cornerstone for research in neural implicit representation and rendering, much as Meta’s Detectron2 has become the go-to framework for building and evaluating object detection methods across a range of datasets.
Meta aims to let users of the framework quickly install Implicitron and import its components into their own projects without recompiling or copying code.