Meta AI has released Habitat-Matterport 3D Semantics (HM3D-Sem) v0.1, the largest dataset of annotated real-world spaces. The underlying Habitat-Matterport 3D (HM3D) dataset comprises 1,000 high-resolution 3D scans of indoor settings such as homes and businesses. HM3D-Sem, an offshoot built on top of HM3D, selects a subset of those scenes and annotates them with object labels applied as distinguishing colors painted onto the scene textures.
The annotations use more than 1,700 natural-language object names mapped to the Matterport category set. Each annotated HM3D scene contains 646 raw object annotations spanning 114 categories, verified by a pool of roughly 30 annotators.
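For orientation, the sketch below shows one way to inspect such instance annotations programmatically with the habitat-sim Python API. It is a minimal sketch, not part of the announcement: the scene and scene-dataset-config paths are assumptions based on the standard HM3D download layout.

```python
# A minimal sketch, assuming habitat-sim is installed and HM3D-Sem has been
# downloaded under data/scene_datasets/hm3d (both paths below are assumptions).
import habitat_sim

backend_cfg = habitat_sim.SimulatorConfiguration()
backend_cfg.scene_id = (
    "data/scene_datasets/hm3d/minival/00800-TEEsavR23oF/TEEsavR23oF.basis.glb"  # assumed scene
)
backend_cfg.scene_dataset_config_file = (
    "data/scene_datasets/hm3d/hm3d_annotated_basis.scene_dataset_config.json"  # assumed config
)

agent_cfg = habitat_sim.agent.AgentConfiguration()
sim = habitat_sim.Simulator(habitat_sim.Configuration(backend_cfg, [agent_cfg]))

# Each annotated instance exposes an id and the category its raw label maps to.
for obj in sim.semantic_scene.objects[:10]:
    if obj is not None and obj.category is not None:
        print(obj.id, obj.category.name())

sim.close()
```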
The HM3D-Sem dataset was first announced last March; the newly released version appears to be an upgrade with improved annotations and a higher object count.
HM3D-Semantics is generally available for academic and non-commercial research. The dataset is aimed at helping researchers train AI assistants and robots with FAIR’s Habitat Simulator.
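As a rough sketch of how such a training setup begins, the snippet below creates a navigation environment over HM3D scenes with the habitat-lab API. The config filename is an assumption (it varies across habitat-lab releases), and the episode data must already be downloaded.

```python
# A minimal sketch, assuming habitat-lab is installed and the ObjectNav/HM3D
# episode data is in place; the config path varies by release and is assumed here.
import habitat

config = habitat.get_config("benchmark/nav/objectnav/objectnav_hm3d.yaml")  # assumed path
env = habitat.Env(config=config)

observations = env.reset()
# ObjectNav observations typically include egocentric RGB-D plus the goal category.
print(sorted(observations.keys()))
env.close()
```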
Building on the HM3D-Semantics v0.1 scenes, Meta AI is also announcing the Habitat 2022 ObjectNav Challenge, which focuses on object recognition and semantics. The challenge uses 120 HM3D-Semantics v0.1 scenes, split 80/20/20 for train/val/test, and six object goal categories: chair, sofa, potted plant, bed, toilet, and television. Submitted agents must navigate to an instance of the goal object using the simulated camera observations.
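To make the episodic task structure concrete, here is an illustrative rollout loop using a random policy in place of a trained agent. It is a sketch rather than the official challenge starter code, and it reuses the assumed habitat-lab config path from the snippet above.

```python
# Illustrative random-policy rollout; not the official challenge starter kit.
import habitat

config = habitat.get_config("benchmark/nav/objectnav/objectnav_hm3d.yaml")  # assumed path
env = habitat.Env(config=config)

observations = env.reset()
while not env.episode_over:
    # Sample from the task's discrete action space (move/turn/look/stop).
    observations = env.step(env.action_space.sample())

# Episode metrics such as success and SPL, as typically used for ranking.
print(env.get_metrics())
env.close()
```

A real submission would replace the random sampling with a learned policy that maps the camera observations and goal category to actions.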