Tuesday, November 29, 2022

MIT Researchers Use Sound to Model Physical Spaces with a New Machine-Learning Model

The ML model can simulate sound waves reaching a listener at any point in the room.

Researchers at MIT and the MIT-IBM Watson AI Lab are experimenting with spatial acoustic information: they use sound to model physical spaces with a new machine-learning model. The model can simulate how sound propagates within a room and how a listener perceives it at any location.

The researchers used a model similar to an implicit neural representation (a technique used to generate smooth 3D reconstructions of visual scenes) to capture how sound waves travel through space. The system models the spatial acoustics and then learns the underlying geometry of the space from them. With this information, the model can build visual renderings of the space, much as humans use sound to estimate their physical surroundings.
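To make the idea of an implicit neural representation concrete, here is a minimal sketch: a small network that maps continuous coordinates (here, an emitter and a listener position) to a scalar acoustic response, so it can be queried at any point rather than only at sampled grid locations. The architecture, layer sizes, and coordinates are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(in_dim, hidden, out_dim):
    """Random weights for a toy two-layer MLP."""
    return {
        "W1": rng.normal(0, 0.1, (in_dim, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0, 0.1, (hidden, out_dim)),
        "b2": np.zeros(out_dim),
    }

def mlp_forward(params, x):
    """Forward pass: continuous coordinates in, continuous response out."""
    h = np.tanh(x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

# The key property of an implicit representation: the field is defined
# everywhere in the space, so any (emitter, listener) pair can be queried.
params = init_mlp(in_dim=4, hidden=32, out_dim=1)
emitter = np.array([0.5, 0.2])    # 2D positions, an illustrative choice
listener = np.array([0.9, 0.7])
response = mlp_forward(params, np.concatenate([emitter, listener]))
print(response.shape)  # (1,)
```

In a real system the weights would be fitted to measured audio; here they are random, since the point is only the coordinates-in, signal-out interface.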

Until now, researchers have applied such implicit models mainly to vision, where they rely on photometric consistency: a point in a scene looks roughly the same from nearby viewpoints. Sound has no such property, because what a listener hears changes substantially as their location changes.


To overcome these limitations in acoustic modeling, the researchers built two properties of sound into the model: its reciprocal nature (swapping the source and the listener leaves the sound unchanged) and the influence of local geometric features. The resulting model uses Neural Acoustic Fields (NAFs), neural networks conditioned on a grid of spatial features that capture the architecture of the space.
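The reciprocity property can be illustrated with a short sketch. One simple way to make a learned field respect it is to feed the network an order-invariant encoding of the two positions, so swapping emitter and listener provably gives the same output. The encoding and the toy linear "field" below are illustrative assumptions, not the NAF paper's exact mechanism.

```python
import numpy as np

def symmetric_features(emitter, listener):
    """Features of the pair that are invariant to swapping the two roles."""
    e, l = np.asarray(emitter), np.asarray(listener)
    return np.concatenate([e + l, np.abs(e - l)])

def acoustic_field(emitter, listener, weights):
    """Toy linear 'field' evaluated on the symmetrized features."""
    return float(symmetric_features(emitter, listener) @ weights)

rng = np.random.default_rng(1)
w = rng.normal(size=4)

# Reciprocity by construction: emitter and listener can be exchanged.
a = acoustic_field([0.1, 0.2], [0.8, 0.5], w)
b = acoustic_field([0.8, 0.5], [0.1, 0.2], w)
print(np.isclose(a, b))  # True
```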

Researchers can provide the NAF with visual data about a scene and a few spectrograms that illustrate how audio might sound when the emitter and listener are situated at specific points around the room.
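Since the model is supervised with spectrograms of audio heard at particular positions, here is a sketch of how such a training target can be computed from a waveform via a short-time Fourier transform. The synthetic decaying noise burst stands in for real measured room audio, and the sample rate, window, and hop sizes are illustrative assumptions.

```python
import numpy as np

fs = 16_000                          # sample rate in Hz (an assumption)
t = np.arange(fs) / fs               # one second of audio
rng = np.random.default_rng(2)
# Decaying noise burst: a crude stand-in for a reverberant room response.
audio = rng.normal(size=fs) * np.exp(-5 * t)

# Short-time Fourier magnitudes: the frequency-by-time grid a model
# like this would be trained to reproduce at each listener position.
win, hop = 256, 128
n_frames = 1 + (len(audio) - win) // hop
frames = np.stack([audio[i * hop : i * hop + win] for i in range(n_frames)])
spec = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1)).T  # (freq, time)
print(spec.shape)  # (129, 124)
```

Each column is the spectrum of one short window, so the model only needs to predict a 2D magnitude grid rather than a raw waveform.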

The new technique of mapping physical spaces using sound will open opportunities for “immersive multimodal experiences in metaverse applications.” For more detailed information, refer to the research paper “Learning Neural Acoustic Fields.”


Disha Chopra
