After its big rebranding from Facebook, Meta hosted its inaugural AI conference to unveil its latest Metaverse projects. The virtual event, ‘Inside the Lab: Building for the Metaverse with AI’, was held on February 23 at 10:30 p.m. (IST) and featured speakers from across the AI spectrum, with CEO Mark Zuckerberg delivering the opening and closing addresses.
The Metaverse is now a headline topic, particularly for its potential to revolutionize how we work, live, and play. Tech pundits have already embraced the premise that the Metaverse will define a technological future, doing for the twenty-first century what the Internet did for the late twentieth.
Even Zuckerberg is betting that the Metaverse will be the successor to the mobile Internet. While highlighting the difficulties in creating the Metaverse, Zuckerberg stated that AI is the key to unlocking many of its breakthroughs. And because these worlds will be dynamic and constantly changing, AI will need to grasp context and learn the way humans do. Zuckerberg said the Metaverse would consist of immersive worlds that users can create and interact with, conveying position in 3D space, body language, facial gestures, and more. “So, you experience it and move through it as if you are really there,” he added.
No Language Left Behind
At present, numerous tools and apps help us communicate online in widely spoken languages like English, Spanish, and Mandarin. However, billions of people cannot access the Internet and its offerings in their native languages. While machine translation systems are catching up, the key problem is building such tools for languages with little or no textual training data.
According to Zuckerberg, Meta AI is invested in developing language and machine translation technologies that will cover the majority of the world’s languages. Now, two more projects have been added to the mix. The first is No Language Left Behind, an advanced AI model that can learn from languages with fewer training examples and use that to deliver expert-quality translations in hundreds of languages, from Asturian to Luganda to Urdu. This would be a major milestone for NLP-based AI tools aimed at eliminating the language barrier to accessing the Internet.
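No Language Left Behind’s models were not public at the time of the event, but Meta AI’s earlier open-source many-to-many translation model, M2M-100, gives a feel for the approach. Below is a minimal sketch using the Hugging Face Transformers library; the checkpoint and example sentence are illustrative, and this is not the No Language Left Behind model itself.

```python
# Translate Hindi to English directly (no pivoting through a third
# language) with Meta AI's open-source M2M-100 model.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "hi"  # source language: Hindi
encoded = tokenizer("जीवन एक चॉकलेट बॉक्स की तरह है।", return_tensors="pt")

# Force the decoder to start generating in the target language (English).
generated = model.generate(
    **encoded, forced_bos_token_id=tokenizer.get_lang_id("en")
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```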
Universal Speech Translation
He also stated that Meta’s research division is working on a universal speech translation system that will let users engage with AI more efficiently within the company’s digital realm. According to Zuckerberg, the main objective is to create a single universal model that incorporates knowledge across all modalities and from data collected by rich sensors. Zuckerberg said, “This will enable a vast scale of predictions, decisions, and generation, as well as whole new architectures, training methods, and algorithms that can learn from a vast and diverse range of different inputs.”
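The universal system itself is still research-stage, but Meta’s existing open-source speech-to-text translation models hint at the building blocks. A minimal sketch, assuming the facebook/s2t-small-mustc-en-fr-st checkpoint from Hugging Face Transformers (English speech to French text, with placeholder audio; this is not the universal model described above):

```python
# Direct speech-to-text translation with one of Meta's open-source S2T
# models: English audio in, French text out.
import torch
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor

model = Speech2TextForConditionalGeneration.from_pretrained(
    "facebook/s2t-small-mustc-en-fr-st"
)
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-fr-st")

# `waveform` stands in for a 16 kHz mono recording; real audio would be
# loaded from a file instead of this one second of placeholder noise.
waveform = torch.randn(16000) * 0.01
inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

generated_ids = model.generate(
    inputs["input_features"], attention_mask=inputs["attention_mask"]
)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```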
Project CAIRaoke
Meta is focusing its AI research on letting users have more natural interactions with voice assistants, revealed Zuckerberg, shedding light on how humans will connect with AI in the Metaverse. During the Meta AI: Inside the Lab event, Meta introduced Project CAIRaoke, a new approach to conversational AI for chatbots and assistants. The project uses neural models to help assistants better comprehend people speaking conversationally, enabling more fluid natural-language interaction between people and their devices.
Models created with Project CAIRaoke will let people communicate naturally with their conversational assistants: referring back to something from earlier in the conversation, changing topics entirely, or saying things that require complex, nuanced context. They will also be able to communicate in new ways, such as through gestures.
Traditional AI assistants rely on four distinct components: natural language understanding (NLU), dialog state tracking (DST), dialog policy (DP) management, and natural language generation (NLG). These disparate systems must then be integrated for seamless conversation, which makes them hard to optimize, slow to adapt to new or unfamiliar tasks, and dependent on time-consuming annotated datasets. Worse, a change in one component can break the others, forcing all downstream modules to be retrained and slowing development. A model built with Project CAIRaoke, by contrast, eliminates the reliance on upstream modules, speeding up development and training and allowing fine-tuning with less time and data, as the sketch below illustrates.
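To make the contrast concrete, here is a purely illustrative Python sketch of the two designs. Every function and string below is invented for illustration; none of it comes from Meta’s code.

```python
# Hypothetical sketch: a traditional four-stage assistant pipeline versus
# a CAIRaoke-style end-to-end model.

def nlu(utterance: str) -> str:
    # Natural language understanding: map raw text to a coarse intent.
    return "order_salt" if "salt" in utterance.lower() else "chitchat"

def dst(intent: str, history: list[str]) -> dict:
    # Dialog state tracking: fold the new intent into the running state.
    return {"intent": intent, "turn": len(history) + 1}

def dialog_policy(state: dict) -> str:
    # Dialog policy management: choose the next system action.
    return "confirm_order" if state["intent"] == "order_salt" else "smalltalk"

def nlg(action: str) -> str:
    # Natural language generation: turn the chosen action into a reply.
    return {"confirm_order": "Ordering more salt.", "smalltalk": "Sure!"}[action]

def pipeline_assistant(utterance: str, history: list[str]) -> str:
    # Traditional design: four separately trained modules chained together;
    # changing one stage can force retraining of everything downstream.
    return nlg(dialog_policy(dst(nlu(utterance), history)))

def end_to_end_assistant(utterance: str, history: list[str]) -> str:
    # CAIRaoke-style design: a single neural model maps the conversation
    # so far directly to the next reply, so there are no upstream modules
    # to break. A trained seq2seq model would be called here.
    conversation = " | ".join(history + [utterance])
    return f"<reply generated end-to-end from: {conversation}>"

print(pipeline_assistant("We're low on salt", []))
```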
A demonstration of Project CAIRaoke technology showed a family using it to help make a stew, with the voice assistant warning that salt had already been added to the pot. The assistant also observed that they were low on salt and placed an order for more.
Meta also wants to incorporate Project CAIRaoke into AR and VR devices, and it will be paired with Meta’s video-calling Portal device. Jérôme Pesenti, Meta’s VP of AI, said the company is currently limiting the answers of its new CAIRaoke-based assistant until it is confident the assistant will not produce offensive language.
BuilderBot
At the Inside the Lab event, Meta also revealed it is developing a virtual-world AI called BuilderBot, built on the same conversational technology, that lets people literally talk their own worlds into existence. With BuilderBot turned on, a user can walk into an empty 3D space, populated only with a horizon-spanning grid, and tell the bot what they want to appear in the world.
During the demonstration, Zuckerberg engaged with a low-resolution virtual environment and began populating it with voice commands. Using only verbal commands, Zuckerberg’s avatar transformed the entire 3D landscape into a park and eventually a beach, and BuilderBot generated a picnic table, a boom box, beverages, and other smaller items in response to spoken requests.
However, it’s unclear whether BuilderBot draws on a library of predefined items to fulfill these requests or whether the AI creates them from scratch. If the latter, it would open new avenues for generative AI models.
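The distinction matters, and a toy sketch makes the first hypothesis concrete. Everything below is invented for illustration; Meta has not described BuilderBot’s internals.

```python
# Toy illustration of the "predefined asset library" hypothesis: spoken
# commands are matched against a catalog of ready-made 3D assets.
ASSET_LIBRARY = {
    "picnic table": "assets/picnic_table.glb",
    "boom box": "assets/boom_box.glb",
    "beach": "environments/beach.glb",
}

def handle_command(command: str, scene: list[str]) -> None:
    # Place the first catalog asset mentioned in the spoken command.
    for name, asset_path in ASSET_LIBRARY.items():
        if name in command.lower():
            scene.append(asset_path)
            print(f"Placed '{name}' from {asset_path}")
            return
    # Under the alternative hypothesis, a generative model would
    # synthesize a brand-new asset here instead of giving up.
    print("No matching asset in the library.")

scene: list[str] = []
handle_command("Add a picnic table over there", scene)
```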
TorchRec
Next, to showcase Meta’s commitment to AI transparency and open science, Meta unveiled TorchRec at the Inside the Lab event: a library for creating SOTA recommendation systems on the open-source PyTorch machine learning framework. It ships as a PyTorch domain library and includes primitives for common sparsity and parallelism patterns, allowing researchers to create the same kind of cutting-edge personalization that Facebook News Feed and Instagram Reels employ.
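A minimal sketch of TorchRec’s core primitive, a pooled embedding collection over a sparse categorical feature, is shown below; the table name and sizes are invented for illustration.

```python
# Build a pooled embedding collection for one sparse feature and feed it
# a small jagged (variable-length) batch.
import torch
import torchrec
from torchrec.sparse.jagged_tensor import KeyedJaggedTensor

ebc = torchrec.EmbeddingBagCollection(
    device="cpu",
    tables=[
        torchrec.EmbeddingBagConfig(
            name="product_table",
            embedding_dim=64,
            num_embeddings=4096,
            feature_names=["product"],
            pooling=torchrec.PoolingType.SUM,
        )
    ],
)

# Two variable-length examples of the sparse "product" feature:
# example 0 holds IDs [101, 202]; example 1 holds ID [303].
features = KeyedJaggedTensor(
    keys=["product"],
    values=torch.tensor([101, 202, 303]),
    lengths=torch.tensor([2, 1]),
)

pooled = ebc(features)                    # KeyedTensor of pooled embeddings
print(pooled.to_dict()["product"].shape)  # torch.Size([2, 64])
```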
AI Learning Alliance
Meta also stated that it would extend free education initiatives aimed at attracting more racial minorities to the technology field, which experts believe is necessary to develop AI systems free of bias. Through its online learning platform Blueprint, Meta’s AI Learning Alliance aims to make a machine learning curriculum available to everybody. The company is also building a consortium of instructors who will teach the curriculum at colleges with large populations of students from underrepresented groups.