Meta Platforms says it will give researchers access to I-JEPA, a new “human-like” artificial intelligence model that can analyze and complete unfinished images more accurately than current models.
Rather than relying solely on nearby pixels, as previous generative AI models do, I-JEPA fills in missing portions of an image using background knowledge about the world, the company said.
According to Meta, this approach embodies the kind of human-like reasoning that prominent AI researcher Yann LeCun has long advocated, and it helps the technology avoid errors common in AI-generated images, such as hands with extra fingers.
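The core idea of a joint-embedding predictive architecture can be sketched in a few lines: from a visible context block, the model predicts the *embeddings* of masked target regions rather than their raw pixels, so the loss lives in representation space. The toy encoders below are random linear maps standing in for the actual Vision Transformers; this is a minimal illustration of the training objective under those assumptions, not Meta's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoders": random linear maps standing in for Vision Transformers.
D_PATCH, D_EMB = 16, 8
W_context = rng.normal(size=(D_PATCH, D_EMB))  # context encoder weights
W_target = rng.normal(size=(D_PATCH, D_EMB))   # target encoder weights
W_predict = rng.normal(size=(D_EMB, D_EMB))    # predictor weights

def encode(patches, W):
    """Map raw patches to abstract embeddings."""
    return patches @ W

image_patches = rng.normal(size=(9, D_PATCH))  # a 3x3 grid of image patches
context_idx = [0, 1, 3, 4]  # visible context block
target_idx = [5, 7, 8]      # masked target block

# The context encoder sees only the visible patches.
ctx_emb = encode(image_patches[context_idx], W_context)

# The predictor guesses the embeddings of the masked patches
# from the pooled context embedding -- not their pixel values.
pred = ctx_emb.mean(axis=0) @ W_predict
pred_targets = np.tile(pred, (len(target_idx), 1))

# The target encoder sees the full image; the loss compares
# predicted and actual embeddings in representation space.
tgt_emb = encode(image_patches[target_idx], W_target)
loss = float(np.mean((pred_targets - tgt_emb) ** 2))
print(loss)
```

Because the objective compares abstract representations rather than pixels, the model is free to ignore unpredictable low-level detail, which is one reason this style of training can be cheaper than pixel-level generative pretraining.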
I-JEPA supports a wide range of applications without extensive fine-tuning. Meta AI trained a 632M-parameter Vision Transformer model in under 72 hours using 16 A100 GPUs.
With just 12 labeled instances per class, this model achieves state-of-the-art performance for low-shot classification on ImageNet. Trained on the same amount of data, alternative methods often require two to ten times as many GPU-hours and reach higher error rates.
The world model is an architecture that Yann LeCun, Meta's chief AI scientist, proposed last year to address key limitations of today's AI systems. LeCun envisions machines that can rapidly build internal models of how the world works, enabling them to learn quickly, plan complex tasks, and adapt to changing conditions.