Deep neural networks (DNNs) are generally trained under the closed-world assumption: that test data is drawn from the same distribution as the training data. In real-world deployments this assumption often fails, causing a significant drop in performance. Although these models can sometimes match or even outperform humans, recognition accuracy still degrades when contextual conditions such as lighting and viewing angle differ dramatically from those in the training datasets.
While such performance loss may be tolerable in applications like recommendation systems, it can have fatal consequences in domains such as healthcare. To be deployed safely, deep learning systems must be able to detect data that is anomalous or substantially different from what they were trained on. Ideally, an AI system should recognize out-of-distribution (OOD) data, inputs that deviate from the original training distribution, without human assistance.
This challenge inspired Fujitsu Limited and the Center for Brains, Minds and Machines (CBMM) to collaborate on understanding the AI principles that enable recognition of OOD data with high accuracy, drawing inspiration from human cognition and the structure of the brain. CBMM is a multi-institutional NSF Science and Technology Center headquartered at the Massachusetts Institute of Technology (MIT). It is dedicated to the study of intelligence: how the brain produces intelligent behavior, and how that intelligence might be reproduced in machines.
At NeurIPS 2021 (Conference on Neural Information Processing Systems), the team will present highlights of their research paper demonstrating advances in AI model accuracy. According to the paper, they developed an AI model that improves accuracy by dividing deep neural networks into modules; it ranked as the most accurate in an image recognition evaluation on the CLEVR-CoGenT benchmark.
In real-world tasks, the data distribution typically drifts over time, and continuously tracking an evolving distribution is expensive. OOD detection is therefore critical to prevent AI systems from making predictions that are badly wrong.
“There is a significant gap between DNNs and humans when evaluated in out-of-distribution conditions, which severely compromises AI applications, especially in terms of their safety and fairness,” said Dr. Tomaso Poggio, the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences at MIT and Director of the CBMM. Dr. Poggio added that this neuroscience-inspired research may lead to novel technologies capable of overcoming dataset bias. “The results obtained so far in this research program are a good step in this direction.”
The study’s outcomes show that the human brain can accurately encode and classify visual information even when the shapes and colors of the objects we encounter change. Building on this, the new method defines a novel index based on how individual neurons respond to an object and how the deep neural network classifies the input images; training the model to increase this index improves its ability to recognize OOD objects.
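The article does not spell out how the team’s index is computed, so the sketch below uses a well-known stand-in from the OOD literature: a confidence index derived from the network’s output distribution (the maximum softmax probability), with low values flagging possible OOD inputs. All function names and the threshold are illustrative assumptions, not the paper’s actual method.

```python
import math

def softmax(logits):
    """Convert raw class scores into probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def confidence_index(logits):
    """A simple OOD index: the maximum softmax probability.
    Confident, peaked outputs score near 1; flat, uncertain outputs score low."""
    return max(softmax(logits))

def is_ood(logits, threshold=0.5):
    """Flag an input as out-of-distribution when the index falls below a
    threshold (the threshold here is an arbitrary illustrative choice)."""
    return confidence_index(logits) < threshold

# A confident prediction vs. an uncertain one on a 3-class problem
in_dist_logits = [8.0, 1.0, 0.5]   # one class clearly dominates
ood_logits = [1.1, 1.0, 0.9]       # nearly uniform: the model is unsure
```

In this framing, “growing the index” during training corresponds to encouraging the network to produce confident, well-separated responses on in-distribution data, so that OOD inputs stand out by their low scores.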
It was previously thought that training a deep neural network as a single monolithic module, without dividing it up, was the best way to build an AI model with high recognition accuracy. Researchers at Fujitsu and CBMM instead achieved higher recognition accuracy by dividing the deep neural network into separate modules, guided by the newly created index, that handle the shapes, colors, and other attributes of objects.
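The article gives no architectural details of the split, but the idea of per-attribute modules can be sketched with a toy example: separate modules specialize in shape and color, and their outputs are combined into one prediction rather than asking a single monolithic classifier to handle every attribute at once. Everything below (the feature keys, the module rules) is hypothetical illustration, not the paper’s implementation.

```python
def shape_module(features):
    """Hypothetical module specializing in shape (toy rule, not a real DNN)."""
    return "cube" if features["n_corners"] == 8 else "sphere"

def color_module(features):
    """Hypothetical module specializing in color: pick the dominant channel."""
    r, g, b = features["rgb"]
    if r >= g and r >= b:
        return "red"
    return "green" if g >= b else "blue"

def modular_classifier(features):
    """Combine independent per-attribute modules into one prediction.
    Because each module only sees its own attribute, a novel shape/color
    combination unseen in training can still be classified correctly."""
    return {"shape": shape_module(features), "color": color_module(features)}

# A red cube, even if training never paired "red" with "cube"
obj = {"n_corners": 8, "rgb": (200, 30, 30)}
```

The design point mirrors the CLEVR-CoGenT setting: when attributes are handled by separate modules, unfamiliar combinations of familiar attributes no longer confuse a single entangled representation.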
Fujitsu and CBMM intend to build on these findings to create an AI capable of flexible, human-like decision-making, with the goal of applying it in fields such as manufacturing and medical care.