Neuroscientists recently showed that the brain's storage scheme can hold more information than artificial neural networks. The paper, by neuroscientists from SISSA in collaboration with the Kavli Institute for Systems Neuroscience and Norway's Centre for Neural Computation, was featured in the prestigious Physical Review Letters.
The basic units of artificial neural networks are neurons that learn patterns by fine-tuning the connections among them. The stronger the relevant connections, the lower the chance of overlooking a pattern. Neural networks use the backpropagation algorithm to tune and optimize these connections iteratively during the training phase. In the end, the neurons recognize patterns through the mapping function they have approximated, i.e., the network's memory.
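To make the idea concrete, here is a minimal sketch of how backpropagation iteratively tunes connections: a tiny two-layer network learns the XOR mapping. The layer sizes, learning rate, and number of steps are illustrative choices, not anything taken from the paper.

```python
# Illustrative backpropagation sketch: a small two-layer network learns XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(scale=1.0, size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(scale=1.0, size=(8, 1)), np.zeros(1)   # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: compute the mapping the current connections implement.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back to adjust every connection.
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(out.round(2).ravel())   # approaches [0, 1, 1, 0] as the connections are tuned
```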
This procedure works well in a static setting where no new data is being ingested. In a continual setting, where models must learn new patterns across diverse tasks over extended periods, as humans do, neural networks suffer from catastrophic forgetting. So there must be something else that makes the brain so much more powerful and efficient.
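The effect is easy to reproduce in a toy setting: train the same small network on one task, then on another, and the first task is largely erased. The two target functions and the network size below are illustrative assumptions.

```python
# Illustrative catastrophic-forgetting sketch: sequential training on two toy tasks.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200).reshape(-1, 1)
task_a = np.sin(3 * x)        # task A: approximate sin(3x)
task_b = np.cos(3 * x)        # task B: approximate cos(3x)

W1, b1 = rng.normal(scale=1.0, size=(1, 32)), np.zeros(32)
W2, b2 = rng.normal(scale=0.3, size=(32, 1)), np.zeros(1)

def forward(inp):
    h = np.tanh(inp @ W1 + b1)
    return h, h @ W2 + b2

def train(target, steps=5000, lr=0.2):
    global W1, b1, W2, b2
    for _ in range(steps):
        h, out = forward(x)
        err = (out - target) / len(x)          # mean-squared-error gradient
        W2 -= lr * h.T @ err
        b2 -= lr * err.sum(axis=0)
        d_hid = (err @ W2.T) * (1 - h ** 2)
        W1 -= lr * x.T @ d_hid
        b1 -= lr * d_hid.sum(axis=0)

def mse(target):
    return float(np.mean((forward(x)[1] - target) ** 2))

train(task_a)
print("task A error after learning A:", mse(task_a))
train(task_b)                                  # keep training on the new task only
print("task A error after learning B:", mse(task_a))   # the old task is forgotten
```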
The answer lies in the brain's more straightforward approach: how a connection changes is decided locally, by the activity of the neurons it links. Scientists had thought that this simpler process would permit fewer memories, based on the fundamental assumption that neurons are binary units. The researchers showed, however, that the lower capacity is the result of that unrealistic assumption. They combined the brain's storage scheme for changing connections with biologically plausible models of the single-neuron response and found that the hybrid performs on par with, and beyond, AI algorithms.
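The sketch below illustrates the flavor of such a scheme: each connection is set in one shot, purely from the activity of the two neurons it links (a Hebbian covariance rule), with no iterative error propagation, while a threshold-linear response stands in for a graded, biologically plausible neuron. The sizes, sparseness, and the exact rule are illustrative assumptions, not the paper's precise model.

```python
# Illustrative local storage rule for an associative memory with graded neurons.
import numpy as np

rng = np.random.default_rng(2)
N, P, a = 200, 10, 0.2                         # neurons, memories, pattern sparseness
patterns = (rng.random((P, N)) < a).astype(float)

# Local rule: weight J[i, j] depends only on the activities of neurons i and j.
J = (patterns - a).T @ (patterns - a) / (N * a * (1 - a))
np.fill_diagonal(J, 0.0)                       # no self-connections

def threshold_linear(h, theta=0.0, gain=1.0):
    """Graded response: silent below threshold, linear above it."""
    return gain * np.maximum(h - theta, 0.0)

# A single update from a degraded cue already points back toward the stored memory.
cue = patterns[0] * (rng.random(N) < 0.7)      # 30% of the active neurons are missing
rates = threshold_linear(J @ cue, theta=0.1)
print("correlation with the stored memory:", np.corrcoef(rates, patterns[0])[0, 1])
```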
The researchers pinpointed the role that retrieval errors play in this performance boost. Usually, when a memory is retrieved correctly, it is expected to be identical to the originally memorized input, or strongly correlated with it. The brain's storage scheme, however, retrieves memories that are not identical to the initial input.
Neurons that are barely active during memory retrieval, and that do not help distinguish among the different memories stored in the same network, are silenced. The freed neural resources are concentrated on the neurons that matter for the input being memorized, leading to a higher memory capacity. These findings are expected to influence continual learning and multitask learning, helping produce more robust neural models that can cope with catastrophic forgetting.
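A toy look at this retrieval behavior, reusing the same illustrative covariance rule as in the sketch above: starting from a partial cue, repeated threshold-linear updates drive weakly activated neurons to exactly zero and should settle on a graded pattern that is correlated with, but not identical to, the stored memory. The adaptive threshold below is a simplification standing in for whatever regulates overall activity, and all numbers are again illustrative.

```python
# Illustrative retrieval dynamics: weakly driven neurons are silenced.
import numpy as np

rng = np.random.default_rng(3)
N, P, a = 200, 10, 0.2
patterns = (rng.random((P, N)) < a).astype(float)

J = (patterns - a).T @ (patterns - a) / (N * a * (1 - a))   # same local rule
np.fill_diagonal(J, 0.0)

rates = patterns[0] * (rng.random(N) < 0.6)    # cue: 40% of the memory is missing
for _ in range(15):
    h = J @ rates
    theta = np.quantile(h, 1 - a)              # keep roughly a fraction `a` active
    rates = np.maximum(h - theta, 0.0)         # below-threshold neurons are silenced

print("fraction of neurons silenced:", float(np.mean(rates == 0)))
print("correlation with the stored memory:",
      float(np.corrcoef(rates, patterns[0])[0, 1]))
```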