The three pioneering scholars of artificial intelligence and deep learning, Yoshua Bengio, Yann LeCun, and Geoffrey Hinton, announced that their paper, Deep Learning for AI, would be officially published in July 2021. The paper covers neural networks, artificial intelligence, and deep learning.
In 2018, the three researchers dug deep into how simple neural networks can learn the rich internal representations required to perform complex tasks such as recognizing objects or understanding language. They have explored the field extensively and put forth their thoughts and learnings in the paper, Deep Learning for AI.
Deep learning systems currently perform well on System 1 tasks, which include object recognition and language understanding. They do less well on System 2 tasks, such as learning with little or no external supervision, performing tasks that humans and animals accomplish through a deliberate sequence of steps, and coping with test examples drawn from a different distribution than the training examples. The paper describes a few ways that could make deep learning systems perform well on System 2 tasks. It also briefly covers the origins of and recent advancements in deep learning and AI.
The paper has three major purposes: pointing to the direction of actual progress in AI by getting machines to learn like humans and animals, getting machines to reason, and getting machines to perceive more robustly and act as precisely as humans and animals do.
The paper also engages with the belief, held by many, that there are problems neural networks cannot solve, which leads them to fall back on the classical symbolic approach to AI. This work suggests otherwise: those goals can be achieved by making neural networks more structured through extensions of the networks themselves.
“Deep learning has a great future; it is only going to get bigger and better, but there is still a long way to go in terms of understanding how to make neural networks effective, and we expect to have many more ideas,” said Geoffrey Hinton in a video describing the paper.