Scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) have developed "Insight," a robust, soft haptic sensor that uses computer vision and a deep neural network to estimate where and how it is being touched. The sensor also determines the magnitude of the applied forces.
MPI-IS regards the work as a significant step toward robots that can feel their surroundings as precisely as humans and animals. Like its natural counterpart, the haptic sensor is designed to be both highly sensitive and durable.
The sensor identifies touch by registering the contact and friction of external objects against its surface. It consists of a soft shell wrapped around a lightweight, stiff skeleton, giving it the shape of a thumb. The stiff skeleton supports the sensor's structure in much the same way that bones support the soft tissue of a finger.
To combine sensitivity, robustness, and soft contact, the sensor's shell is a single sheet of elastomer over-molded onto the stiff frame. The elastomer is mixed with dark yet reflective metal flakes, producing an opaque grayish finish that keeps outside light out. A small 160-degree fisheye camera inside this finger-sized cap records images of the shell's interior, which is lit by a ring of LEDs.
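To make the capture step concrete, here is a minimal Python sketch of reading frames from such an internal camera with OpenCV. The device index, resolution, and frame rate are illustrative assumptions, not values taken from the Insight paper.

```python
# Hypothetical sketch: grab frames from the sensor's internal camera with OpenCV.
import cv2

cap = cv2.VideoCapture(0)                  # device index is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)     # resolution values are illustrative
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 60)              # "high rate" capture; exact rate assumed

while cap.isOpened():
    ok, frame = cap.read()                 # BGR image of the LED-lit shell interior
    if not ok:
        break
    # Each frame would be handed to the trained network (see the sketch below).

cap.release()
```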
The sensor also has a nail-shaped zone that is thinner than the rest of the shell, making it especially sensitive to external contact. For this zone the scientists molded the elastomer to a thickness of 1.2 mm, compared with the 4 mm used over the remainder of the finger. This "haptic fovea" is designed to sense even the smallest forces and to resolve intricate object shapes.
The sensor works by sensing and localizing external contact. When an object touches the shell, the pattern of light inside the sensor changes. The internal camera continuously captures images at a high rate and feeds them to a deep neural network, which detects even the tiniest change of light in each pixel. The trained model can map out precisely where the shell is being touched and calculate the magnitude of the applied force. It also generates a force map: a force vector for every point on the three-dimensional fingertip surface.
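The article does not publish the network's architecture, so the following PyTorch sketch only illustrates the idea: a small convolutional encoder takes one camera frame and regresses a three-component force vector for each point on a discretized fingertip surface. The layer sizes and the number of surface points are assumptions made for illustration, not the actual Insight model.

```python
import torch
import torch.nn as nn

class ForceMapNet(nn.Module):
    """Sketch: map one internal-camera frame to a per-point 3D force map.

    N_POINTS is a hypothetical discretization of the fingertip surface;
    the real Insight architecture and training data are not reproduced here.
    """
    N_POINTS = 1024  # assumed number of surface sample points

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # convolutional feature extractor
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool to a single feature vector
        )
        self.head = nn.Linear(64, self.N_POINTS * 3)  # one (Fx, Fy, Fz) per point

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (batch, 3, H, W) image of the LED-lit shell interior
        feat = self.encoder(frame).flatten(1)
        return self.head(feat).view(-1, self.N_POINTS, 3)  # force vectors

# Usage sketch: one 640x480 frame in, a (1, 1024, 3) force map out.
net = ForceMapNet()
force_map = net(torch.rand(1, 3, 480, 640))
print(force_map.shape)  # torch.Size([1, 1024, 3])
```

In practice, a model like this would be trained on pairs of camera images and ground-truth force measurements; the sketch above shows only the input/output structure of the image-to-force-map mapping the article describes.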