Reaching for a nearby object may seem simple, but it involves a complex brain network that took humans millions of years of evolution to develop. Robots are now acquiring the same ability to handle delicate or irregular objects, using integrated sensors that rely on cutting-edge machine learning algorithms. This is a difficult process, however: before making a single motion, the robot must estimate a number of factors, including the friction between its fingers and the object, the object's location and placement, and the trajectories of both the object and its two fingers.
This ‘dexterity’, which comes naturally to human beings, is a complex task for robots. In practice, a robot can rely on resources extrinsic to its arm, such as external contacts or the dynamic motion of the arm itself, an approach known as extrinsic dexterity.
Previous research on extrinsic dexterity relied on careful assumptions about contacts, which imposed limitations on the robot's design, its movements, and how much the physical parameters could vary. Researchers at Carnegie Mellon University's Robotics Institute recently created a reinforcement learning (RL)-based system to overcome these restrictions. In their paper, the team describes the task of "Occluded Grasping": the desired grasps are initially occluded, so the robot must maneuver the object into a configuration from which those grasps can be achieved. In other words, they used RL to get past these constraints and effectively grasp objects of a wide range of sizes, weights, shapes, and surfaces.
The researchers trained the neural network using reinforcement learning: the system was instructed to try random actions to grasp an object, and favorable action sequences were rewarded. Over time, the system adopted the behaviors that were most successful, i.e., those that yielded the highest rewards, which is precisely the goal of reinforcement learning.
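The reward-driven loop described above can be sketched with a toy example. The actual system trains a neural network policy in a physics simulator; as a hypothetical stand-in, the snippet below uses tabular Q-learning on a tiny one-dimensional task, where the agent tries random actions and the action sequences that reach the goal state (a stand-in for a successful grasp) accumulate higher values. All names and parameters here are illustrative and do not come from the paper.

```python
import random

N_STATES = 5          # positions 0..4; the "grasp" succeeds at state 4
ACTIONS = [-1, +1]    # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

def train(episodes=500, seed=0):
    """Learn action values by trial and error with sparse success rewards."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Explore with a random action sometimes, otherwise exploit
            # the values learned so far.
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s_next = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s_next == N_STATES - 1 else 0.0  # reward only on success
            # Q-learning update: actions on favorable sequences gain value.
            best_next = max(q[(s_next, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s_next
    return q

def greedy_rollout(q):
    """Follow the highest-valued action from the start until the goal."""
    s, path = 0, [0]
    while s != N_STATES - 1 and len(path) < 20:
        a = max(ACTIONS, key=lambda act: q[(s, act)])
        s = min(max(s + a, 0), N_STATES - 1)
        path.append(s)
    return path

q = train()
print(greedy_rollout(q))  # after training, the greedy policy heads straight for the goal
```

The same principle scales up in the real system: instead of a lookup table over five states, a neural network maps high-dimensional observations to actions, and the physics simulator supplies the reward signal.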
The researchers first trained the system in a physics simulator and then tested it on a simple robot with a pincer-like gripper. They had the robot attempt to grasp and lift objects placed in an open bin, initially positioned so that the gripper could not pick them up directly.
Wenxuan Zhou, the study’s principal author, said the team initially expected the robot to attempt something similar to how humans generally pick up such objects, such as scooping underneath them. Instead, the robot, guided by the reinforcement learning system, chose an unexpected course of action: it pushed the object up against the wall with its top finger, levered it up with its bottom finger, and then grabbed it.
Zhou and her colleagues tested the grasping system on a variety of objects, including cardboard boxes, plastic bottles, a toy purse, and a Cool Whip container, which vary in weight, shape, and degree of slipperiness. They found that the simple gripper grasped these objects with a 78 percent success rate.
Zhou believes that simple grippers are underrated in their effectiveness at grasping objects. The team hopes this study will open up new possibilities for manipulation with a simple gripper. In the future, such gripper-based RL robots might find applications in warehousing or housekeeping, helping people with organization.
The researchers presented their findings on December 18 at the Conference on Robot Learning in Auckland, New Zealand.