Saturday, November 27, 2021

All About Waymo Driver: Google Autonomous Driving Technology

In the past decade, autonomous driving has progressed from ‘maybe possible’ to ‘now commercially available.’ Waymo, the company that emerged from Google’s self-driving car project, officially launched its commercial self-driving service in the suburbs of Phoenix in December 2018. At first, the program was open only to a few hundred vetted riders, with human safety operators always behind the wheel. Since then, Waymo has gradually opened the program to members of the public and has begun running robotaxis with no driver inside.

Google began its self-driving car project in 2009; in 2016, the project graduated to become Waymo, an independent autonomous driving technology company under Alphabet. Waymo builds fully autonomous driving technology and bills itself as ‘the World’s Most Experienced Driver.’

In this article, we look at how Waymo leverages artificial intelligence to build its industry-leading self-driving technology.

Waymo’s Tech

The Waymo self-driving system has two essential parts: a highly sophisticated custom suite of sensors developed explicitly for fully autonomous operations and state-of-the-art software to make sense of the information.

Lidar is the Waymo Driver’s most powerful sensor. It paints a 3D picture of the surroundings, allowing the system to measure the size of and distance to objects around the vehicle, whether they are 300 meters away or up close. Lidar sensors let Waymo’s technology see the world in incredible detail and identify objects on the brightest days and on moonless nights.
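At its core, Lidar ranging is a time-of-flight measurement: the sensor times a light pulse’s round trip to an object and back, and the distance follows from the speed of light. A minimal sketch (the function name is illustrative, not Waymo’s API):

```python
# Speed of light in meters per second.
SPEED_OF_LIGHT = 299_792_458.0

def lidar_range(round_trip_seconds: float) -> float:
    """Distance to a target from a lidar pulse's round-trip time.

    The pulse travels to the object and back, so the one-way
    distance is half the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after 2 microseconds hit something roughly 300 m away,
# the upper end of the range quoted above.
distance_m = lidar_range(2e-6)
```

This also shows why Lidar works equally well day or night: the sensor supplies its own light, so the measurement does not depend on ambient illumination.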

Waymo has adopted Google’s data centers, TPUs, and the TensorFlow ecosystem for training its neural networks. Rigorous training cycles and simulation testing allow the company to keep improving its machine learning models and autonomous system.

The platform also leverages AI to simulate the sensor data its self-driving vehicles gather. In a recent paper, Waymo researchers introduced SurfelGAN, a technique that uses texture-mapped surface elements (surfels) to reconstruct driving scenes and then synthesizes realistic camera images for new vehicle positions and orientations.

Advanced Sensors

Waymo claims to have built the most advanced sensor system on the road, trained with over 20 million autonomously driven miles and refined over five generations of development. Its fifth-generation Driver combines radar, Lidar, and cameras to see 360 degrees around the vehicle.

In Waymo’s self-driving vehicles, a family of Lidar sensors uses light waves to paint rich 3D pictures, known as point clouds, allowing the Waymo Driver to see the world in incredible detail. Lidar point clouds capture the distance and size of objects, allowing the software to spot a pedestrian walking on a moonless night an entire city block away.
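A point cloud is simply a set of 3D returns; from the cluster of points belonging to one object, software can estimate both its range and its extent, which is how size and distance fall out of the same data. A toy sketch with made-up points (not Waymo’s actual perception code):

```python
import math

def describe_cluster(points):
    """Estimate range and size of an object from its lidar points.

    points: list of (x, y, z) tuples in meters, with the sensor at
    the origin. Returns the distance to the nearest point and the
    axis-aligned bounding-box dimensions (width, depth, height).
    """
    nearest = min(math.dist((0.0, 0.0, 0.0), p) for p in points)
    xs, ys, zs = zip(*points)
    size = (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
    return nearest, size

# A few returns off a pedestrian-sized object ~20 m ahead (illustrative data).
cloud = [(20.0, 0.1, 0.0), (20.2, -0.2, 1.7), (20.1, 0.2, 0.9)]
distance, (width, depth, height) = describe_cluster(cloud)
```

Here the tall, narrow bounding box (roughly 1.7 m high and well under a meter wide) is the kind of geometric cue that distinguishes a pedestrian from, say, a parked car.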

Second, Waymo vehicles are equipped with a range of cameras that give the Waymo Driver different perspectives on the road. These cameras capture long-range objects and complement the rest of the system with additional sources of information, giving the Waymo Driver a deeper understanding of its environment.

Radar System

Waymo uses one of the world’s first imaging radar systems for fully autonomous vehicles, complementing its cameras and Lidars. The radar can instantly perceive a pedestrian’s speed and trajectory even in challenging weather such as fog, snow, and rain, giving the Waymo Driver unprecedented resolution, range, and field of view for safe driving.

Waymo’s sensors produce many types of data, including fine-grained Lidar point clouds, video footage, and radar imagery over different ranges and fields of view. This diversity enables sensor fusion techniques that improve the detection and characterization of objects.

Sensor fusion technology allows Waymo to amplify the strengths of each sensor. Lidar excels at providing depth information and detecting the 3D shape of objects; cameras pick out visual features such as a temporary road sign or the color of a traffic signal; and radar is highly effective in bad weather and can track moving objects, such as an animal running out of a bush and onto the road.
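One simple form of this idea is late fusion: each sensor produces its own detections with confidence scores, and detections that land close together are merged into a single object whose combined confidence exceeds any single sensor’s. This is an illustrative sketch of the general technique, not Waymo’s actual pipeline:

```python
def fuse_detections(detections, distance_threshold=1.0):
    """Late-fusion sketch: merge per-sensor detections of one object.

    detections: list of dicts like {"sensor": "lidar", "pos": (x, y),
    "confidence": 0.7}. Detections within distance_threshold meters
    of an existing fused object are merged into it; confidences
    combine as 1 - prod(1 - c), treating sensors as independent.
    """
    fused = []
    for det in detections:
        for obj in fused:
            dx = det["pos"][0] - obj["pos"][0]
            dy = det["pos"][1] - obj["pos"][1]
            if (dx * dx + dy * dy) ** 0.5 <= distance_threshold:
                obj["sensors"].append(det["sensor"])
                obj["confidence"] = 1 - (1 - obj["confidence"]) * (1 - det["confidence"])
                break
        else:
            fused.append({"pos": det["pos"],
                          "sensors": [det["sensor"]],
                          "confidence": det["confidence"]})
    return fused

# Lidar and camera both see an object near (10, 2); radar sees a second one.
dets = [
    {"sensor": "lidar", "pos": (10.0, 2.0), "confidence": 0.7},
    {"sensor": "camera", "pos": (10.3, 2.1), "confidence": 0.6},
    {"sensor": "radar", "pos": (40.0, -5.0), "confidence": 0.5},
]
objects = fuse_detections(dets)
```

The first object ends up with confidence 0.88, higher than either the Lidar (0.7) or camera (0.6) detection alone, which is the payoff of combining sensors rather than relying on any one of them.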


Osheen Jain
Osheen is an experienced content writer who is interested in the intersection of robotics and Embodied Cognition. She has a PGD in Cognitive Science, a Master's in Philosophy, and enjoys reading immensely.
