
Tesla AI Day 2021 Announcements: What to Expect from the AV Leader

From the D1-powered Dojo supercomputer to the Tesla Bot, what were the most awe-inspiring announcements from this year's Tesla event?

After successfully hosting Autonomy Day in 2019 and Battery Day in 2020, Tesla hosted AI Day this year on August 19. Tesla AI Day was primarily a virtual event, broadcast live on YouTube for the general public, with in-person invitations issued to a small group of attendees. The event began with 45 minutes of industrial music from The Matrix soundtrack.

1. Expanding Processing Capacity with D1 and Dojo

Ganesh Venkataramanan, Tesla's senior director of Autopilot hardware and lead of Project Dojo, introduced D1, the company's computer chip designed and produced in-house to power Dojo, Tesla's supercomputer.

Tesla has been promising an in-house supercomputer specialized for neural-network video training for years, because the carmaker was unhappy with the hardware then available for training its computer-vision neural nets and felt it could do better internally.

The D1 chip is manufactured by TSMC on a 7nm semiconductor node. It has a die size of 645 square millimeters and contains over 50 billion transistors. Tesla arranged a network of functional units (FUs) that are linked together to form a single enormous chip. Each FU contains a 64-bit CPU running a custom ISA, optimized for transposes, gathers, broadcasts, and link traversals. The CPU is a superscalar implementation with four-wide scalar and two-wide vector pipelines.

Each FU has its own 1.25MB scratchpad SRAM. An FU provides 512 GB/s of bandwidth in each direction of the mesh and can perform 1 TeraFLOP of BF16 (bfloat16, or brain floating point) or CFP8 ("configurable FP8," a new format) compute, plus 64 GigaFLOPs of FP32 compute. The mesh is designed so that a signal can traverse an FU in a single clock cycle, reducing latency and improving performance.

This AI chip can output as much as 362 TeraFLOPs at BF16/CFP8 precision, or about 22.6 TeraFLOPs for single-precision FP32 tasks. Each of D1's four lateral edges carries connections with 4 TB/s of off-chip bandwidth, allowing it to connect to and scale with other D1 chips without compromising performance.
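
As a sanity check, the per-FU numbers roughly multiply out to the chip-level figures quoted above. Here is a minimal sketch in Python, assuming a count of 354 functional units per D1 (a figure widely reported from Tesla's AI Day slides, not stated in this article):

```python
# Back-of-the-envelope check that the chip-level numbers follow from
# the per-FU numbers. The 354-FU count is an assumption taken from
# reports of Tesla's AI Day slides.

FUS_PER_D1 = 354           # assumed functional-unit count per chip
FP32_PER_FU = 64e9         # 64 GigaFLOPs of FP32 per FU (quoted above)
BF16_PER_FU = 1.024e12     # ~1 TeraFLOP of BF16/CFP8 per FU (quoted above)

print(f"FP32 per chip: {FUS_PER_D1 * FP32_PER_FU / 1e12:.1f} TFLOPs")  # ~22.7
print(f"BF16 per chip: {FUS_PER_D1 * BF16_PER_FU / 1e12:.0f} TFLOPs")  # ~362
```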

This is the second chip the Tesla team has designed internally, after the FSD chip found in the Hardware 3 FSD computer in Tesla cars. "This chip is like GPU-level compute with CPU-level flexibility and twice the network chip-level IO bandwidth," says Venkataramanan.

Tesla then created what it calls "training tiles" to house the chips in its computer systems and to add the interfaces, power delivery, and thermal management. Each tile is made up of 25 D1 processors in an integrated multi-chip module, delivering 9 PetaFLOPs of compute and 36 TB/s of bandwidth in a format of less than one cubic foot.

Tesla intends to construct the first Dojo supercomputer by forming a computing cluster out of these training tiles. Dojo will be built by stacking two trays of six tiles in a single cabinet, for a total of more than 100 petaflops of compute per cabinet, according to the company. When finished, Tesla will have a single "Exapod" capable of 1.1 exaflops of AI compute across 10 connected cabinets; the system will contain 120 tiles, 3,000 D1 chips, and over one million nodes.
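
These scaling claims can be verified from the per-chip figure alone. A quick back-of-the-envelope sketch:

```python
# Sanity-check Dojo's quoted scaling numbers using only the per-chip
# figure from the article (362 TFLOPs at BF16/CFP8 precision).

D1_BF16_TFLOPS = 362
CHIPS_PER_TILE = 25
TILES_PER_CABINET = 12       # two trays of six tiles
CABINETS_PER_EXAPOD = 10

tile = D1_BF16_TFLOPS * CHIPS_PER_TILE     # ~9,050 TFLOPs, i.e. ~9 PFLOPs
cabinet = tile * TILES_PER_CABINET         # ~108,600 TFLOPs, i.e. 100+ PFLOPs
exapod = cabinet * CABINETS_PER_EXAPOD     # ~1.09 million TFLOPs, i.e. ~1.1 EFLOPs
tiles = TILES_PER_CABINET * CABINETS_PER_EXAPOD            # 120 tiles
chips = CHIPS_PER_TILE * tiles                             # 3,000 D1 chips

print(f"tile:    {tile / 1e3:.2f} PFLOPs")
print(f"cabinet: {cabinet / 1e3:.1f} PFLOPs")
print(f"exapod:  {exapod / 1e6:.2f} EFLOPs, {tiles} tiles, {chips} chips")
```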

2. Boosting Autonomy Using FSD

According to several presenters at Tesla AI Day, Dojo will be a key technology behind Tesla's "Full Self-Driving" (FSD) system, which, despite its name, is an advanced driver-support system that is not yet completely self-driving or autonomous.

FSD is a set of add-on features for Tesla's Autopilot driver-assistance system, sold as the Full Self-Driving package for as much as $10,000. Using computer vision, an FSD-equipped car senses its surroundings and acts intelligently on what it observes. It does this by feeding data from eight cameras strategically positioned around the car into neural networks that generate a 3D picture of the surroundings. As a result, the vehicle can make precise decisions and responses.
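
To make the camera-fusion idea concrete, here is a toy Python sketch of a multi-camera perception pipeline. This is an illustration of the concept only, not Tesla's actual FSD stack: the per-camera networks and the fusion network are stubbed out with random projections, and the frame and grid sizes are hypothetical.

```python
import numpy as np

# Toy illustration: eight per-camera "backbones" extract features,
# which are fused into a single top-down ("vector space") grid that
# a planner could act on. All networks are stubs, NOT Tesla's code.

rng = np.random.default_rng(0)

N_CAMERAS, H, W = 8, 96, 128          # hypothetical frame dimensions
FEAT, GRID = 64, (32, 32)             # hypothetical feature/grid sizes

def backbone(frame):
    """Stand-in for a per-camera neural net: frame -> feature vector."""
    proj = rng.standard_normal((frame.size, FEAT)) * 0.01
    return np.tanh(frame.ravel() @ proj)

def fuse(features):
    """Stand-in for the fusion net: 8 feature vectors -> top-down map."""
    x = np.concatenate(features)
    proj = rng.standard_normal((x.size, GRID[0] * GRID[1])) * 0.01
    return (x @ proj).reshape(GRID)

frames = [rng.random((H, W)) for _ in range(N_CAMERAS)]
bev = fuse([backbone(f) for f in frames])
print(bev.shape)   # (32, 32): one unified picture built from 8 cameras
```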

For the Tesla FSD system to operate properly and let the car act on what it sees, it must be trained on all possible real-world scenarios. Tesla launched FSD hardware version 2.5 in 2017 and FSD hardware version 3.0 in 2019. The Hardware 3.0 computer is now standard on all of its vehicles, including the newly refreshed Model S.


Tesla also intends to develop fundamental algorithms for driving the car that "create a high-fidelity picture of the world and design routes in that space." This will help the neural network make predictions while driving from video camera feeds.
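
"Designing routes in that space" can be illustrated with a toy planner over a top-down cost map like the one sketched earlier. Again, this is a hypothetical illustration of the idea, not Tesla's planner; the candidate paths and cost map are made up.

```python
import numpy as np

# Toy route selection: score a few candidate paths over a top-down
# per-cell cost map and pick the cheapest one.

rng = np.random.default_rng(1)
cost_map = rng.random((32, 32))        # hypothetical per-cell driving cost

def path_cost(path, cost_map):
    """Sum the cost of every grid cell a candidate path visits."""
    return sum(cost_map[r, c] for r, c in path)

# Three hypothetical candidate paths from the bottom row to the top row.
candidates = {
    "straight":   [(r, 16) for r in range(31, -1, -1)],
    "veer_left":  [(r, max(0, 16 - (31 - r) // 2)) for r in range(31, -1, -1)],
    "veer_right": [(r, min(31, 16 + (31 - r) // 2)) for r in range(31, -1, -1)],
}

best = min(candidates, key=lambda k: path_cost(candidates[k], cost_map))
print(best, round(path_cost(candidates[best], cost_map), 2))
```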

Musk stated, "This is not intended to be limited to Tesla automobiles. Those of you who have seen the full self-driving beta can appreciate the Tesla neural net's rate of learning to drive. And while this is a specific application of AI, I believe there will be other applications in the future that make sense." Shortly before Tesla AI Day, the company launched Full Self-Driving Beta 9.2. The new FSD Beta implements the long-awaited autosteer-on-city-streets function, albeit in a rough and incomplete form, and Tesla says the capability will roll out more broadly later this year.

According to Elon Musk's recent tweets, FSD Beta 9.2 is "actually not great imo," but the Autopilot/AI team is rallying to improve it as quickly as possible. The company is attempting to create a single software stack that can handle both highways and city streets, which will require extensive retraining of its neural networks.

3. Tesla Bot: Using Autonomous Vehicles as Inspiration

The Tesla AI Day event also contained a big surprise: the unveiling of the Tesla Bot, a humanoid robot that runs on the same AI as Tesla's fleet of autonomous vehicles. In a piece of showmanship, Musk brought out an actor in a bodysuit who breakdanced to a soundtrack of electronic dance music, but no working version of the robot was shown.

The robot, nicknamed Optimus, is expected to launch as a prototype next year. It would stand approximately 5ft 8in (1.7m) tall and weigh 125 pounds (56kg). The Tesla Bot's head will be equipped with eight Autopilot cameras, the same kind Tesla's vehicles already use to perceive their environment, plus a screen for displaying information. Tesla's FSD computer will process the camera feeds and drive the 40 electromechanical actuators distributed throughout the prototype robot.

The Tesla Bot would be capable of performing jobs such as attaching bolts to cars with a wrench and picking up groceries from shops. It will be able to carry 45 pounds, lift 150 pounds, and move at a top speed of 5 miles per hour.

Musk believes that a working humanoid robot capable of the repetitive jobs only humans can currently do has the potential to transform the global economy by lowering labor costs.



