On June 20, 2021, at the CVPR 2021 Workshop on Autonomous Driving, Tesla unveiled its brand-new supercomputer, signalling a shift to a purely computer-vision approach for Autopilot and a move away from radar and lidar sensors. Earlier this year, on April 10, Elon Musk tweeted: “When radar and vision disagree, which one do you believe? Vision has much more precision, so better to double down on vision than do sensor fusion.” Since then, it has been evident that Tesla is committing solely to camera-based optical systems to build safe autonomous vehicles.
The newly launched supercomputer is a predecessor of Tesla’s upcoming “Dojo” supercomputer. It comes with 10 petabytes of “hot tier” NVMe storage, roughly 1.8 EFLOPS of compute, and 720 nodes of 8x NVIDIA A100 80 GB GPUs, as stated by Tesla’s head of AI, Andrej Karpathy, during the CVPR 2021 event. Karpathy also claimed that this might be the fifth most powerful supercomputer in the world, after Sunway TaihuLight, but conceded that his team has not yet run the benchmark for the TOP500 supercomputer ranking.
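The quoted figures line up with a quick back-of-the-envelope check. Assuming the A100’s dense BF16/FP16 tensor-core peak of about 312 TFLOPS per GPU (the precision Tesla is quoting is not stated, so this is an assumption), 720 nodes of 8 GPUs each give roughly the cited 1.8 EFLOPS:

```python
# Back-of-the-envelope check of the quoted 1.8 EFLOPS figure.
# Assumption: A100 dense BF16/FP16 tensor-core peak of ~312 TFLOPS per GPU;
# the precision behind Tesla's number is not stated in the talk.
NODES = 720
GPUS_PER_NODE = 8
A100_PEAK_TFLOPS = 312  # assumed per-GPU peak (dense BF16/FP16)

total_gpus = NODES * GPUS_PER_NODE
total_eflops = total_gpus * A100_PEAK_TFLOPS / 1_000_000  # TFLOPS -> EFLOPS

print(total_gpus)               # 5760 GPUs in total
print(round(total_eflops, 2))   # ~1.8 EFLOPS
```

Note that this is theoretical peak throughput, not a measured HPL score, which is why the cluster has no TOP500 ranking yet.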
The Tesla newcomer possesses eight high-definition cameras, each capturing 36 frames per second of its surroundings; the $10,000 FSD package uses this feed to detect highway ramps and lanes and to respond to traffic signals. Furthermore, Tesla’s artificial-intelligence team employs supervised machine learning to train its neural networks on large, clean, diverse datasets created with Tesla’s auto-labelling feature. The trained model is then deployed on the Full Self-Driving (FSD) computer, Tesla’s in-house chip comprising 12 CPU cores, a GPU delivering about 600 GFLOPS, and two custom Neural Processing Units (NPUs).
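The general pattern described here, fitting a model to (feature, label) pairs where the labels come from an automatic labeller rather than human annotators, can be illustrated with a toy example. Tesla’s actual networks and auto-labelling pipeline are proprietary; the `auto_label` function and the logistic-regression loop below are purely hypothetical stand-ins showing the supervised-learning shape of the workflow:

```python
import math
import random

random.seed(0)

def auto_label(x):
    # Hypothetical stand-in for an auto-labeller: it assigns label 1
    # whenever the feature sum is positive, producing labels without
    # any human annotation.
    return 1 if sum(x) > 0 else 0

# Build a toy "dataset" of random 2-D features with automatic labels.
features = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
dataset = [(x, auto_label(x)) for x in features]

# Plain logistic regression trained by gradient descent on the log loss.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(100):  # epochs
    for x, y in dataset:
        z = w[0] * x[0] + w[1] * x[1] + b
        p = 1 / (1 + math.exp(-z))   # sigmoid prediction
        g = p - y                    # gradient of log loss w.r.t. z
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

# The "deployed" model simply reuses the learned weights at inference time.
accuracy = sum((w[0] * x[0] + w[1] * x[1] + b > 0) == (y == 1)
               for x, y in dataset) / len(dataset)
print(accuracy)
```

The real pipeline swaps the toy labeller for Tesla’s auto-labelling system, the 2-D features for camera frames, and the logistic regression for deep neural networks trained on the supercomputer, but the train-then-deploy structure is the same.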
By combining emerging technologies such as computer vision, GPU-clustered supercomputers, and NPUs, the new Tesla Autopilot is expected to provide a safer driving environment and help avoid accidents.