I missed this thought.
“They’ve rolled out a new architecture which uses a very different sensor strategy,” says Tim Dawkins, an autonomous car specialist at automotive tech research company SBD. “They needed to spend a little time building up their base data before they were able to release the same level of functionality as they had with hardware version 1.0.”
The first iteration of Autopilot relied on a single camera made by Israeli supplier Mobileye. The new setup uses eight cameras, dotted all around the car, feeding an in-house Tesla Vision system. The 12 ultrasonic sensors have been upgraded, and the radar is improved. A new on-board Nvidia computer is 40 times more powerful than its predecessor, and runs the necessary artificial intelligence software.
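Roughly, in data-structure form (the class and field names here are mine, not Tesla's; the numbers come from the paragraph above):

```python
from dataclasses import dataclass

@dataclass
class HW2SensorSuite:
    """Sketch of the Hardware 2.0 sensor set as described above."""
    cameras: int = 8        # surround cameras feeding Tesla Vision
    ultrasonics: int = 12   # upgraded short-range sensors
    radar: str = "improved forward radar"
    compute: str = "onboard Nvidia computer, ~40x HW1"
```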
And here is the money quote:
Where a conventional automaker might do that training with qualified drivers in controlled environments, or on private tracks, Tesla used its customers. It pushed fresh software to 1,000 cars on December 31, then to everybody in early January. That code ran in what Tesla calls Shadow Mode, collecting data and comparing the human driver’s actions to what the computer would have done. That fleet learning is Tesla’s advantage when it comes to educating and updating its AI computers.
“This is the uniquely Tesla approach, in the way that they have their consumers build up that rich data set, from which they can train up their AI,” says Dawkins.
Of course. Why didn’t I think of that? I mean, I didn’t have to go do it, but just consider the idea. The car is full of sensors, so why not let it shadow real drivers for a season and learn how they deal with real situations that appear on the road?
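To make the idea concrete, here is a minimal sketch of what one shadow-mode step might look like. To be clear, this is my own illustration: the names, the thresholds, and the logging format are all invented, not Tesla's.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ControlAction:
    steering_deg: float   # steering wheel angle
    accel_mps2: float     # requested acceleration

def shadow_step(sensor_frame, human_action, model, log):
    """Run the planner in shadow: compute a decision, compare it to the
    human's, and log disagreements. Nothing here touches the actuators."""
    predicted = model.predict(sensor_frame)  # what the computer would have done
    disagrees = (
        abs(predicted.steering_deg - human_action.steering_deg) > 2.0
        or abs(predicted.accel_mps2 - human_action.accel_mps2) > 0.5
    )
    if disagrees:
        # Only the interesting frames go back to the fleet-learning
        # pipeline, which keeps upload volume manageable.
        log.write(json.dumps({
            "human": asdict(human_action),
            "predicted": asdict(predicted),
        }) + "\n")
    return disagrees
```

The thresholds are the part a real system would sweat over: too tight and every frame gets uploaded, too loose and the edge cases never make it home.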
Better than rules-based. Learn from those who already know how to drive.
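Taken literally, "learn from those who already know how to drive" is behavior cloning: treat the human driver's logged actions as supervised labels. A toy sketch with a linear model and made-up shapes; a real system would be a deep network, but the shape of the idea is the same:

```python
import numpy as np

def behavior_cloning_step(weights, features, human_actions, lr=1e-3):
    """One gradient step of linear behavior cloning: nudge the predicted
    action toward what the human drivers actually did.

    features:      (n_frames, n_features) sensor-derived inputs
    human_actions: (n_frames,) logged driver actions (the labels)
    """
    predicted = features @ weights                 # model's guess per frame
    error = predicted - human_actions              # distance from the experts
    weights = weights - lr * features.T @ error    # move toward the experts
    return weights
```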
You need cars with the sensors. You need the cars connected back to Tesla’s cloud. And you need lots of drivers driving through all the different conditions: rain, snow, fog, fast, slow, city, suburb, rural, interstate, toll roads, bridges, tunnels, parking garages, etc.
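And you would want to know which of those conditions the fleet has actually covered. A tiny hypothetical sketch of a coverage check over logged drives; the tags and record format are mine, not Tesla's:

```python
from collections import Counter

WEATHER = {"clear", "rain", "snow", "fog"}
ROAD = {"city", "suburb", "rural", "interstate", "toll_road",
        "bridge", "tunnel", "parking_garage"}

def coverage_report(drives):
    """Count drives per (weather, road) pair and list the pairs the
    fleet has never logged, i.e. the gaps still to be driven."""
    seen = Counter((d["weather"], d["road"]) for d in drives)
    gaps = [(w, r) for w in WEATHER for r in ROAD if (w, r) not in seen]
    return seen, gaps
```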