A very interesting write-up of a talk given on superintelligence. The article is quite good in that it walks smartly through the reasoning behind the concern that artificial intelligence could become a real problem for mankind in the future. It is a well-reasoned approach to thinking about why this is possible. If you accept the stated ideas one by one, then the conclusion is a real possibility. Most interesting.
Great article in the New York Times about the challenges with autonomous vehicles. I continue to be fascinated by solving this problem and the impact it will have on all of us. I’ve written about it here and here and in other places.
This article highlights some of the very subtle but real challenges of having an automated system control every aspect of a complicated problem. It is these boundary, or edge-case, problems that are the hard ones to solve. Things on the edge. These are the issues and problems that will delay automation from moving into the mainstream. Having a car stay between two lines is not the hard problem.
And then in the decade after these problems are solved and these vehicles are the norm, we’ll all forget how to manually drive the car because we’ve turned that problem over to the machines. Just like pilots today who are so used to autopilots that when they have to take over in a complex situation, they may not be up to the task.
When I wrote earlier about autonomous vehicles and mentioned trucking and truckers, I hadn’t even considered the idea that someone could move faster by not trying to solve the ‘last mile’ problem and instead focusing only on autonomous driving of trucks on the highways and interstates. Have the vehicle drive itself 95% of the way there, but then have a ‘real driver’ meet the truck for the last leg of the delivery. Use real drivers to take trucks out of and into cities and complicated areas.
See a write up about it here.
I missed this thought.
“They’ve rolled out a new architecture which uses a very different sensor strategy,” says Tim Dawkins, an autonomous car specialist at automotive tech research company, SBD. “They needed to spend a little time building up their base data before they were able to release the same level of functionality as they had with hardware version 1.0.”
The first iteration of Autopilot relied on a single camera made by Israeli supplier Mobileye. The new setup uses eight cameras, dotted all around the car, feeding an in-house Tesla Vision system. The 12 ultrasonic sensors have been upgraded, the radar is improved. A new on-board Nvidia computer is 40 times more powerful than its predecessor, and runs the necessary artificial intelligence software.
and here is the money quote:
Where a conventional automaker might do that training with qualified drivers in controlled environments, or on private tracks, Tesla used its customers. It pushed fresh software to 1,000 cars on December 31, then to everybody in early January. That code ran in what Tesla calls Shadow Mode, collecting data and comparing the human driver’s actions to what the computer would have done. That fleet learning is Tesla’s advantage when it comes to educating and updating its AI computers.
“This is the uniquely Tesla approach, in the way that they have their consumers build up that rich data set, from which they can train up their AI,” says Dawkins.
Of course. Why didn’t I think of that? I mean, I didn’t have to go do it, but just consider the idea. The car is full of sensors, so why not let the car shadow real drivers for a season and learn how real drivers deal with real situations that appear on the road?
Better than rules-based. Learn from those who already know how to drive.
You need cars with the sensors. You need a connected car back to Tesla/cloud. And you need lots of drivers driving through all different conditions. Rain, snow, fog, fast, slow, city, suburb, rural, interstate, toll roads, bridges, tunnels, parking garages, etc.
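To make the idea concrete, here is a minimal sketch of what a shadow-mode loop might look like. This is purely my own illustration, not Tesla’s actual system: every name, the toy steering policy, and the disagreement threshold are assumptions. The point is just that the model proposes an action on every frame, never actuates anything, and only frames where it disagrees with the human get logged for later training.

```python
# Hypothetical sketch of "Shadow Mode"-style data collection: the autonomy
# stack computes a decision on every frame but never controls the car;
# only disagreements with the human driver are logged for training.
from dataclasses import dataclass

@dataclass
class Frame:
    sensors: dict          # camera/radar/ultrasonic readings (placeholder)
    human_steering: float  # what the human driver actually did

def model_steering(sensors: dict) -> float:
    """Toy stand-in for the on-board model's proposed steering angle."""
    return sensors.get("lane_offset", 0.0) * -0.5

def shadow_mode(frames, disagreement_threshold=0.1):
    """Compare the model's proposal to the human's action; log mismatches."""
    logged = []
    for f in frames:
        proposed = model_steering(f.sensors)
        if abs(proposed - f.human_steering) > disagreement_threshold:
            # These frames would be uploaded for fleet learning / retraining.
            logged.append((f.sensors, f.human_steering, proposed))
    return logged

frames = [
    Frame({"lane_offset": 0.2}, human_steering=-0.1),  # model agrees
    Frame({"lane_offset": 0.2}, human_steering=0.4),   # model disagrees
]
print(len(shadow_mode(frames)))  # → 1 logged disagreement
```

Because the model never drives, the fleet can gather training signal across all those conditions (rain, snow, tunnels, parking garages) with zero added risk, which is presumably why the approach scales so well.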
I’ve been fascinated with autonomous vehicles and their coming and their impact on everything. I wrote a post last May about aspects of this and linked to a number of articles that I had read and found interesting at that time.
My daughter and son-in-law recently purchased a Tesla and I’ve had the chance to ride in and drive their vehicle. Of course the main point of the vehicle is that it is electric and does not have a gasoline engine at all. I think most people think of the electric vs. gasoline difference as the main point with the Tesla.
However, the most interesting point to me is the aggressive and smart progress towards autonomy. The car has sensors all around it to detect obstacles, the road, people, other cars, etc. Tesla has posted a video of what the car is ‘seeing’ as it moves down the road, and it is fascinating and worth taking a look at if you’ve not seen it.
The Verge wrote about this video with comments here.
The question I have is: why are Tesla (and Google and maybe Apple) leading out on these capabilities and not Ford, Toyota, or BMW? These latter companies have had these platforms (vehicles) for years, and Tesla is relatively new.
This doesn’t seem like an innovator’s dilemma case. Electric vs. gasoline might be an innovator’s dilemma case where the incumbents are clinging to their success with gasoline powered engines. Autonomous driving is independent of electric or gasoline. Gasoline powered cars could have the same capabilities or variations of it.
A gasoline-powered traditional vehicle could have all these sensors on board and could be making aggressive steps towards autonomy. We could already have had these sensors warning us of obstacles and helping the driver be safer. What we have are some vehicles with automatic emergency braking and, in some cases, heads-up displays that warn if a deer is on the road ahead. Not much else.
It seems more to me like Tesla can just see a different future and is aggressively heading in that direction. It is leadership and vision.
This is going to be an amazing ride.