There are two long-standing schools of thought about the role of Artificial Intelligence. The first is Autonomy, in which AI devices and systems simply replace us. The second is Augmentation, in which AI serves us as a new toolset. I favor the toolset approach, and I am writing Augmenting People: Why AI Should Back Us Up, Not Push Us Out to explain why you should be concerned as well.
Ultimately, the decisions made today between AI augmentation and autonomy will shape the future of the human race. Over the next 10-15 years, this choice will restructure our relationships with machines, each other, the planet and the universe. Right now some of the smartest technologists in the world, working at some of the best-established and most promising tech companies, are inventing ways for AI to do everything: nanobots that will swim through our blood zapping cancer cells, and cars so autonomous that they are built without steering wheels and outfitted with fold-out beds for a good snooze.
You’ve already read about the cars and how great they will be. In the autonomy/augmentation debate, it seems the decision has already been made: robotic cars will prevail. The government, insurance carriers, and even car-makers are just about unanimous that these vehicles are safer and far less polluting. A younger generation seems to prefer self-driving cars to their parents’ preference for being behind the wheel and in control.
Just about every automaker is moving toward what is called Level 5 Autonomy, in which the machines do all the driving while humans do whatever they please. Volvo is even designing a sleeping car, because passengers need do nothing but get inside and tell it a destination.
There are mountains of data arguing the case for automobile autonomy. Millions of miles have been logged by traditional and new automakers with very few mishaps and only two fatal accidents, both blamed on distracted humans.
Cases like this sound pretty compelling. Data is the protoplasm that makes AI seem, well, intelligent. The stats argue that if we remove humans from the loop, lives will be saved, pollution will be reduced, existing roadways can be retrofitted to carry more cars at higher speeds, and travel time will be reduced.
Many thinkers, including those at Ford Motors, see the road ahead as covered with robotic cars that no one owns, which will continuously be in transit to users who will wait less than a minute to be picked up. All the space in cities used today for garages and parking lots can be converted to affordable housing, since the cars will always be in motion.
I was pretty convinced that automobile autonomy was not only beneficial but inevitable until about a month ago, when my hiking buddy David Strehlow pointed me to an article in an automotive magazine profiling Captain Chesley “Sully” Sullenberger, the commercial pilot who on January 15, 2009, ignored the advice of computers and landed his plane, US Airways Flight 1549, on the Hudson River. In American culture, the incident may be the last human experience widely considered a miracle.
You may recall that shortly after takeoff, Sully’s plane struck a flock of birds that got sucked into both engines, rendering them inoperable. The tower advised him to return to LaGuardia, where he had just taken off, or perhaps to divert to Teterboro, a nearby New Jersey airport.
There was an urgent need to assess the situation and act fast. The computers advised returning to LaGuardia; Sully, as the human in charge, chose the river landing instead and saved 155 lives. Simulations would later show that attempting the airport landing would likely have been fatal to all.
Sully was the first human decision-maker to face such a situation, and he decided quickly based on his many years of human experience. The computers were smart, but everything they knew they had to be taught, and no one could teach them something that had never happened before and was not anticipated to happen to Flight 1549.
As I hiked along with David, I asked him what all this had to do with my book on AI and the future of work. “Everything,” David told me. “Humans need to remain in the loop. Computers do what they are taught to do with great speed and accuracy. But they cannot handle the unknown unknowns we encounter when we drive.”
Where Robots Dare to Tread
David and I were hiking on a popular trail off Skyline Boulevard, a scenic, wooded, gently curving road that could be called Silicon Valley’s western border. It matters to this story because of an ironic touch. Five years earlier, not far from where we had parked, I had spotted an odd-looking vehicle in front of me on Skyline. It looked as if it had a home air-conditioning unit fixed to the roof, and it was driving at precisely the speed limit, to the annoyance of me and the lengthening line of drivers who knew that even police cars went at least five miles over the limit in that area.
This was my first encounter with a self-driving car. They have improved greatly since then, perhaps faster than most people expected. The car I had seen was a prototype, and it was on a road where learning about humans and driving would require making fewer choices than it would face in, say, Times Square or the pandemonium of the Piazza del Colosseo in Rome. Picture what it would take to self-drive on the highways of India, where goats, cows and elephants have been known to meander, or on the anarchic streets of Beijing.
Even the smartest cars could not safely navigate these scenarios, because they are magnets for unknown unknowns. When I was driving in Rome and encountered that nightmare traffic circle, I was nearly paralyzed. I could not even figure out which direction I was supposed to head in. Then I did a very human thing: I saw a guy in front of me who looked like he knew what he was doing, so I maneuvered to get directly behind him and stayed close to his back bumper all the way through the circle. As luck would have it, he was heading in the same direction I was.
My point is that robot driving is fine for limited or controlled conditions. But drivers know that driving conditions are not really all that controllable. The fact is that robots, chatbots, vehicles, vessels and drones continue to lack common sense, and will continue to lack it for some time to come. AI responds faster and more accurately than we do, but only to the known unknowns.
This lack of common sense explains why the test car I followed on Skyline went the speed limit when everyone else knows it is safe to exceed it a little. It is why we all make left turns at intersections by cutting in front of each other when the law says we are supposed to cut behind the other car. It is why the locals have no problem at the Piazza del Colosseo, and we outsiders have the common sense to do what the locals do. Chances are good that your robot car will simply stop, stupefied by the madness of it. It is why Waymo’s neighbors get pissed off at robot cars that take forever to get through a T-intersection or come to a full stop while merging onto a highway from an onramp.
This brings us back to Sully and where I got the term Unknown Unknowns.
In 2017, Sully joined a seven-person panel entrusted by the government to study the feasibility of autonomous cars. According to The Drive magazine, the rest of the panel was made up of representatives of companies developing self-driving cars.
While the six other panelists extolled the many authentic virtues of self-driving cars, Sully did not. He did not dispute the advantages being stated, but he worried about the unknown unknowns that you don’t discover on Skyline Boulevard but may encounter on the main highways from Silicon Valley into the city. When a child’s ball bounces into the street from behind a parked car, it may be safer to have a human driver whose experience gets her or him to hit the brakes before the pursuing child is actually seen.
It strikes me how many millions of sanitized and protected test-driving miles are being logged, where the likelihood of unknown unknowns is kept low. A couple of years back, I got to be a passenger in a test Mercedes that amazed me as it cruised smoothly at 55 mph and then gracefully made a left turn. But we were in a desert. Had we missed the turn, we would probably have landed softly and uninjured in the sand.
It makes me wonder how many of the millions and millions of test miles in robot cars have encountered the sorts of surprises you may have experienced on a crazy rotary in Rome or Beijing. I once drove a Toyota across South Dakota. When I encountered a buffalo in the road, I hit the brakes and found myself making direct eye contact as the massive beast lowered its head. By the time it began to charge my Toyota head-on, I was already in reverse, going backwards as fast as I could maneuver, until the buffalo tired itself out and wandered off in another direction.
What would a robot car have done in such an unknown unknown? It would probably become completely buffaloed, and that could be painful for any passengers.
Designing Humans Into the Loop
Sully’s point is not to eliminate self-driving cars, but that for a long time to come, humans need to be kept in the loop. We must be part of the equation. Computers do so many things better than we do, and yes, they are probably safer and certainly friendlier to the environment.
I agree with him that we must be decision-makers, not observers. If you or I were sitting in a co-pilot seat with no one behind the wheel (if the wheel is still there), we would get bored easily and become distracted from our assigned task of second-guessing a supposedly flawless machine. That sort of distraction was a factor in the two deaths attributed to robot cars so far. The problem with unanticipated hazards is that they are unanticipated.
My book is about AI and job loss. Personally, I hope the millions of people whose job is to drive all sorts of vehicles will be assisted by AI, not replaced by it. The same goes for a significant percentage of the billions of people whose cars augment their work.
It seems to me that we are approaching the relationship between cars and people in reverse order. We need to keep human drivers in the loop. And there are many other places where we humans will be needed for a long, long time to come.
Subscribe to ItSeemstoMe