May 1, 2016 by Greg Meckbach, Associate Editor
Suppose that, because the name Bruce Springsteen appears nine times in this editorial (twice in the lede), an Internet search engine misidentifies Bruce Springsteen, rather than self-driving cars, as a key topic.
Search engines are now pertinent to traffic safety because Google Inc. is advocating for self-driving cars without steering wheels or pedals. Ontario is one North American jurisdiction with an approval process for autonomous vehicles that requires a human driver be able to take over.
So if Bruce Springsteen, for example, got tired of his pink Cadillac and decided to ride in the back seat of a trial autonomous vehicle, he would have to have a chauffeur in the front who could take over.
A safe human driver is someone who not only is fit to drive, but is paying attention and has sound judgment – similar to a reader who would be able to figure out that Bruce Springsteen is not the main topic of this editorial.
This is not to suggest that search engines will categorize this as an article about Bruce Springsteen. A search engine may be programmed in such a way that it detects that Bruce Springsteen was just chosen as an example, or may even ignore the term Bruce Springsteen in this context.
However, it would not be unreasonable to anticipate that a well-programmed search engine, operating exactly as its programmers intended, could misidentify Bruce Springsteen as a key topic of this editorial.
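To see why, consider a minimal sketch of a frequency-based topic extractor. This is not how Google's ranking actually works; it is a deliberately simplistic illustration, assuming a hypothetical heuristic that treats the most-repeated phrase as the subject, with a toy text standing in for this editorial (where the example name outnumbers the actual subject, much as Bruce Springsteen appears nine times here).

```python
from collections import Counter

def naive_key_topic(text: str, phrases: list[str]) -> str:
    """Pick the candidate phrase occurring most often in the text.

    A raw-frequency heuristic: it has no notion of whether a phrase
    is the subject of the piece or merely a recurring example.
    """
    counts = Counter({p: text.lower().count(p.lower()) for p in phrases})
    return counts.most_common(1)[0][0]

# Toy stand-in for this editorial: the example name is repeated
# more often than the real topic.
sample = ("Bruce Springsteen " * 9) + ("self-driving cars " * 4)
print(naive_key_topic(sample, ["self-driving cars", "Bruce Springsteen"]))
# prints "Bruce Springsteen" -- the program did exactly what it was
# told, and still got the topic wrong
```

The point of the sketch is that the program obeys its instructions perfectly; the misidentification comes from a situation the heuristic was never designed to handle.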
By the same token, a human reading this may have no idea what the article is about, especially if that human is drowsy, intoxicated or distracted by a mobile device instead of reading. In that case, a computer program may have a better chance of identifying the topic than a human who is not actually looking at it.
This is the argument for why self-driving cars could prove statistically safer than human drivers. It is not because cars are inherently suicide machines or because software has better judgment than humans. It is because too many human drivers are distracted, falling asleep, impaired, reckless or incompetent.
Computer programs simply obey commands. They are not born to run red lights, break the speed limit, fall asleep, get drunk or play Tetris when they are supposed to be paying attention to the road.
But are self-driving cars inherently safer than human drivers?
“To verify self-driving cars are as safe as human drivers, 275 million miles must be driven fatality-free,” said Missy Cummings, director of the Humans and Autonomy Lab at Duke University, during a hearing March 15 before the United States Senate Committee on Commerce, Science and Transportation.
“Given self-driving cars’ heavy dependence on probabilistic reasoning and the sheer complexity of the driving domain, to paraphrase [former U.S. defense secretary] Donald Rumsfeld, there are many unknown unknowns that we will encounter with these systems.”
During that hearing, Google self-driving car director Chris Urmson testified that 94% of accidents in the U.S. are due to human error. However, the fact that good software can drive more safely than some human drivers does not make computers inherently safer than human drivers.
What is the worst thing that would happen if a computer program mistakenly identified this editorial as an article about The Boss? Certainly no one would get injured or die.
But what is the worst thing that would happen if an autonomous vehicle – lacking the judgment that a human driver ought to have – is presented with a situation unforeseen by the programmer?
“These systems will not be ready for fielding until we move away from superficial demonstrations to principled, evidence-based test and evaluations, including testing human/autonomous system interactions and sensor and system vulnerabilities in environmental extremes,” Cummings contends.
If this editorial does not appear in search results for Bruce Springsteen, that may bode well for the safety of self-driving cars – assuming that the software driving the cars is at least as good as web search algorithms.