In March this year an Uber-owned Volvo XC90 operating in autonomous mode – but with a human minder in the driver’s seat – ran over and killed a pedestrian in Tempe, a suburb of Phoenix, Arizona.
Until then, the road to integrating self-driving cars onto our streets had seemed relatively smooth. But with this tragic event, autonomous vehicles are hitting the brakes.
Uber took autonomous vehicles off the road in all four cities where it was testing them: Pittsburgh, San Francisco, Phoenix and Toronto. In Arizona the governor suspended the company’s autonomous driving privileges. About 10 days after the incident, the lawyer for the victim’s family told the Associated Press the “matter had been settled”.
From Uber to heavy trucks and private passenger cars, research and experimentation with autonomous driving (AD) is making news, some good, and some, like the recent death in Arizona, sobering. This incident raises numerous questions about the technology’s readiness to safely interact with its surroundings, and highlights many issues that will affect automobile insurance in the future.
But even without the Uber crash to prompt the conversation, the rapid evolution of automated driving technologies promises to alter government regulations and industry practices, while the potential obsolescence of human drivers will also change the legal landscape for consumers, car manufacturers, the legal profession, insurance companies and others.
With this in mind, it’s time for a look at why AD matters, available technologies on the road today, how autonomous and human-driven cars will interact, and how this evolution might affect the insurance industry.
The case for automation
It’s dangerous on the road. According to an Insurance Institute of Canada (IIC) report (Automated Vehicles: Implications for the Insurance Industry in Canada), in the last 30 years 6.7 million Canadians suffered injuries in collisions, while more than 94,000 have died on our roads. The Conference Board of Canada estimated that in 2011 the annual cost of fatalities and injuries from motor vehicle accidents was $46.7 billion.
What’s at the root of this terrible toll in suffering and cost? Driver error. Human mistakes have been found to be responsible for upwards of 90 percent of motor vehicle crashes.
Self-driving cars promise to alleviate the carnage, reduce costs and confer many other benefits as well. If the unpredictable, unreliable, potentially intoxicated and distractible human is taken out of the driving equation, and sleepless, smart machines take over, that 90 percent could be all but eliminated, AD proponents say.
On the cost side the Conference Board notes that, conservatively, the 2011 costs could have been reduced by $37.4 billion if 80 percent of crashes had been eliminated. Clearly, it’s not a simple calculus, with factors such as the changing value of repairs to increasingly sophisticated autonomous vehicles moving the goal posts around, but the underlying logic holds true: anything we can do to reduce car crashes will save lives and unnecessary costs.
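The Conference Board’s estimate is, at bottom, a straightforward proportion: roughly 80 percent of the $46.7 billion annual total. A minimal sanity check of the arithmetic, using only the figures cited above:

```python
# Back-of-the-envelope check of the Conference Board's savings estimate.
# Figures come from the 2011 estimate cited above; the calculation is a
# simple proportion, not a model of crash economics.
annual_cost_2011 = 46.7   # billions of dollars (fatalities and injuries)
crash_reduction = 0.80    # assumed share of crashes eliminated

savings = annual_cost_2011 * crash_reduction
print(f"Estimated savings: ${savings:.1f} billion")  # → $37.4 billion
```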
Other benefits include reclaiming productivity and wellbeing lost in long commutes at the wheel. Public health studies the world over show that the stress of a congested commute shortens lifespans and reduces quality of life.
The 2016 Canadian census found that while the average Canadian driver spends 24 minutes on the road to work each day, more than 850,000 people working in 2016 spent at least one hour getting to work in a private vehicle each day. In Vancouver, Montréal and Toronto the average one-way commute time is around 30 minutes. Self-driving cars could alleviate not only the stress and tedium of driving, but also help minimize congestion by being good drivers – staying in the correct lane, keeping a safe distance and preventing the accordion effect from developing on busy rush-hour roads.
Fully autonomous vehicles also promise greater freedom and mobility for those who cannot drive themselves, ending their reliance on family, public transit or taxi services to get around.
A few caveats
Along with these benefits, analysts also see some roadblocks to increasing automation. A recent study by Todd Litman of the Victoria Transport Policy Institute notes that with increasing sophistication, cars will become more expensive, and given their longevity it will take years before even currently existing technologies achieve a high adoption rate.
As well, the price of error will be much higher, as the Uber crash bears out. “System failures by cameras, telephones and the Internet can be frustrating, but are seldom fatal,” he notes. “System failures by motor vehicles can be frustrating and deadly to occupants and other road users.”
The risk is not just from failures, either. Litman notes that cyber security for cars will become a major concern, with the danger that a car might be taken over by a malicious hacker. Either hacking or system failure may result in collisions, he says, meaning that the anticipated reductions in crash costs may not materialize as many hope. Hacking also poses personal privacy and data breach concerns, Litman points out.
From a societal and economic perspective there may be larger costs resulting from lost employment in the transport sector, and higher infrastructure costs due to the necessity of sensors and high tech equipment being built into our public roads.
Who’s driving now?
Already, many of us are driving vehicles with semi-autonomous capabilities. If you have advanced collision prevention technology aboard your car or truck, or even adaptive cruise control, you are piloting a partly autonomous vehicle.
The Society of Automotive Engineers (SAE) came up with a classification system for the degree of autonomy in vehicles (see chart next page). It groups cars into six levels, from zero to five, based on the amount of human versus system control over driving tasks. At present most cars and light trucks with automation technology on the road are at level one or two. Uber and other self-driving cars currently being tested are at level-three and level-four automation, with a human driver still expected to take over some tasks when needed.
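The SAE scale amounts to a simple lookup from level number to degree of system responsibility. A minimal sketch, with descriptions paraphrased rather than quoted from the standard:

```python
# Illustrative summary of the SAE automation levels discussed above.
# Descriptions are paraphrased for brevity, not the standard's exact wording.
SAE_LEVELS = {
    0: "No automation: the human performs all driving tasks",
    1: "Driver assistance: one automated function, e.g. adaptive cruise control",
    2: "Partial automation: combined steering and speed control; driver monitors",
    3: "Conditional automation: system drives, human must take over on request",
    4: "High automation: no human fallback needed within a defined operating domain",
    5: "Full automation: the system drives under all conditions",
}

def describe(level: int) -> str:
    """Return a short description of an SAE automation level."""
    return SAE_LEVELS.get(level, "Unknown level")

print(describe(3))
```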
Level one and two technologies are becoming increasingly commonplace, and have been part of the Insurance Institute for Highway Safety’s rating program since 2013, as well as earning discounts from many insurers.
These include numerous aids such as:
Adaptive cruise control keeps a constant distance to the car in front, and slows or possibly stops the vehicle when the car in front is moving too slowly.
Forward collision prevention uses sensors to measure distance and relative speed, and warns the driver if a collision is imminent. While warning, the system pre-charges the brakes, and if the driver is too slow to respond, some systems engage the brakes automatically. Some systems also prepare the car for a collision by tightening seatbelts and closing windows to minimize the effects of a crash.
Rear backover prevention stops the car in reverse before it hits something.
Lane departure warning alerts the driver when the vehicle is drifting out of its lane and in some cases will steer the car back into the lane.
Blind spot detection notifies the driver there is a vehicle that cannot be seen and in some cases prevents the vehicle from changing lanes into the other car.
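The decision logic behind a feature like forward collision prevention can be sketched with a time-to-collision calculation: the gap to the vehicle ahead divided by the closing speed. The thresholds below are illustrative assumptions, not values from any production system:

```python
# Minimal time-to-collision (TTC) sketch of a forward collision system.
# distance_m: gap to the vehicle ahead, in metres.
# closing_speed_mps: rate the gap is shrinking (positive = gaining on it).
# Thresholds are illustrative assumptions, not from any real system.
WARN_TTC_S = 2.5    # warn the driver below this many seconds
BRAKE_TTC_S = 1.0   # brake automatically below this many seconds

def collision_response(distance_m: float, closing_speed_mps: float) -> str:
    if closing_speed_mps <= 0:
        return "no action"  # gap is steady or growing
    ttc = distance_m / closing_speed_mps
    if ttc < BRAKE_TTC_S:
        return "automatic braking"
    if ttc < WARN_TTC_S:
        return "warn driver, pre-charge brakes"
    return "no action"

print(collision_response(30.0, 15.0))  # TTC = 2.0 s → warning
```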
With each successive model year, manufacturers are adding technology, bumping previous years’ innovations up from expensive options to standard equipment, and pushing cars further to the autonomous end of the SAE scale.
Given that widespread fully autonomous driving is likely decades away – the Institute of Electrical and Electronics Engineers projects it will take until at least 2040 before 75 percent of vehicles are autonomous – there will be a long period in which these new technologies will interact on the roads with human-driven vehicles as well as pedestrians, cyclists and others. And it will be a challenge.
Thanks to the complexity of the unpredictable street and highway environment, with multiple moving objects, signs, lines, guardrails, traffic signals, pedestrians, animals, weather, lighting conditions, even cultural differences in driving styles, and so on, the amount of computing power required for AD is astronomical. The US Government Accountability Office explained in a 2016 report that a Boeing 787 Dreamliner aircraft, one of the most sophisticated in the air today, requires about 6.5 million lines of code to operate. A modern luxury car, on the other hand, needs 100 million lines to manage all its systems and options, and that’s before autonomous driving capability is introduced.
It will require extremely complex algorithms to allow autonomous cars to make decisions the way humans do. Sure, computers have quicker reaction times and can calculate myriad probabilities in a split second, but it will take sophisticated machine learning for cars to come to the level of decision making that an average human demonstrates.
One area where this is often cited as a challenge is in the ethics of decision making. If the car has no choice but to hit something, will it be able to distinguish between a dog and a toddler and make the right choice?
“There’s no value-free way to determine what the autonomous car should do,” say Brett Frischmann and Evan Selinger, both professors and co-authors of Re-Engineering Humanity, in a recent op-ed on Vice.com. “The choices…shouldn’t be seen as a computational problem that can be ‘solved’ by big data, sophisticated algorithms, machine learning, or any form of artificial intelligence. These tools can help evaluate and execute options, but ultimately, someone – some human beings – must choose and have their values baked into the software.”
As manufacturers work to develop ever-more sophisticated self-driving capabilities that move responsibility from the human driver to the designers and programmers building the machines, they will have to figure out how to take this into account. “Engineers will embed the ethics in decision-making algorithms and code,” say Frischmann and Selinger, but should they be the ones making those decisions? It’s a question industry and society will have to address soon, before arbitrary ethics become entrenched in the technology, the authors warn.
The IIC report highlights the challenge of figuring out who is responsible for collisions. “Personal liability for most collisions will begin to shift to include a mix of personal and product liability,” the report notes, as cars increasingly take on the chore – and risks – of driving. Insurers will need to design policies to cover the human driver, the driver of the semi-autonomous vehicle and the fully automated car, as all three share the road.
Mercedes-Benz, one automaker with sophisticated semi-autonomous technology on the road in many of its vehicles and which is testing its E-Class AD cars in Nevada, believes that most currently existing liability laws will serve the changing model. “The legal framework that applies to the current driver assistance systems also forms a reasonable basis for the next stages of development. However, changes need to be made to the technical regulations as well, in view of the future of autonomous driving,” says Renata Jungo Brüngger, member of the board of management of Daimler AG, responsible for integrity and legal affairs.
With regard to current, partially automated systems, under the laws of many countries the driver remains responsible. Although safety systems support the driver, the driver must maintain control of the vehicle and intervene in case of an emergency. If the driver causes an accident, he or she is liable for the resulting damage, along with the owner of the vehicle. Manufacturers are responsible for damage flowing from product defects.
“This shared combination of liability among the driver, owner and manufacturer offers a balanced distribution of risks, protects victims and has proven itself in practice,” says Jungo Brüngger.
The road ahead
While Berkshire Hathaway’s Warren Buffett believes self-driving cars will limit the insurance business, as he told the company’s annual meeting a year ago, a recent report by Accenture (Insuring Autonomous Vehicles) takes the opposite tack. It suggests that while individual insurance premiums will fall, they will be offset by the need for new kinds of car insurance related to the cyber risks noted above, including new kinds of liability insurance and cyber-infrastructure coverage for the computing systems that keep the cars running. The consulting firm believes new premiums for these coverages could be worth US$15 billion by 2025.
As well, while the frequency of collision claims may decline if AD lives up to its safety promise, the severity and expense of collision claims is expected to climb. With sophisticated sensors and technology aboard, self-driving cars will be much more expensive to repair, says Tom Super, director of the property and casualty insurance practice at J.D. Power.
The consensus is that auto insurance is not going away, but as with so many facets of the industry, it will have to change to accommodate the new kind of driving. “What we’re going to see for the next 20 years is a messy, complex and uneven transition that raises a number of questions,” said Insurance Bureau of Canada (IBC) president Don Forgeron in a speech on the future of insurance this April. “How do we begin to adapt to a transportation system in which the concept of liability is going to be transformed?…How [do we] price liability that is going to shift from the driver to the manufacturer?”
Likewise, in its AD report IIC raised numerous questions that will need answers before there can be clarity around auto insurance in this evolving environment. How will insurers get information about what happened in a collision? Will they have access to onboard data? How will they collect from automakers when the technology is found to be at fault? How will costs be shared when both driver error and system failure contribute to a crash?
While these and many other questions remain to be resolved on the continuing journey to autonomous driving, one thing is certain: there is no need to rush decisions. While self-driving technology is advancing rapidly, it is still in its formative years, and not at all ready to take the wheel. It has barely earned a learner’s permit.
The Uber Volvo crash is proof enough that there is much to be learned and put into practice before we can sit back and let our car play chauffeur. Even Raj Rajkumar, who heads the GM-Carnegie Mellon University AD lab and is one of the world’s preeminent AD researchers, said after the Uber crash: “Companies need to take a deep breath. The technology’s not there yet.”