Canadian Underwriter

Why fully automated cars are a lot further away than you think

July 8, 2018   by Jason Contant


Don’t hold your breath waiting for the first fully autonomous car to hit the streets anytime soon.

Car manufacturers have projected for years that we might see fully automated cars on the roads by 2018. But for all the hype, it may be years, if not decades, before self-driving systems can reliably avoid accidents, according to a blog post published Tuesday by The Verge.

The million-dollar question is whether self-driving cars will keep getting better – like image search, voice recognition and other artificial intelligence “success stories” – or whether they will run into the same “generalization” problem that has dogged chatbots, which often fail to produce original responses to unfamiliar questions.

Generalization, author Russell Brandom explained in the post “Self-driving cars are headed toward an AI roadblock,” is difficult for conventional deep learning systems. Deep learning requires massive amounts of training data to work properly – data that must cover nearly every scenario the algorithm will encounter.

That challenge has implications for self-driving vehicles, such as in the recent case in which Uber’s software misidentified a pedestrian.

Brandom said that an algorithm can’t recognize an ocelot unless it has seen thousands of pictures of the wild cat – even if it has seen pictures of house cats and jaguars, and knows ocelots are somewhere in between.

“For a long time, researchers thought they could improve generalization skills with the right algorithms, but recent research has shown that conventional deep learning is even worse at generalizing than we thought,” Brandom wrote. “One study found that conventional deep learning systems have a hard time even generalizing across different frames of a video, labelling the same polar bear as a baboon, mongoose or weasel depending on minor shifts in the background,” meaning that even small changes to pictures can completely change the system’s judgement.
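To see how a classifier’s judgement can hinge on the background rather than the subject, consider a toy sketch (this is an illustration of the general failure mode, not the study Brandom cites). Here “images” are flat pixel vectors, and the training data contains a spurious correlation: every polar bear appears on bright snow, every baboon on dark ground. A simple nearest-centroid classifier then flips its label when only the background changes:

```python
import numpy as np

# Each toy "image" is a flat vector: 4 "animal" pixels followed by
# 12 "background" pixels. All names here are illustrative.
def make_image(animal_value, background_value):
    return np.concatenate([np.full(4, animal_value),
                           np.full(12, background_value)])

# Training centroids with a spurious correlation: polar bears are always
# photographed on bright snow (0.9), baboons on dark ground (0.1).
centroids = {
    "polar_bear": make_image(1.0, background_value=0.9),
    "baboon": make_image(0.0, background_value=0.1),
}

def classify(image):
    # Nearest centroid over raw pixels -- background pixels count just as
    # much as the animal pixels, so the model can latch onto the backdrop.
    return min(centroids,
               key=lambda label: np.linalg.norm(image - centroids[label]))

print(classify(make_image(1.0, 0.9)))  # polar_bear, as in training
print(classify(make_image(1.0, 0.1)))  # same animal, darker frame: baboon
```

The animal pixels are identical in both test images; only the background shifts, yet the label flips – the same brittleness the polar-bear study describes, in miniature.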

In March, a self-driving Uber struck and killed a woman pushing a bicycle as she crossed outside a crosswalk in Tempe, Arizona. A preliminary U.S. National Transportation Safety Board report found that Uber’s software first classified the woman as an unknown object, then as a vehicle, then as a bicycle, updating its projections each time.

“Nearly every car accident involves some sort of unforeseen circumstance, and without the power to generalize, self-driving cars will have to confront each of these scenarios as if for the first time,” Brandom wrote.

One study by the RAND Corporation estimated that self-driving cars would have to drive 275 million miles without a fatality to prove they were as safe as human drivers. The first death linked to Tesla’s Autopilot system came roughly 130 million miles into the project, less than halfway to that mark.


4 Comments for “Why fully automated cars are a lot further away than you think”
  1. Martin Winlow says:

    What a pile of twaddle. An autonomous driving system doesn’t need to know what an object actually is in front of it any more than a human driver does – just that it needs to avoid hitting it as best it can. The thing people just don’t get about autonomous driving systems is that they will never be 100% perfect but they only have to be a bit better than humans (who have a *host* of issues when it comes to avoiding crashing – egos and distraction being but 2 of the main problem-areas – ones that computers simply don’t suffer from) for the whole idea to save lives and vast amounts of money. You will never be able to stop animals or humans from jumping out in front of moving traffic, deliberately or accidentally, so get over it!

  2. Jean-Marc Blanchette says:

    You are forgetting the fact that UBER had a chunk of the whole system disabled before that collision. The media seems to ignore this so conveniently.

  3. mark selvidge says:

    I have a medical condition preventing me from getting my driver’s license, and I have been anxiously awaiting fully autonomous cars. I find it disheartening to hear that fully autonomous cars are years in the future.

  4. Thomas says:

    Actually, the car in Uber’s case didn’t identify the situation incorrectly. If you have been following the case closely, Uber actually disabled the feature that would cause the car to brake in an emergency. They have stated that if the feature had not been disabled by Uber, the car would have responded and stopped (it had plenty of time to do so).
