Fear of the self-driving car: Is it warranted?

A Google self-driving car.
Image Credit: Google


Would you be willing to climb into an autonomous car? You slip into the backseat, but there’s no driver to be found. The car speeds off into traffic, and you go into a panic.

That’s a question experts have started asking, now that so many companies — including Uber, Waymo (the Google sister company), and Cruise Automation (owned by GM) — have moved past the early testing stage and are making promises about when the technology will be ready.

Not much trust yet

In March, AAA conducted a survey and found that only one in four drivers would trust a self-driving car. In a recent webinar, Intel revealed new research that explains some of the factors behind this lack of trust and how those roadblocks should be addressed.

One is the simple fact that machines lack judgment. How do you tell the difference between a shopping cart and a baby carriage when both are exactly the same size and color and are moving at the same speed? Drivers know by intuition and judgment; machines still have a hard time and could overreact (say, swerving and causing a more serious accident).

There’s also a learning curve. Drivers today know the basics — turn signals, brakes, accelerator. They have gotten used to adaptive cruise control, the technology that adjusts your speed on the highway based on the speed of traffic. But what about self-driving car tech? It’s still new and untested. Intel’s research found that drivers are unclear about when a machine will talk and when it will listen; which gauges will be used and what they mean; and, perhaps most importantly, how much the driver has to pay attention in self-driving car mode.

Max Versace, CEO of Neurala, a deep learning neural network platform company, says self-driving cars are still several years away. Drivers don't know how to trust them yet because they have not seen the benefits or experienced what the technology has to offer. It's impossible to trust something you've never seen or experienced, he says.

“Cars will need to be introduced and given enough time to provide genuine benefits to the driver,” he says. “Humans have a good track record of accepting practices and habits that have associated risks with benefits; however, before you can accept a risk, you need to experience a benefit, which then becomes the basis for acceptance. The benefits will help create trust.”

One of the major benefits, according to the Intel study, is that a computer making judgments about traffic and road conditions won't second-guess a decision. Human drivers are well known for making choices that go against all logic; computers use only logic.

Helping humans trust autonomous cars

So how will drivers learn to trust autonomous cars? Versace predicts, as do many other experts, that it will be a slow road ahead, but that self-driving cars will finally arrive for everyday passengers by 2025. This will be true self-driving without a human at the wheel, unlike recent prototypes from Uber or the Domino's delivery car developed by Ford.

Companies like Waymo and Cruise will need to develop more standards, says Vitaly Ponomarev, the CEO and founder of WayRay, a company that makes AR systems for cars. For now, he says, the car companies are doing experimental research on their own terms, but it hasn’t started to filter out and influence city laws and statutes. He says there’s a need for regulations that cover all autonomous cars. Ultimately, he says, drivers want to see that the technology is more than just tested; they want to know it is regulated and controlled.

“Trust in autonomous vehicles will reach the first major milestone when the key car manufacturers complete testing on public roads under various environment conditions and will be able to share some positive results in terms of safe driving benchmarks,” Ponomarev says. “They will turn into a routine as the automotive industry shifts towards transportation-as-a-service and technology becomes the key competitive advantage for car manufacturers.”

Igal Raichelgauz, the cofounder and CEO of computer vision company Cortica, says the first milestone will occur when a production car doesn’t have a steering wheel.

This will push the envelope for drivers because trust, at that point, will be assumed. The issue right now is that humans are far more forgiving of mistakes when they see them made by other humans than by a machine. Without a steering wheel, drivers will have to overcome any objections about giving up control of the car. They will have to trust that the car can drive better than any human and is powered by foolproof technology.

Of course, the idea of “foolproof” is open to interpretation. But all of the experts I talked to made a good point about trust. Human drivers need to see a self-driving car, test it, and experience it. Autonomous driving needs to be common enough that the technology is available on a car at your local dealership, not only on the roads of San Francisco or in Silicon Valley. Once we get to that point, when everyday drivers can test it, there will be much more momentum — and much more trust.
