Future Tense

Analog Regulators Can’t Keep Up With the Digital Age

To regulate new technologies like self-driving cars, we need new policymaking tools.

An Uber self-driving car drives down Fifth Street in March 2017 in San Francisco. Justin Sullivan/Getty Images

Last week, a pedestrian was killed by one of Uber’s self-driving cars in Arizona. The Grand Canyon State has incredibly lax regulatory oversight of autonomous vehicles as it works to attract Silicon Valley companies. But almost every state thinking about regulating autonomous vehicles is doing so incorrectly, bringing a 19th-century mindset to a 21st-century problem.

In the Arizona accident, there was still a human safety driver behind the wheel, though he appears not to have had his eyes on the road. A few weeks ago, California changed its rules to allow driverless vehicles on its roads. Almost immediately, Uber launched a test of fully autonomous vehicles for its employees. (Uber has since halted all of its testing.)

These regulatory changes have happened in states—and are underway in the federal government—in the classic way: Regulators consider a rule, discuss that rule within government, look at data and information from experts and companies, open the rule for public comment, and so on. Although every state’s rule-making and administrative law process is different, they all share one thing in common: The result is a set of legalese decided on and governed through a human process.

The human driving test is a result of this exact type of rule-making. Ultimately, though, it is a proxy for road skill, not an exhaustive test of it. We’ve all agreed that being able to navigate cones is a strong indicator that a particular human will be up to the task of driving; it’s not a literal verification of everything that human will encounter. The nature of the test is also intertwined with the human-ness of the test taker: You can’t copy-paste the proxy to another type of intelligence. Yet that is exactly what we’ve been doing until now: allowing humans to make a subjective determination about artificial intelligence after extrapolating from a brief period of observation.

But it doesn’t have to be that way. With robot drivers, we can make the test much closer to a total verification, and there isn’t a reason not to. Robot-makers talk in terms of how many miles or hours have been driven by their computers. But ensuring safety is not merely about the number of miles driven. It’s about the number of unusual situations the vehicles can handle: a pedestrian with a bike jumping out in front of the car in the middle of the street (like in Arizona), a group of drunk college students stopping in the middle of the crosswalk, merging into traffic during rush hour where three lanes come together, passing through construction sites with stop/slow signs held by a person, or negotiating a one-lane bridge with an oncoming car.

To deal with the amount of testing that needs to be done, we need robot testers and robot regulators to serve as a check on robot drivers. That means building and deploying artificial intelligence that is as sophisticated, dynamic, and responsive as the systems it is testing, in order to protect the public good. You can’t regulate a hugely complex computer system with a clipboard and a pen. It will take intelligent technology to regulate intelligent technology.

That means policymakers need their own software—their own robots—to generate scenarios that test artificial drivers and to define the acceptable range of responses. Such tools would let us simulate complex driving scenarios, test specific functions, and build a statistical definition of safety across a huge range of contexts.
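To make the idea concrete, here is a minimal sketch of what a regulator-side tool could look like: it randomly generates edge-case scenarios, asks a stand-in driving policy how it would respond, and tallies a statistical pass rate against an acceptable range of responses. Every name here (Scenario, drive_policy, is_safe_response) is hypothetical; a real system would plug in a manufacturer’s actual driving software and far richer scenarios.

```python
# Sketch of a "robot regulator": generate randomized scenarios, query a
# driving policy, and estimate a statistical pass rate. All names are
# illustrative, not any vendor's or agency's real API.
import random
from dataclasses import dataclass


@dataclass
class Scenario:
    description: str
    pedestrian_in_path: bool
    visibility_m: float      # effective sensor visibility, in meters
    speed_limit_kph: float


def generate_scenario(rng: random.Random) -> Scenario:
    """Sample one randomized edge-case scenario."""
    return Scenario(
        description="mid-block pedestrian crossing",
        pedestrian_in_path=rng.random() < 0.5,
        visibility_m=rng.uniform(10, 120),
        speed_limit_kph=rng.choice([40, 50, 60]),
    )


def drive_policy(s: Scenario) -> str:
    """Stand-in for the vehicle software under test.
    A real regulator would call the manufacturer's system here."""
    if s.pedestrian_in_path and s.visibility_m > 30:
        return "brake"
    return "proceed"


def is_safe_response(s: Scenario, action: str) -> bool:
    """The regulator's acceptable range of responses for this scenario."""
    if s.pedestrian_in_path:
        return action in ("brake", "stop")
    return True


def estimate_safety(trials: int = 100_000, seed: int = 0) -> float:
    """Run many simulated scenarios and return the fraction handled safely."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(trials):
        s = generate_scenario(rng)
        passed += is_safe_response(s, drive_policy(s))
    return passed / trials


if __name__ == "__main__":
    print(f"Pass rate across simulated scenarios: {estimate_safety():.4f}")
```

The point is not this toy logic but the shape of the process: the regulator, not the manufacturer, owns the scenario generator and the definition of an acceptable response, and the output is a statistical measure of safety rather than a one-time road test.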

We need to create what Amitai and Oren Etzioni have called “oversight [software] programs” to “monitor, audit, and hold operational AI programs accountable.” The Etzionis’ proposed programs are for applying shared ethical frameworks to judge artificial intelligence algorithms, but the idea can be applied to regulation: Governments should create oversight software to judge the ethical and safety implications of robot drivers. To begin, regulators will need to develop industrywide methods to test the billions or trillions of possible situations that could arise for an autonomous vehicle. Although some researchers—most notably Iyad Rahwan at MIT Media Lab—have done research on what types of moral decisions autonomous vehicles should make, no one has yet developed third-party testing of the decisions robot drivers actually make.

Regulators could start by defining a common language for robot regulators and robot drivers to talk to each other. Today there is no agreement on such standards. In fact, in 2017 the Trump administration quietly killed plans to establish so-called vehicle-to-vehicle communication protocols for cars to “talk” with one another to avoid collisions. Vehicle-to-vehicle communication is a necessary subset of the kind of information a robot regulator would need to get from a robot driver. Critically, all of this rule-making and testing must be done in public, in an open system that allows errors to be easily spotted by companies, industry experts, and concerned citizens.
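No such shared standard exists today, so the following is purely illustrative: a sketch of the kind of structured query a robot regulator might send a robot driver, and the response it might require back, expressed as plain JSON so that third parties could audit the exchange. All field names (scenario_id, planned_action, and so on) are invented for the example.

```python
# Illustrative only: a possible shared message format between a robot
# regulator and a robot driver. No such protocol has been standardized;
# every field name below is hypothetical.
import json

regulator_query = {
    "protocol_version": "0.1",
    "scenario_id": "ped-crossing-0042",
    "scenario": {
        "pedestrian_in_path": True,
        "visibility_m": 45.0,
        "speed_limit_kph": 50,
    },
    # What the driver must report back for the test to count.
    "required_fields": ["planned_action", "confidence", "sensor_state"],
}

driver_response = {
    "protocol_version": "0.1",
    "scenario_id": "ped-crossing-0042",
    "planned_action": "brake",
    "confidence": 0.97,
    "sensor_state": {"lidar": "ok", "camera": "ok"},
}

# Exchanging plain, human-readable JSON keeps the audit trail open to
# companies, independent experts, and the public alike.
print(json.dumps({"query": regulator_query, "response": driver_response}, indent=2))
```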

Although many remember healthcare.gov as an example of how government can fail at technology, fewer people know that it was ultimately saved by a group of software engineers who brought their talent to the government from places like Google. Since then, large (and very competent) internal software groups have been built at the United States Digital Service and 18F, so much so that they are frequently attacked by big technology vendors for doing their jobs too well. However, these groups focus on rebuilding the websites and systems that citizens use to get services and find information. This work is critical, but it does little to bridge the gap between software engineering and regulators to protect the public.

I recently served as the inaugural chief data officer of California, where I worked to bring this new type of regulatory thinking into government. But ultimately, I was a technologist supporting regulators in their data and algorithmic needs. What we need is something much different: technologists as regulators, with a new type of rule-making and a new type of work product. We need algorithms that judge safety in real time across countless scenarios, not rules applied by human lawyers during an audit.

To make this happen for autonomous vehicles, state legislatures and Congress need to authorize this new type of regulation and then create and fund new offices that mix regulators and software engineers together inside departments of transportation.

The challenge with such a testing system is not the technology but the mindset of regulators and the public. We live in a digital age but have analog regulators. We use lawyers to write rules, but they usually don’t write code or understand software systems. That has to change. Now is the time to start hiring regulators who can code—and who will create and manage the rules of the road for autonomous vehicles.