David Pogue Gets Car Hacking Dangerously Wrong

Pogue's error-filled column on automotive security reporting is not just wrong; it's dangerous.

Writing about security means focusing on insecurity. Repeatedly telling readers all the ways in which they're safe isn't useful journalism. And yes, exposing a true lack of security causes fear. That fear, in some cases, leads to change that makes us all more secure.

So when Scientific American published David Pogue's column earlier this week accusing WIRED of "scare tactic journalism" in our recent reporting on the digital threats to automobile safety, we weren't just offended by the assorted errors that outnumbered any facts in the piece—some of which Scientific American editors have noted in a series of corrections. We also take issue with Pogue's fundamental argument: That reporting on the growing threat to vehicle cybersecurity isn't legitimate because "car hacking is nearly impossible."

"Nearly impossible" means possible. Researchers Charlie Miller and Chris Valasek demonstrated to us in July that they could remotely take control of a 2014 Jeep Cherokee over the Internet and perform tricks like disabling its accelerator on the highway—or at low speeds, even its brakes. Their research, and our story, prompted Fiat-Chrysler to issue an official recall of 1.4 million vehicles, send all affected customers a USB stick containing a software patch, and to work with Sprint to block the attack on the cellular network that connected the vulnerable vehicles to the Internet.

Before considering whether that qualifies as worthwhile journalism, let's first examine the points Pogue makes to attack that story and our other recent pieces on car hacking. Pogue raises and dismisses three examples of reporting on automotive research. In each case, he makes factual errors or fundamentally mischaracterizes the story. (WIRED has sent a list of these mistakes to Scientific American in a request for a retraction of Pogue's column.)

  • "In the case of the WIRED article, the Jeep belonged to the hackers. They had been working on its software for three years to make it hackable. That one Jeep." Yes, the Jeep belonged to the hackers, but Pogue's other two sentences are wrong: The hackers studied their Jeep for just over a year, not three, to develop their hacking technique. Most importantly, they never altered its software to make it more hackable. (The very title of their research paper is "The Remote Exploitation of an Unaltered Vehicle.") And of course, the attack didn't apply to "that one Jeep." It applied to 1.4 million vehicles, as shown in the recall Fiat-Chrysler announced three days after our story. These were the most egregious errors in Pogue's column, and Scientific American corrected them Wednesday.
  • "In February 60 Minutes ran a story about a similar experiment ... But would it have been as frightening if [the 60 Minutes reporter] had mentioned that this kind of hack requires a car with cellular Internet service, that it had taken a team of researchers years to make it work—and that by then the automaker had fixed the software to make such a hack impossible for vehicles on the road?" The demonstration for 60 Minutes, by researchers from Darpa and the University of California, San Diego, showed that the OnStar connection of a 2009 Chevrolet Impala could allow hackers to disable its brakes over the Internet. As WIRED has reported, the UCSD researchers along with others from the University of Washington told GM about this security vulnerability in 2010. It took GM nearly five years to fully fix the problem, which affected millions of vehicles. At the time of the 60 Minutes demonstration, a flaw in GM's attempted fix still allowed the hackers to gain access to the Impala. It's hard to imagine how Pogue believed the "impossible" demonstration had been performed for 60 Minutes' cameras otherwise.
  • In his third example, Pogue points to a WIRED story on a now-patched Tesla Model S hack that required physical access to the vehicle. "But wouldn't you see that if you were in the driver's seat?" he asks. Our story, however, spelled out in its first paragraph that the flaw's primary risk was that it would allow the car to be hot-wired: the threat was theft, not an attack that would occur while "you were in the driver's seat." This is also the only car hacking example mentioned in Pogue's piece that required physical access to the targeted car, despite a now-deleted sentence stating that all known car hacks required such physical access.

Take out those erroneous examples and there's not much left to Pogue's argument. But he still goes on to list other problematic claims. First, he flatly states that "no hacker has ever taken remote control of a stranger's car. Not once." That's impossible for anyone to know, of course. But it's worth mentioning that Miller and Valasek's technique could silently track vehicles as well as hijack them. That shows how car hacking could be used for surveillance as well as sabotage, and as Edward Snowden has taught us, many "theoretical" hacks have already been put to use by intelligence agencies.

Pogue then undercuts his own argument, admitting that "car security is serious." But he goes on to shift the burden of identifying and fixing these issues to the automakers themselves. "In other words, the industry's [Pogue's emphasis] concern isn't misplaced," he writes. This seems to mean that consumers should stop worrying and leave it to corporations to solve these problems. We've already seen how well that works: GM failed to patch a serious hackable vulnerability in millions of vehicles that it knew about for nearly half a decade. Despite that prominent counterexample, Pogue claims that "all three cases described here led to prompt software fixes by the carmakers."

Finally, Pogue assures readers that "none of those researchers would be able to repeat their demonstrations today." Does he really believe that hackers have now found all the hackable bugs? He even seems to miss the painfully obvious point that the specific bugs he mentions have been fixed because researchers and the press brought attention to them.

Pogue is right that readers shouldn't panic about the possibility that their cars could be hacked. They should instead be aware of the facts of existing research and understand that an attack on their car over the Internet remains unlikely today, but it is possible.

But WIRED looks toward the future. Cars are increasingly connected to the Internet and increasingly automated, and their cybersecurity is an issue consumers, automakers, and the government should understand. If that understanding were left to thinking as misinformed and shortsighted as Pogue's, there is little doubt that the digital insecurity of vehicles would soon lead to real harm.

"Let's assess the hackable-car threat with clarity, with nuance—and with all the facts," Pogue writes. On that, at least, we can agree.