Future Tense

Congress Wants to Solve Deepfakes by 2020

That should worry us.


Deepfakes are the latest weapon in the war against truth, and Congress is paying attention. The technology allows anyone to create convincing videos of events that never happened, stoking fears that a deepfake could be used to inflame political divisions, provoke violence, or target individuals. Indeed, the technology has already been used to create nonconsensual pornography. But it is the fear of a deepfake disrupting the 2020 presidential election that is propelling Congress into action.

The first federal bill targeted at deepfakes, the Malicious Deep Fake Prohibition Act, was introduced in December 2018, and the DEEPFAKES Accountability Act followed this June. Legislation targeting deepfakes has also been introduced in several states, including California, New York, and Texas. During a House Intelligence Committee hearing on the subject in June, legislators signaled that more governance is coming, likely in the form of social media regulation.

Deepfakes are frightening, but so is Congress’ rush to regulate them. Legislation requires careful deliberation, particularly when it is targeted at an emerging technology. That is especially true where, as here, the technology has positive uses, such as entertainment and satire, that come with strong First Amendment protections. For a legislative solution to work, it would need to balance these factors and account for the fact that the technology—and likely the way it is used—will continue to evolve. Shortcutting this process risks enacting laws that not only fail their policy goals but threaten First Amendment interests.

Congress’ haste is written all over the two bills already introduced. The Malicious Deep Fake Prohibition Act, for example, would make it a federal crime to create or distribute a deepfake when doing so would facilitate illegal conduct. In other words, the conduct prohibited under this proposed law is already prohibited under current laws. The bill does nothing to reduce the risk of deepfakes; it just toughens the punishment. (Although this bill expired at the end of 2018, Sen. Ben Sasse’s office reports that he intends to reintroduce it.)

The DEEPFAKES Accountability Act does not fare much better. It would require mandatory watermarks and clear labeling on all deepfakes—a step that is likely to be ignored by those whose entire purpose is to weaponize a deepfake. The bill broadly defines deepfakes as any media that falsely “appears to authentically depict any speech or conduct of a person” and is produced substantially by “technical means.” This expansive definition could sweep up certain protected speech, particularly because the bill stumbles through its exceptions (such as entertainment and parody), exposing the law to First Amendment challenges. Oh, and it exempts officers and employees of the United States who create deepfakes in furtherance of public safety or national security.

Even if Congress crafted a perfectly tailored bill balancing the threat of deepfakes against First Amendment interests, it might not make a difference. Laws are unlikely to deter those outside the jurisdiction of U.S. courts (say, a foreign power intent on election interference), and those sophisticated enough to engage in online criminal activity often have the technical ability to remain anonymous. If perpetrators can avoid detection, and thus sanctions, even the most narrowly tailored law would be of minimal consequence. To really have an impact, the law would need an enforcement mechanism—some way to identify and target those responsible for creating or distributing a deepfake.

This explains the rationale behind Congress’ latest and most dangerous idea: combating deepfakes through social media regulation. It’s not hard to see why lawmakers view this as the answer—social platforms are the most likely distribution channels for deepfakes. The power of deepfakes to cause harm on a broad scale exists only because social platforms provide a means for instant and widespread dissemination. This, after all, is what enabled them to serve as incubators for disinformation campaigns in the 2016 presidential election and beyond, and they continue to face public and political backlash for mishandling false information.

Consider Facebook’s recent refusal to remove an altered video of House Speaker Nancy Pelosi, manipulated to make it appear that she was slurring her speech, seemingly intoxicated. Even after the video was identified as false, Facebook opted to keep it online, explaining that its rules do not prohibit posting false information. Instead, Facebook said it would reduce how often the video appeared in news feeds and alert users who shared it that it had been identified as false. Of course, by the time Facebook took any action, the video had been viewed millions of times. The damage was done.

The Pelosi video was not a deepfake (it was dubbed a “cheapfake” because it was distorted with a simple editing technique), but it highlighted just how fast a deepfake could spread, and how social platforms exercise total discretion in deciding whether and how to respond. That discretion is thanks to a powerful federal law, Section 230 of the Communications Decency Act, which shields social platforms from civil liability for their users’ posts. This law is the very reason social platforms (and any site that relies on content provided by users) can exist. The Electronic Frontier Foundation calls Section 230 “one of the most valuable tools for protecting freedom of expression and innovation on the Internet,” in part because it does not protect only large companies—smaller and newer companies that lack the resources to fight lawsuits over their users’ posts rely on the same protection, and many could not exist without it.

Naturally, this is where Congress wants to apply pressure. During the June hearing on deepfakes, House Intelligence Committee Chairman Adam Schiff, a Democrat from California, said, “If the social media companies can’t exercise the proper standard of care when it comes to a variety of fraudulent or illicit content, then we have to think about whether that immunity still makes sense.” As with the first two bills introduced to combat deepfakes, abridging the protections of Section 230 would be shortsighted. The most obvious—and immediate—negative outcome would be massive automated content filtering and blocking by social platforms seeking to limit their liability exposure. That might reduce the spread of deepfakes on social platforms, but it would also suppress a great deal of other speech, diminishing the vitality of public and political discourse.

And the effort could prove counterproductive. The great irony that legislators miss when threatening to amend or remove Section 230 is that it is in place precisely to encourage websites to reduce harmful content. The protections of Section 230 give platforms the space to experiment with different ways of moderating content without fear of liability if they do not get it exactly right. This is why Facebook can develop an algorithm to reduce the frequency of posts that have been flagged as fake, or why Twitter can experiment with filtering tweets that violate its rules but might be of public interest, “to strike the right balance between enabling free expression, fostering accountability, and reducing the potential harm caused by these Tweets.”

Instead of spinning their legislative wheels to enact something by 2020, lawmakers must confront the fact that deepfakes are a symptom of a much larger problem: the weaponization of disinformation. Legislation truncating Section 230 or requiring watermarks does not address that problem—it just creates new barriers to free expression. Furthermore, existing laws already reach many of the harms posed by deepfakes. (Photo and video editing techniques have existed for decades without targeted regulation.) Lawmakers should fill in only the necessary gaps, such as ensuring that revenge pornography laws apply to deepfake content. In the meantime, arming ourselves against disinformation will require a combination of media literacy and corporate social responsibility—something that Congress can’t force by 2020.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.