
Why platforms aren’t taking down deceptive political videos

Not every meme is an epistemological crisis

U.S. Speaker of the House Rep. Nancy Pelosi (D-CA) speaks during her weekly news conference last week
Photo by Alex Wong/Getty Images

Say you run a large social network in which your most zealous users frequently discuss their politics. In 2020, one way they are going to do this is through the sharing of memes — pithy, punchy photos and videos designed for maximum partisan impact. Some of these memes will draw on actual facts; others will simply be insults. The most troublesome memes to deal with will be the ones that draw on real life but manipulate it in some way. These manipulations can be an essential part of satire, parody, and criticism. They can also trick people into believing a hoax. It’s up to you to draw a bright line. Where do you draw it?

The question of manipulated media has come up in a big way twice in the past week. The first came when Twitter said it would label some manipulated and synthetic images starting next month. Here’s Adi Robertson in The Verge:

Twitter will ban faked pictures, video, and other media that are “deceptively shared” and pose a serious safety risk. The company just announced a new policy on synthetic and manipulated media — a category that encompasses sophisticated deepfake videos, but also low-tech deceptively edited content. In addition to banning egregious offenders, Twitter will label some tweets as “manipulated media” and link to a Twitter Moment that provides more context. 

The second appearance of manipulated media in the headlines came after the State of the Union address, when President Trump shared on his Twitter account a video that purported to show House Speaker Nancy Pelosi tearing up his speech during a series of feel-good moments. Here are Drew Harwell and Tony Romm in The Washington Post:

The viral video shows President Trump delivering his State of the Union address, with a very notable alteration. As he commemorates “Young Women Receiving Scholarships” and “Child Healthcare Successes,” the video repeatedly cuts away to House Speaker Nancy Pelosi ripping up her copy of the speech.

It didn’t actually happen that way: Pelosi (D-Calif.) tore the pages only after Trump finished what she later called his “manifesto of mistruths.” But Trump on Thursday shared it anyway, sending it to millions of users on Facebook and Twitter — and sparking sharp criticism from Pelosi and her fellow Democrats, who labeled the video “doctored” and “fake,” and demanded that the sites remove it. The companies refused.

This was, of course, the second time that a doctored video of Pelosi made national headlines, following the incident last May in which a video of her appearing drunk went viral. (In reality, the video’s creator had simply slowed her speech to 75 percent of its original speed.)

Lying has a long tradition in American politics. So why have the Pelosi videos created a panic? One, they erode our shared sense of reality by throwing into question the legitimacy of video evidence, a technology that before now we have generally regarded as trustworthy. And two, they suggest that in the future we will be unable to reliably tell fact from fiction, particularly on matters of intense public debate. (I think there’s also probably a third fear here: that huge numbers of people will be misled into voting for the “wrong” candidate because they fell for one or more hoaxes.)

Hence Pelosi’s spokesman calling for the doctored State of the Union video to be removed from Facebook. Here are Jeff Horwitz and Natalie Andrews in The Wall Street Journal:

Disagreements over the video triggered a spat on Twitter on Friday between Drew Hammill, Mrs. Pelosi’s deputy chief of staff, and Andy Stone, a longtime Facebook spokesman. Mr. Hammill urged Facebook and Twitter to take down the video because it was “deliberately designed to mislead and lie to the American people.”

To that, Mr. Stone responded, “Sorry, are you suggesting the President didn’t make those remarks and the Speaker didn’t rip the speech?”

Eight minutes later, Mr. Hammill shot back: “what planet are you living on? this is deceptively altered. take it down.”

This is a perfect American debate over platforms in 2020, because it involves two people talking past one another without acknowledging any of the relevant tradeoffs, on a platform that rewards them for it with digital hearts.

Still, in this case, I’m with Facebook and Twitter — this video should not be removed from the internet. As Stone notes, Pelosi did rip up Trump’s speech on camera — and she did not appear to avoid tearing up the nice bits where Trump praised a soldier or handed out a scholarship. In fact, the whole point of tearing up the speech on camera was for the act to be widely viewed and discussed. It’s odd to engineer a moment like this one, purpose built for social media, and then try to get a meme of it taken down.

Pelosi’s people argue that showing the clips out of order represents an unacceptable distortion. But the video clearly re-uses the clip of Pelosi tearing the speech multiple times, making the fact that it’s a chop job self-evident. Viewed in that light, Hammill’s complaint reads more like film criticism than a call for platform policy reform.

The truth is that there’s likely no way to draw a line requiring the Pelosi video to be taken down that would also permit the kind of political speech we see every day on television. Any criticism that doesn’t reckon with that fact strikes me as fundamentally glib.

Of course, it’s also the case that political discourse on television — particularly cable television — is often terrible. A platform can embody high ideals of free speech and still be a pretty terrible place to become informed. It would be good for the country if, on that metric at least, Facebook and Twitter aimed much higher.

The Ratio

Today in news that could affect public perception of the big tech platforms.

🔼 Trending up: Amazon is banning books by white supremacists and Nazis. The move has prompted some booksellers to complain about the company’s vague or nonexistent rules regarding what they can sell, though few people are mourning these titles.

🔽 Trending down: Legal documents show Facebook knew about a huge security flaw that let hackers steal personal data from millions of its users almost a year before the crime. The company failed to fix it in time.

Governing

Senator Josh Hawley (R-MO) proposed a new plan to overhaul the Federal Trade Commission in order to rein in big tech companies. “The FTC has stood by as major corporations have consolidated their power and stifled competition,” he wrote. Russell Brandom at The Verge reports:

Tasked with protecting consumers, the FTC has been the source of significant frustration for antitrust advocates in recent years. Existing law prevents the commission from levying fines for initial violations, as in the Cambridge Analytica case. When fines are enacted, as in Facebook’s recent $5 billion fine, they’re often seen as insufficient. As a result, a number of recent privacy bills have included measures to strengthen the FTC’s powers.

Hawley’s proposal goes beyond previous efforts, essentially remaking the agency from the ground up. The proposal calls for the FTC to operate within the Department of Justice, run by a single Senate-confirmed director, rather than its current panel of five commissioners, as a way to render it more immediately responsive to congressional oversight. Hawley would also establish a “digital market research section” specifically to scrutinize tech platforms.

Amazon is trying to depose President Trump and Secretary of Defense Mark T. Esper in a high-stakes protest over the Pentagon’s handling of a $10 billion cloud computing contract. The company wants to question Trump over any communications he’s had with Microsoft, which eventually won the contract. (Aaron Gregg and Jay Greene / The Washington Post)

Law enforcement agencies are using Clearview AI to identify children who are victims of child abuse. The use case raises new questions about the tool’s accuracy and how the company handles data. (Kashmir Hill and Gabriel J.X. Dance / The New York Times)

Elsewhere, Clearview AI founder Hoan Ton-That told CNN that he is “honored” to kick off a broader conversation about facial recognition and privacy. It is not an honor! No one is honoring you here, Hoan. (Donie O’Sullivan / CNN)

Facebook and the Internal Revenue Service are squaring off in a court case that could cost the company more than $9 billion. The IRS has argued that more of Facebook’s profits should have been taxed at higher rates in the United States, rather than in the company’s Irish subsidiary. (Richard Rubin / The Wall Street Journal)

Facebook, in an attempt to root out misinformation on its platform, nearly published incorrect information about a voter registration deadline in Oklahoma this year. State officials said they had to fight with the company to get the language corrected. (Dustin Volz and Alexa Corse / The Wall Street Journal)

QAnon, the lunatic pro-Trump conspiracy theory about “deep state” traitors plotting against the president, has migrated off the internet. It’s showing up in political campaigns, criminal cases and a college classroom. (Mike McIntire and Kevin Roose / The New York Times)

Volunteers for the Nevada State Democratic Party encountered errors while testing their version of the app that ruined the Iowa caucuses. The party has since decided not to use the app. (Joseph Cox / Vice)

Bernie Sanders is raising more money from Big Tech employees than any other 2020 presidential candidate. Employees from Amazon, Apple, Facebook, Google, and Twitter funneled almost $270,000 into the Sanders campaign during the last three months of 2019. (Theodore Schleifer / Recode)

Presidential candidate Mike Bloomberg is paying influencers to make him look cool on social media. His campaign is asking people with 1,000 to 100,000 followers to create original content “that tells us why Mike Bloomberg is the electable candidate who can rise above the fray, work across the aisle so ALL Americans feel heard & respected.” Feels natural! (Scott Bixby / Daily Beast)

A small community of online sleuths is trying to combat misinformation by spotting it before it can spread. Ben Nimmo, who helped create the Atlantic Council’s Digital Forensic Research Lab, is a pioneer of these disinformation investigations. He’s profiled here. (Adam Satariano / The New York Times)

The coronavirus has brought China’s surveillance technology out of the shadows, providing the authorities with a justification for sweeping methods of high tech control. AI companies say their systems can scan the streets for people with even low-grade fevers and recognize their faces even if they are wearing masks. (Yingzhi Yang and Julie Zhu / Reuters)

Industry

Clearview AI, the facial recognition company that claims to have a database of more than 3 billion photos, is trying to expand to 22 countries around the world. Its list of target markets includes multiple authoritarian regimes. BuzzFeed’s Caroline Haskins, Ryan Mac and Logan McDonald have the story:

A document obtained via a public records request reveals that Clearview has been touting a “rapid international expansion” to prospective clients using a map that highlights how it either has expanded, or plans to expand, to at least 22 more countries, some of which have committed human rights abuses.

The document, part of a presentation given to the North Miami Police Department in November 2019, includes the United Arab Emirates, a country historically hostile to political dissidents, and Qatar and Singapore, the penal codes of which criminalize homosexuality.

In December, Facebook quietly acquired the company behind “Papers With Code,” a free resource that helps people track newly published machine learning papers with source code. The deal was estimated to have been around $40 million. (Steve O’Hear / TechCrunch)

Facebook is expanding its bug bounty program in an effort to correct security flaws on the platform. A few months ago, a bug bounty submission alerted the company that apps were siphoning data from up to 9.5 million of its users. (Lily Hay Newman / Wired)

Facebook’s comments plugin, which was built to let users leave comments on websites with their Facebook accounts, promises to help deliver “higher quality conversations” across the internet. Instead, it has prompted a wave of spam across popular websites. (Rob Price / Business Insider)

Instagram added a new feature to let you sort through the accounts that you’re following by “Most shown in feed” and “Least interacted with.” From there, you can manage your follow status and notifications, or mute an account. (Dami Lee / The Verge)

YouTube’s top kids channel, Cocomelon, gets 2.5 billion views a month — and now it’s expanding into merchandise. It’ll soon begin offering albums and toys to its toddler superfans. (Mark Bergen and Lucas Shaw / Bloomberg)

Snapchat’s developer platform is blowing up as a gateway to acquire teenage users for other apps. Hoop, the latest Snap Kit success story, is the second most downloaded app on the App Store, thanks to its Tinder-esque swiping interface for finding new friends. (Josh Constine / TechCrunch)

Amazon has considered selling Twitch’s live-streaming technology as a service through Amazon Web Services. If it moves ahead with the offering, it would be the latest example of the company selling technology it uses internally to customers. (Priya Anand and Jessica Toonkel / The Information)

Jeff Bezos is reportedly scouting for homes in the Los Angeles area that cost as much as $100 million. This would rank among the largest purchases of residential real estate in California history. (Theodore Schleifer / Recode)

And finally...

The German Teens Who Made That Iconic TikTok Video Think It Might Be Nice For You To Learn Their Language

People are losing it over these German teens’ TikTok, in which a flamboyant young man asks his friends how many boyfriends they’ve had, even though most viewers likely don’t understand the German dialogue. Here are Olivia Niland and Lam Thuy Vo in BuzzFeed:

The teens, who live in Cologne, Germany, and wanted to be identified by their TikTok handles, told BuzzFeed News they’ve been making videos for about six months.

They said they believe they’ve amassed millions of followers on the app because they are “different from other people in Germany.”

“In Germany, people are sometimes afraid to fully be themselves,” @hussainchillt said. “I am who I am. We are who we are and maybe that gives people permission to be themselves, too.”

Oh mein Gotttttttt.

Talk to us

Send us tips, comments, questions, and memes of you dramatically tearing up the State of the Union address: casey@theverge.com and zoe@theverge.com.