Facebook should do more to stop malicious propaganda videos

Why its current approach isn’t working

House Speaker Nancy Pelosi (D-CA) speaks during her weekly news conference on May 23rd in Washington, DC. Photo by Mark Wilson/Getty Images.

I.

On Friday, the Washington Post reported that a video purporting to show House Speaker Nancy Pelosi slurring her words was racking up millions of views and shares on social networks, with Facebook leading the way on engagement. In reality, the (still unknown) creator of the video had slowed footage of Pelosi to 75 percent of its original speed, while adjusting the pitch of her voice to make it sound more natural. The result was catnip for conservative partisans eager to paint the congresswoman as a drunken buffoon.

The video’s rapid spread around the internet sparked new fears that our politics were on the cusp of being radically and irreversibly changed by the introduction of digitally altered propaganda. Over the weekend, the situation generated an extraordinary amount of commentary — on what it suggests about our future, and on what social networks should do about it.

Facebook ran its standard misinformation playbook, labeling the video as false and showing anyone who tried to share it an opaque pseudo-warning noting that there is “additional reporting available.” Monika Bickert, who is in charge of policy at Facebook, went on Anderson Cooper 360 to defend this approach.

Cooper asked Bickert why Facebook kept the video up. As Ian Bogost recounts in The Atlantic:

This line of thinking seemed to perplex Cooper, and rightly so. Why would an immediate impact, such as inciting violence in an acute conflict, be wrong, but a deferred impact, such as harming the reputation of the woman who’s third in line for the presidency, be okay?

Once the content exists, Bickert implied, the company supports it as a tool to engender more content. “The conversation on Facebook, on Twitter, offline as well, is about the video being manipulated,” Bickert responded, “as evidenced by my appearance today. This is the conversation.” The purpose of content is not to be true or false, wrong or right, virtuous or wicked, ugly or beautiful. No, content’s purpose is to exist, and in so doing, to inspire “conversation”—that is, ever more content.

Meanwhile, Axios said the video had ushered in “our sad, new, distorted reality.” Charlie Warzel said Facebook had become a perfect machine for hijacking our attention. Kara Swisher said the incident shows “how expert Facebook has become at blurring the lines between simple mistakes and deliberate deception, thereby abrogating its responsibility as the key distributor of news on the planet.” Joshua Topolsky encouraged people to delete Facebook until it becomes willing to make editorial judgment calls.

And on the opposite side, commentators worried about a world in which platforms make editorial decisions with no recourse available to those whose speech is deemed out of bounds. “A lot of the commentary about the Pelosi video is ‘not even wrong’, as it does not put forward any consistent or realistic enforcement standard other than ‘take down stuff I don’t like,’” said Alex Stamos.

II.

While all of this was playing out, Bay Area TV station KTVU reported on the story of Kate Kretz, an artist who sews Make America Great Again hats into symbols of hate speech, such as a Ku Klux Klan hood or a Nazi armband. Kretz’s work is intended as a protest of the Trump administration’s racist policies, but earlier this month, Facebook removed it for violating the company’s community guidelines against hate speech:

In early May, Facebook removed Kretz’s images of her latest work for violating community standards. The artist protested and re-uploaded her images, this time with a disclaimer stating that her art was not hate speech, and in fact was commentary on hate speech, much like a political cartoon.

Then Facebook disabled her account. 

In both the Pelosi and the Kretz cases, we find people altering artifacts of political speech in an effort to influence our politics. Both are protected under the First Amendment. Whether they are protected under Facebook’s community guidelines is more debatable. The spirit of Facebook’s rules would seem to bar a distorted propaganda video and to permit photos of some fairly literal political art. But in practice, Facebook made the opposite judgment.

The reason is that, for all the consequences it has for politics, Facebook is determined to stay above the fray. (Or maybe right next to the fray, where people might more easily post about it on Facebook.) The company doesn’t understand the difference between a propaganda video and a piece of art because, in a very serious way, it does not want to. To understand would be to take on expensive new responsibilities, and to open itself up to new lines of political attack, at a time when it faces significant new regulatory threats around the world.

Among top Facebook executives, this posture of strained neutrality is the only one that feels possible, whatever brickbats it may face in the press as a result. A policy that enables the maximum amount of political speech, save for a small number of exemptions outlined in a publicly posted document, has a logical coherence that “take down stuff I don’t like” does not.

That’s one reason why the take-it-down brigade might consider drafting an alternate set of Facebook community standards for public review. I have no doubt that there are better ways to draw the boundaries here — to swiftly purge malicious propaganda while promoting what is plainly art. But someone has to draw those boundaries, and defend them.

Alternatively, you could break Facebook up into its constituent parts, and let the resulting Baby Books experiment with standards of their own. Perhaps WhatsApp, stripped of all viral forwarding mechanics, would find a slowed-down Pelosi video acceptable when shared from one friend to another. Meanwhile Instagram would rapidly detect the video’s surging popularity and ensure that nothing like it appeared on the app’s Explore page, where the company could unwittingly aid in its distribution the way Facebook’s News Feed algorithm did this time around. Making communities smaller can make it easier to craft rules that fit them.

In the meantime, TED’s Alexios Mantzarlis offers four good suggestions for Facebook to implement, which I’d like to echo here in my own words. One, it should act faster: if centralization is the company’s big virtue, it should use that power to detect videos like these and apply fact-checking resources before they rack up millions of views. Two, it should write its warning pop-ups in plain English. Say goodbye to “additional reporting is available,” and hello to “this video has been distorted to change its meaning.” Three, it should follow up with users who shared the video before it was identified as fake, and offer them the chance to un-share it. And finally, it should share more data with the public and with researchers on the effectiveness of fact-checking.

I don’t think the Pelosi video heralds the end times for our information sphere. But I do think that debates like this, over what Facebook leaves up and what it takes down, are only going to grow more fractious as bad actors find new ways to hijack our attention. I understand why Facebook wants to avoid making editorial judgments on political videos. But doing nothing is an editorial judgment, too — and one that social platforms are increasingly going to be held to account for.

Democracy

Facebook and Twitter disable new disinformation campaign with ties to Iran

“The disabled accounts include two on Twitter [that] mimicked Republican congressional candidates in order to push pro-Iranian political messages,” Tony Romm reports:

Facebook and Twitter each said on Tuesday they had disabled a sprawling disinformation campaign that appeared to originate in Iran, including two accounts on Twitter that mimicked Republican congressional candidates and may have sought to push pro-Iranian political messages.

Some of the disabled accounts appeared to target their propaganda at specific journalists, policymakers, dissidents and other influential U.S. figures online. Those tactics left experts fearful that it could mark a new escalation in social-media warfare, with malicious actors stealing real-world identities to spread disinformation beyond the web.

Facebook facing most probes by Irish data regulator

A year after the General Data Protection Regulation went into effect, Facebook has faced the bulk of the Irish regulator’s investigations, Matthew Wall reports:

Social media giant Facebook and its subsidiaries Instagram and WhatsApp have been the subject of most data investigations in the Republic of Ireland since the European Union’s new data protection regulation came into force a year ago. […]

Ireland’s Data Protection Commission says it has launched 19 statutory investigations, 11 of which focus on Facebook, WhatsApp and Instagram.

GDPR After One Year: Costs and Unintended Consequences

Alec Stapp has more data from the first year of GDPR:

€55,955,871 in fines, €50 million of which was a single fine on Google

281,088 total cases: 144,376 complaints, 89,271 data breach notifications, 47,441 other

Of those cases: 37.0% ongoing, 62.9% closed, 0.1% appealed

Facebook’s Zuckerberg ignores subpoena from Canadian parliament, risks being held in contempt

I find myself fairly sympathetic to tech executives avoiding these public shaming situations, since for the past two years they have resulted in nothing but tedious grandstanding. Still, each refusal generates a fresh round of negative headlines. Donie O’Sullivan and Paula “No Relation” Newton:

Facebook’s Mark Zuckerberg and Sheryl Sandberg did not attend a hearing in Ottawa on Tuesday, despite receiving summonses from the Canadian parliament.

The decision could result in the executives being held in contempt of parliament, the senior Canadian politician who sent the summons told CNN. The last time a member of the public was held in contempt by the parliament was 1913, according to the legislature’s records.

What I Learned Trying To Secure Congressional Campaigns (Idle Words)

Maciej Ceglowski has a very funny essay about working on political campaign security:

Trying to secure a modern campaign is like doing surgery with a scalpel made out of anthrax spores. At some point you will throw down the anthrax scalpel and say “this is impossible!”, as it disappears in a puff of lethal dust. But the patient still needs you!

The Video Game PUBG Went Viral Across India. Then Police Started Arresting Its Young Players.

Fascinating Pranav Dixit piece about how the multiplayer shooter game PlayerUnknown’s Battlegrounds came to be outlawed in parts of India. It could be spillover from Indians’ concerns about WhatsApp and other social technologies, he reports:

Playing a video game seemed like dubious grounds for arrest to PUBG fans and free internet advocates, but less than a week later, other parts of Gujarat, including Ahmedabad, the state’s largest city, and Vadodara, the third-largest city, had banned the game, citing similar reasons.

The national hysteria around PUBG is unfurling at a moment when Indians are struggling with fallout from rapid technological progress: the deadly spread of rumors on WhatsApp, rampant harassment on social media, and dangerous misinformation campaigns. People now demand that tech companies grapple with their effects on users — and yet the particular panic around PUBG and the resulting arrests in Gujarat reflect lawmakers’ blunt response when forces they see as destabilizing sweep in. Video game bans are not unfamiliar; but arresting young men for playing them, to safeguard “the education of children and youth,” is a severe and questionable method of protecting the interests of young adults.

How China Uses High-Tech Surveillance to Subdue Minorities

I put stories about China and facial recognition in here in part because this is the world all of us are going to live in unless other countries start regulating this sort of thing. Unsettling piece from Chris Buckley and Paul Mozur. (And while we’re at it, here’s another chilling facial recognition project from a Chinese programmer.)

A God’s-eye view of Kashgar, an ancient city in western China, flashed onto a wall-size screen, with colorful icons marking police stations, checkpoints and the locations of recent security incidents. At the click of a mouse, a technician explained, the police can pull up live video from any surveillance camera or take a closer look at anyone passing through one of the thousands of checkpoints in the city.

To demonstrate, she showed how the system could retrieve the photo, home address and official identification number of a woman who had been stopped at a checkpoint on a major highway. The system sifted through billions of records, then displayed details of her education, family ties, links to an earlier case and recent visits to a hotel and an internet cafe.

China’s robot censors crank up as Tiananmen anniversary nears

Speaking of China, TikTok’s parent company is helping the government out with a censorship campaign. Cate Cadell reports:

“We sometimes say that the artificial intelligence is a scalpel, and a human is a machete,” said one content screening employee at Beijing Bytedance Co Ltd, who asked not to be identified because they are not authorized to speak to media.

Two employees at the firm said censorship of the Tiananmen crackdown, along with other highly sensitive issues including Taiwan and Tibet, is now largely automated.

Behind Grindr’s doomed hookup in China, a data misstep and scramble to make up

Echo Wang and Carl O’Donnell have more details on why the US government is trying to undo the acquisition of a gay hookup app by a Chinese company:

Two former national security officials said the acquisition heightened U.S. fears about the potential of data misuse at a time of tense China-U.S. relations. CFIUS has increased its focus on safety of personal data. In the last two years, it blocked Chinese companies from buying money transfer company MoneyGram International Inc and mobile marketing firm AppLovin.

Based in West Hollywood, California, Grindr is especially popular among gay men and has about 4.5 million daily active users. CFIUS likely worried that Grindr’s database may include compromising information about personnel who work in areas such as military or intelligence and that it could end up in the hands of the Chinese government, the former officials said.

Elsewhere

Google’s Shadow Work Force: Temps Who Outnumber Full-Time Employees

Daisuke Wakabayashi examines some of the consequences that Google’s massive shift to contract labor has had on the company. More contractors now work for Google than full-time employees:

The reliance on temporary help has generated more controversy inside Google than it has at other big tech outfits, but the practice is common in Silicon Valley. Contingent labor accounts for 40 to 50 percent of the workers at most technology firms, according to estimates by OnContracting, a site that helps people find tech contracting positions.

OnContracting estimates that a technology company can save $100,000 a year on average per American job by using a contractor instead of a full-time employee.

“It’s creating a caste system inside companies,” said Pradeep Chauhan, who runs OnContracting.

Germany’s biggest publisher sales houses unite to fight Google, Facebook and Amazon

“Four big German publisher sales houses are collaborating in order to fight the market power of tech platforms,” Jessica Davies reports. The idea is to better compete with Big Tech by aggregating more premium ad inventory:

The two new partners house big news titles including Bild, Welt, Business Insider and magazine portfolios including Die Aktuelle. The additions will increase the online reach of the alliance to a combined 50 million monthly unique users, according to Germany’s industry online measurement body AGOF. Facebook has approximately 40 million monthly unique users in Germany, according to Statista.

China’s ByteDance plans to develop its own smartphone ($)

No one the Financial Times talks to here seems to be very optimistic about the prospects of a TikTok phone. But I wasn’t very optimistic about ByteDance killing off the Musical.ly brand and launching TikTok, either!

Apple promises privacy, but iPhone apps share your data with trackers, ad companies and research firms

Geoffrey Fowler found that Citizen, a social network based on scaring you about sirens going off in your neighborhood, violated its own privacy policy by sending your real-time location to a data-mining company:

Citizen, the app for location-based crime reports, published that it wouldn’t share “your name or other personally identifying information.” Yet when I ran my test, I found it repeatedly sent my phone number, email and exact GPS coordinates to the tracker Amplitude.

After I contacted Citizen, it updated its app and removed the Amplitude tracker. (Amplitude, for its part, says data it collects for clients is kept private and not sold.)

Two days with Curvy Wife Guy, the most controversial man in body positivity

Absolutely delightful profile of Robbie Tripp, aka the Curvy Wife Guy, from Rebecca Jennings. In part, it’s about how everything about being an influencer is real and a put-on at the same time:

All of which is to say that Robbie Tripp — who in the nearly two years since the viral post has courted the spotlight in various ways, including most recently releasing a “curvy girl hip-hop anthem” and accompanying music video — has become a sort of avatar for multiple internet phenomena wrapped in one: the debatably “woke” male feminist, the Instagram hustler, the TED talker, the online wife-haver, the milkshake duck. He’s a viral meme who stumbled into a much larger discourse and is still finding his place within it. But he is determined to carve out space for himself, despite whatever gets written about him. Toward the very end of the two days I spent with him, Robbie told me, “I have a motto: that whatever people hate you for, do more of that.”

‘I replied to a genuine bank tweet and lost £9,200 to a fraudster’

Here’s a good one for the Never Tweet files, from Anna Timms:

The scam began with a genuine tweet from the bank asking customers to share their experience of its customer service in an online survey.

Johnson’s business partner tweeted back to report the difficulties setting up the new account. The fraudster saw her tweet, Googled her details and called her via her company contact number posing as a Metro Bank customer service operative called “Neil”.

She was told that the call was in response to her tweet, and that the bank wanted to rectify the poor service and get the new business account set up immediately. She was asked for details of the business as part of due-diligence checks required by the banking regulator and she named Johnson as a co-director.

Launches

To Fight Deepfakes, Researchers Built a Smarter Camera

Lily Hay Newman writes about an effort to make cameras tamper-proof:

The NYU team demonstrates that you could adapt the signal processors inside—whether it’s a fancy DSLR or a regular smartphone camera—so they essentially place watermarks in each photo’s code. The researchers propose training a neural network to power the photo development process that happens inside cameras, so as the sensors are interpreting the light hitting the lens and turning it into a high quality image, the neural network is also trained to mark the file with indelible indicators that can be checked later, if needed, by forensic analysts.

“People are still not thinking about security—you have to go close to the source where the image is captured,” says Nasir Memon, one of the project researchers from NYU Tandon who specializes in multimedia security and forensics. “So what we’re doing in this work is we are creating an image which is forensics-friendly, which will allow better forensic analysis than a typical image. It’s a proactive approach rather than just creating images for their visual quality and then hoping that forensics techniques work after the fact.”
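The core idea in that quote is to mark an image at the moment of capture so that a forensic analyst can later check whether the mark is still intact. As a toy sketch only: the Python below uses a simple least-significant-bit pattern rather than the learned, tamper-resistant watermark the NYU researchers describe, and the seed, function names, and threshold are all my own illustrative choices.

```python
# Toy illustration of capture-time watermarking (NOT the NYU method):
# embed a known bit pattern in the least significant bits of an image,
# then check later whether the pattern is still present. A learned,
# robust watermark would survive recompression and edits; this one does not.
import numpy as np

SEED = 42  # hypothetical shared secret between "camera" and forensic analyst

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Overwrite each pixel's least significant bit with a pseudorandom pattern."""
    rng = np.random.default_rng(SEED)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return (image & 0xFE) | pattern  # clear the LSB, then set it to the pattern bit

def verify_watermark(image: np.ndarray, threshold: float = 0.99) -> bool:
    """Report whether the expected pattern still matches the image's LSBs."""
    rng = np.random.default_rng(SEED)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    match_rate = np.mean((image & 1) == pattern)
    return match_rate >= threshold

if __name__ == "__main__":
    # Stand-in for raw sensor output
    photo = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
    marked = embed_watermark(photo)
    print(verify_watermark(marked))    # True: untouched file still carries the mark

    tampered = marked.copy()
    tampered[100:200, 100:200] = 0     # "edit" a region of the image
    print(verify_watermark(tampered))  # False: the mark no longer checks out
```

A mark like this is wiped out by routine recompression, which is why the researchers propose building the marking into the camera’s own development pipeline instead, so the indicators are effectively indelible and forensic analysis becomes easier after the fact.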

Takes

Why Fiction Trumps Truth

Sapiens author Yuval Noah Harari investigates why humans are so smart and so stupid at the same time:

The dual nature of power and truth results in the curious fact that we humans know many more truths than any other animal, but we also believe in much more nonsense. We are both the smartest and the most gullible inhabitants of planet Earth. Rabbits don’t know that E=mc², that the universe is about 13.8 billion years old and that DNA is made of cytosine, guanine, adenine and thymine. On the other hand, rabbits don’t believe in the mythological fantasies and ideological absurdities that have mesmerized countless humans for thousands of years. No rabbit would have been willing to crash an airplane into the World Trade Center in the hope of being rewarded with 72 virgin rabbits in the afterlife.

Harvard Professor Falls Victim to Group Outrage

Cass Sunstein suggests that we use the word “lapidation” to describe online hate mobs:

The English language needs a word for what happens when a group of people, outraged by some real or imagined transgression, responds in a way that is disproportionate to the occasion, thus ruining the transgressor’s day, month, year or life.

We might repurpose an old word: lapidation.

Technically, the word is a synonym for stoning, but it sounds much less violent. It is also obscure, which makes it easier to enlist for contemporary purposes.

Mark Zuckerberg should hire Microsoft’s Brad Smith as CEO, says former Facebook security chief

Alex Stamos says Facebook needs someone else to serve as CEO while Zuckerberg puts his focus elsewhere:

“There’s a legit argument that he has too much power,” said Stamos, who left the company in 2018, at the Collision Conference in Toronto. “He needs to give up some of that power. If I was him, I would go hire a new CEO for the company.”

Pl@ntNet is the world’s best social network

Michael J. Coren is obsessed with a social network for plants. You upload a photo of a plant around you, and people around the world will help you identify it. It’s part of a research project that’s creating public-domain machine vision models and identifying new species:

I can now separate the wild radish from the more roguish sea radish. Delineate between the thimbleberry and the European dewberry. Identify a specimen based on the hue of a petal or the serration of a leaf. At a glance, I can tell between three kinds of forget-me-nots (field, broadleaf, and woodland), or distinguish between a common yarrow and a high mallow. This winter, after weeks of rain watered a carpet of leeks and miner’s lettuce, I collected the ingredients for a wild pesto and salad in the local parks. Pl@ntNet is a tireless tutor, constantly adjusting and correcting my observations; it’s as exciting for me as learning a new language.

And finally ...

Talk to me

Send me tips, comments, questions, and videos in which my speech has been slowed down to make me seem drunk: casey@theverge.com.