Three unanswered questions about threats and hoaxes on Facebook

The debate over Infowars has hit an impasse

Illustration by Alex Castro / The Verge

Another day, another high-profile outrage spreading virally on Facebook. This time around it’s our frequent subject here Alex Jones, of Infowars, who yesterday went on a rant in which he tiptoed very close to the line of calling for violence against special counsel Robert Mueller. Charlie Warzel has the details in BuzzFeed:

On his Monday afternoon show, Jones issued a prolonged rant against special counsel Robert Mueller, accusing him of raping children and overseeing their rape, and then pantomiming shooting the former FBI director. The show was streamed live on Jones’ personal, verified Facebook page, which has nearly 1.7 million likes.

In the clip, Jones baselessly accused Mueller of having sex with children. “They’d let Mueller rape kids in front of people, which he did,” he said on the show.

Facebook told Warzel the rant did not amount to a credible threat of violence, and left the post up. It had about 46,000 views as of this morning.

Later in the day, Facebook held a previously scheduled conference call with reporters to discuss its work on misinformation and elections. Five executives who work on issues including News Feed integrity, security policy, and elections laid out what they’re doing to improve the service. There were no major new announcements, but the question-and-answer period that followed gave reporters a chance to ask about the Infowars issue.

“We know people don’t want to see false information at the top of their News Feed,” said Tessa Lyons, the head of News Feed Integrity. Lyons went on to say that the company believes it has a responsibility to limit the distribution of hoaxes. And, in cases where those hoaxes have created an imminent threat of harm, Facebook — as of last week, in just two countries — will remove it from the platform.

The current debate over Infowars on Facebook, which is now in its third week, has hit a bit of an impasse. Axios tried to move it forward today with two pieces — one, by Ina Fried, surveying media types about what Facebook should do; and another, by Sara Fischer, offering a broader range of solutions for all of Facebook’s news-related problems.

Both pieces are worth reading, even if Fischer’s in particular comes across as rather pessimistic. (“Facebook may not be able to do much more than it has already tried, unless it makes a drastic change that would impact its business and long-term vision.”)

While we wait for a more comprehensive solution, I’d settle for Facebook answering some questions that never quite found answers on today’s call:

  • What data can Facebook share about misinformation seeing reduced distribution after being labeled as false? The company likes to say that posts get 80 percent fewer views on average, but it would be helpful to see numbers for specific pages. Infowars, for example.
  • Fact-checkers say it takes an average of three days before they are able to label a Facebook post as false. Haven’t most posts already gotten the majority of their lifetime views by that point? Doesn’t that make the strategy of “reduced distribution” significantly less effective? (A rough sketch of this timing effect follows below.)
  • Finally, a question from my boss, Nilay Patel. By what standard does Facebook say Jones’ rant against Mueller did not represent a “credible threat of violence”? When courts make such judgments, Nilay notes, they do so by outlining their reasoning and citing the relevant precedents.

“If Facebook wants to run a legal system,” he says, “it should do that too.”
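To make the second question concrete, here is a minimal back-of-the-envelope model. None of the numbers are Facebook data — the peak daily views, the decay rate, and the two-week window are invented for illustration. It simply assumes a post’s daily views decay geometrically and compares an 80 percent demotion applied on day three against one applied immediately.

```python
# Back-of-the-envelope model: how much does a demotion applied on day 3
# actually reduce a post's lifetime views? All numbers are illustrative
# assumptions, not Facebook data.

def total_views(daily_peak=100_000, decay=0.5, demotion=0.8, label_day=None, days=14):
    """Sum daily views over `days`, assuming views decay geometrically
    (each day is `decay` times the previous day). If `label_day` is set,
    every day from then on is reduced by `demotion` (0.8 = 80% fewer views)."""
    views = 0.0
    daily = daily_peak
    for day in range(days):
        factor = (1 - demotion) if (label_day is not None and day >= label_day) else 1.0
        views += daily * factor
        daily *= decay
    return views

baseline = total_views()                  # never labeled
labeled_day3 = total_views(label_day=3)   # 80% demotion starting on day 3
labeled_day0 = total_views(label_day=0)   # demotion applied immediately

print(f"lifetime views lost if labeled on day 3: {1 - labeled_day3 / baseline:.0%}")
print(f"lifetime views lost if labeled on day 0: {1 - labeled_day0 / baseline:.0%}")
```

Under those made-up assumptions, a demotion that arrives on day three trims only about 10 percent of the post’s lifetime views, versus 80 percent if it landed immediately — which is the gap the question is getting at.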

Democracy

Facebook, Trying to Move Forward in China, Registers a Chinese Subsidiary

In another of its periodic efforts to persuade the Chinese government to let it open up shop, Facebook is trying to open a “startup accelerator” in the country and fund it with $30 million. But it’s not exactly clear what’s happening, report Paul Mozur and Sheera Frankel. The corporate registration was removed from a government website, and some references to the accelerator appear to have been censored on social media, they say.

Senator Ron Wyden reckons with the internet he helped shape

My colleague Colin Lecher interviews one of the authors of the Communications Decency Act, and its world-changing Section 230.

WYDEN: We thought it was going to be helpful. We never realized it was going to be the linchpin to generating investment in social media. We envisioned that the law would be both a sword and a shield. A shield so that you could have this opportunity, for particularly small and enterprising operations to secure capital, and then a sword [by allowing them to moderate without facing liability over the practice], which said you’ve got to police your platforms. And what was clear during the 2016 election and the succeeding events surrounding Facebook, is that technology companies used one part of what we envisioned, the shield, but really sat on their hands with respect to the sword, and wouldn’t police their platforms.

WhatsApp races against time to fix fake news mess ahead of 2019 general elections

WhatsApp is doing lots of outreach to public officials in India amid the current crisis of mob violence, reports Venkat Ananth, who says it’s linked to the delayed effort to get payments approved on the app.

On WhatsApp, fake news is fast — and can be fatal

It’s not just India: WhatsApp is causing problems around the world, report Lizza Dwoskin and Annie Gowan. A new report from Oxford University found disinformation campaigns in at least 10 countries this year, including Brazil, India, Pakistan, Zimbabwe and Mexico.

Russian Hackers Reach U.S. Utility Control Rooms, Homeland Security Officials Say

Facebook wouldn’t cop to Russian interference in the current midterm election campaigns Tuesday. But Homeland Security found that Russian hackers have infiltrated the control rooms of US electrical utilities. “They said the campaign likely is continuing,” Rebecca Smith reports.

Elsewhere

Facebook signs agreement saying it won’t let housing advertisers exclude users by race

Facebook has signed a new, legally binding agreement with the state of Washington agreeing to remove advertisers’ ability to exclude races, religions, sexual orientations, and other protected classes in certain ad-targeting sectors, my colleague Nick Statt reports.

Why Do People Share Fake News? A Sociotechnical Model of Media Effects

Here’s a research paper that would seem to support Mark Zuckerberg’s controversial statement last week that people who share fake news typically believe it is true:

People do not share fake news stories solely to spread factual information, nor because they are “duped” by powerful partisan media. Their worldviews are shaped by their social positions and their deep beliefs, which are often both partisan and polarized. Problematic information is often simply one step further on a continuum with mainstream partisan news or even well-known politicians. We must understand “fake news” as part of a larger media ecosystem. That does not mean that we should ignore platforms; we must scrutinize the ways in which algorithms and ad systems promote or incentivize problematic content, and the frequency with which extremist content is surfaced. Finally, while media literacy and fact-checking efforts are very well-intentioned, they may not be the best solutions, given the highly-polarized, mistrustful political climate of the United States. 

Twitter is banning users who created their accounts while underage

Lots of kids signed up for Twitter before they turned 13. Twitter is hunting them down and locking them out of their accounts, even though they are of age now, reports my colleague Shoshana Woodinsky:

“For a couple of years, I couldn’t actually update my birth year on Twitter. If I tried to select my correct year, 1996, it just would be grayed out,” said Maxwell, a 22-year-old Twitter devotee, who found himself suspended last week. “On Wednesday, I checked again and noticed I could select 1996, but as soon as I saved the change, my account locked.” Though Maxwell has appealed repeatedly, he’s still locked out of the platform — at least for now.

Snap Spectacles Chief Leaves Company

The head of Snapchat Spectacles is the latest to leave Snap, Alex Heath reports.

How Snap Made Direct Response Ads a Big Business

Snap might be down a head of Spectacles, but it has found a working ad format, Tom Dotan reports. Direct response ads have grown to account for around 40 percent of overall revenue, he says.

Fake #WalkAway Ads Feature Images Of People From Shutterstock

Russian bots are actively promoting the hashtag #WalkAway, which supposedly is used by Democrats who have left the party to become Republicans. It turns out that many of the images of supposed former Democrats used in the campaign were bought from Shutterstock.

Mountain View’s unusual rule for Facebook: No free food

This is insane and dumb:

When Facebook moves into its new offices in Mountain View this fall, a signature Silicon Valley perk will be missing — there won’t be a corporate cafeteria with free food for about 2,000 employees.

In an unusual move, the city barred companies from fully subsidizing meals inside the offices, which are part of the Village at San Antonio Center project, in an effort to promote nearby retailers. The project-specific requirement passed in 2014, attracting little notice because the offices were years away from opening.

Pinterest’s head of engineering Li Fan leaves for scooter company Lime

Pinterest’s head of engineering, Li Fan, is leaving the company ahead of its expected IPO.

Facebook is succeeding in spite of itself

Facebook earnings are tomorrow and Wall Street is excited, reports Kurt Wagner:

When Facebook reports Q2 earnings on Wednesday, analysts are expecting — you guessed it — yet another great quarter.

“Despite all the negative headlines, we believe ad revenue should continue to drive very healthy growth,” wrote SunTrust’s Youssef Squali. Analysts think Facebook revenue will grow 43 percent over the same quarter one year ago.

Launches

Twitter Tightens Process for App Developers to Clean Up Site

Twitter removed more than 143,000 apps for violating its policies between April and June, said the company, which as of today is placing more limits on new developers.

Takes

Deepfakes, false memories, and the Mandela effect: AI is coming for our past

We’re underestimating the mind-warping potential of fake video, Brian Resnick says:

We don’t have psychological studies directly looking at the ability of AI-faked video to implant false memories. But researchers have been studying the malleability of our memories for decades.

Here’s what they know: The human mind is incredibly susceptible to forming false memories. And that tendency can be kicked into overdrive on the internet, where false ideas spread like viruses among like-minded people. Which means the AI-enhanced forgeries on the horizon will only make planting false memories even easier.

Were We Destined to Live in Facebook’s World?

Alexis Madrigal talks to Siva Vaidhyanathan, author of a new book called Antisocial Media, about whether Facebook is blinded by data. (It is, Vaidhyanathan says.)

Behaviorism is embedded in Facebook. They’ve been clear about this. Facebook is constantly tweaking its algorithms to try to dial up our positive emotional states, otherwise known as happiness. That’s one of the reasons that they measure happiness to the best of their ability, or so they think. It’s one reason that they’ve run mood changing studies (that they got into trouble for). This is the kind of social engineering that they want to engage in. It’s one of the reasons that they are trying to turn up the dial on the hedonic meter on the whole species. And that lets them ignore the edge cases, and those edge cases can be millions of people. People in Myanmar and Kenya. Women who are stalked and harassed through Facebook and have to rely on a clunky reporting system. The edge cases fall away and only recently has Facebook faced the sort of public scrutiny that has encouraged the company to take these problems seriously.

The information was available four or five years ago, longer in some cases. And they did nothing. But again, when you’re looking at that hedonic meter on your screen and you are seeing that the general happiness of Facebook users might be edging up, you can feel really good about the work you do every day and ignore the horrors on the margins.

And finally ...

A million Facebook users watched a video that blurs the line between bad satire and ‘fake news’

A conservative publisher put together a “satirical” fake interview with New York congressional candidate Alexandria Ocasio-Cortez in which video of her taken from another interview is spliced in against questions designed to make her look stupid. It is a viral hit, and many people think it is real. “Without the disclaimer, it’s indistinguishable from an awkward attempt at smearing a political opponent,” my colleague Adi Robertson reports:

That distinction matters to Facebook, which protects satire while demoting (but not deleting) “false news” in the News Feed. Facebook reiterated to The Verge that “we do offer satire on Facebook, as long as it’s not violating one of our community standards policies,” like hate speech. The call is left to Facebook’s fact-checkers, who can add written context or a “satire” label if a post is sufficiently confusing — one might have been added to CRTV’s post if they hadn’t added a disclaimer. (We don’t know whether CRTV was contacted by Facebook about the post, although we’ve reached out for clarification.)

But Facebook has acknowledged that “satire” can also be a bad-faith cover for serious misinformation attempts, and the distinction basically boils down to a poster’s intentions, which are irrelevant for people who are simply scrolling down the News Feed. Infowars founder Alex Jones has called himself a performance artist playing a character, and it’s not a leap to imagine Infowars or others making “satirical” conspiracy videos attacking school shooting survivors and claiming Facebook can’t censure them.

Not good, y’all!

Talk to me

Questions? Comments? Mueller rants? casey@theverge.com
