Why Facebook doesn’t follow the First Amendment

In leaked audio, Mark Zuckerberg says a different approach to content moderation is needed

Illustration by Alex Castro / The Verge

Programming note: Zoe and I are both on assignment this week, and The Interface will be off Thursday while we work on some special reports. The silver lining is that Monday’s issue will be very long!

Yesterday here we talked about whether politicians should be able to lie in their Facebook ads. I argued that they should be able to: Facebook ads are public and searchable, and if a politician or political party is out there telling lies, that seems like an important and useful thing for a democracy to know about. Facebook is big and its CEO is unaccountable to any electorate, and so I would rather the company not referee political speech.

Many readers see things differently, though, so I wanted to air out a few of your takes.

The most common response I got was a kind of have-your-cake-and-eat-it-too argument: citizens should push for (1) Facebook to be broken up, but (2) expect it to referee political speech until that happens. Here’s one reader take:

In reference to Facebook, you said “To worry about Facebook’s vast size and influence — and I do! — while also demanding that it referee political speech seems like an odd contradiction.” 

I don’t think it’s a contradiction at all. I think regardless of the size of a company, it should strive to eliminate or at least label misinformation including and perhaps especially in political ads. Those two issues are able to live side by side easily, in my opinion. 

I think that’s basically right, though it doesn’t really address my larger concern, which is the giant unaccountable corporation refereeing what politicians can say.

Another common response was that the whole thing just seems a little too convenient — Facebook gets to wash its hands of fact-checking on some of the toughest questions it will face, and reap all the profits? Here’s another reader:

With the company publishing advertising content and then having it examined by third-party fact-checkers, the process might be more democratic and fair than if it were done by either Facebook or the state, but it also means that there is always a possibility that false advertising has severe implications as it is broadcast across its platform, even if it is debunked later on.

In this sense, it means Facebook is practically reaping the benefits of such lax policies with regards to advertising (attracting a wide range of clients and the money from publishing advertisement) and also avoiding the responsibilities and costs associated with actually taking decisions proactively.

Another reader put it more concisely:

If it’s too difficult to make sure political ads are not full of lies, they shouldn’t accept political ads. Kind of like a supermarket not selling food that they aren’t sure won’t give you food poisoning.

These criticisms strike me as basically fair, and it’s worth recalling that Facebook once considered banning political advertising as not worth the trouble. (It generates less than 5 percent of the company’s revenue, according to Reuters.)

But none of this really engages with my larger frustration here, which is that people seem to be holding Facebook responsible for politicians’ lies when we could be holding the politicians responsible instead. I get the fear that we live in a post-truth world where people just believe whatever their party’s Facebook ad tells them to believe, but it also seems defeatist and more than a little patronizing.

As it so happens, Mark Zuckerberg discussed how the company moderates political speech in the leaked audio obtained by The Verge. In this section, which has not previously been published, an employee has asked whether Facebook ought to model its content policies strictly after the First Amendment. (A senator recently proposed making this the law of the land.) Zuckerberg says no, that most people want the company to go much further than the First Amendment. In the rest of his answer, Zuckerberg describes the difficulty of making decisions about what is misinformation when it comes to a subject like immigration in Europe, and suggests he is resigned to facing criticism here no matter what he does.

He’s talking about moderation generally, not Facebook’s decision to avoid making these calls on political ads. But his thinking here adds some color to why he would make that decision:

Mark Zuckerberg: Overall, I don’t really think that people don’t want us to moderate content. There’s like 20 categories of harmful content that we focus on. They’re all different. Everything ranging from terrorist propaganda to bullying to incitement of violence to gory content to pornography. ... 18 out of the 20 categories are not that controversial. There’s some controversy in each one on the edges of exactly how you set the policies. But broadly speaking, [they] are not the thing that people are focused on.

There are two categories that are very sensitive politically, and they are hate speech and misinformation.

And the issue on this that we’ve run into on hate speech ... a lot of people think that we need to be more aggressive in moderating content that is offensive or basically would make certain groups of people feel unsafe. And then there are other groups on the other side of these debates who feel like they’re engaging in legitimate political discourse.

It’s always hard to talk about this in the context of your own political environment. So I find it a little easier to depressurize this, and think about some of the European debates that are going on around migration, and some of the challenges of integrating large numbers of people who have come into these different countries fleeing Syria and other places. The debate that goes on is that well, some of the stuff ends up being overly generalized and feeling hateful, some of the people on the other side [say] “well, I’m trying to discuss the real issues around ... trying to integrate lots of people into a society at once.” Like, we need to be able to have these debates. Where’s the line?

That’s really hard, and we’re kind of right in the middle of that. I don’t think anyone says that we shouldn’t, that we should [follow the] First Amendment. But that’s a really tricky balance.

The other one on misinformation, I think is really tricky. Because on the one hand, I think everyone would basically agree that you don’t want the content that’s getting the most distribution to be flagrant hoaxes that are tricking people. But the other side of the debate on this is that a lot of people express their life and their experiences by telling stories, and sometimes the stories are true and sometimes they’re not. And people use satire and they use fiction ... and the question is, how do you differentiate and draw the line between satire or a fictional story? Where is the line?

It’s not that it’s 100 percent difficult, but there are new nuances in doing this. A lot of people feel like in a world where a lot of the people who are arbitrating what is misinformation and doing fact-checking tend to be left of center, that that is getting in the way of an ability to express something that they feel is real and that matches their lived experience. So you want to do both, right? You want to make sure that you give people a voice to express their lived experience in a civil way, and you want to make sure that the stuff that’s going viral is not ... blatant, flagrant hoaxes that are going to be harmful.

So those two are by far the most fraught. But overall ... I haven’t had anyone come to us and say, “please allow terrorist propaganda on your service.” Even the people who are putting forth the bills in Congress for a debate saying that they want more openness on the platform. So I don’t think it’s gonna go in that direction. I just think the reality is we’re kind of stuck in this nuanced area, and will continue to get it coming from a lot of different sides as we try to navigate this as well as possible. 

The Ratio

Today in news that could affect public perception of the tech platforms.

🔼 Trending up: Google will now require manufacturers of Android devices to incorporate its digital wellbeing features, including parental controls and screen time monitoring.

🔽 Trending down: Like Facebook before it, Twitter was caught using phone numbers given for two-factor authentication purposes to target ads at people.

🔽 Trending down: Google contractors in London are threatening a strike over unpaid bonuses, job cuts, and bad working conditions.

Governing

⭐ The protests in Hong Kong continue to have ripple effects around the world, as companies with business interests in China struggle to allow employees and customers freedom of expression without mortally offending the Chinese government.

Today, Marco Rubio called on lawmakers to open an investigation into ByteDance’s TikTok, citing evidence that the Chinese company is censoring content in America. Tony Romm and Drew Harwell at The Washington Post have the story:

In a series of tweets, Rubio added that he has asked the Trump administration to “fully enforce anti-boycott laws” that prohibit any person or “U.S. subsidiaries of Chinese companies” from “complying with foreign boycotts seeking to coerce U.S. companies to conform with #China’s government views.”

Rubio’s tweets echo waves of criticism aimed at US and Chinese tech companies for suppressing content that is supportive of pro-democracy protesters in Hong Kong. TikTok has gotten a fair amount of this scrutiny due to its popularity and murky content moderation policies:

TikTok’s lack of content related to the Hong Kong protests, which Chinese leaders have pushed to undermine, has raised fears that the platform is censoring ideas the government wants to suppress. In response, TikTok’s Beijing-based parent company told The Washington Post last month that the app’s U.S. platform was not influenced by the Chinese government, and that the lack of protest footage could be related to users’ view of the app as a place for entertainment, not politics. It declined to share any additional details about its content-moderation practices.

On the flip side, Apple is taking heat from Chinese state media for allowing an app that tracks Hong Kong police onto the App Store. After initially blocking the app, HKmap.live, Apple allowed it onto the App Store last week. It uses crowdsourcing to alert protesters to the location of law enforcement. Apple, which depends on China for revenue and manufacturing more than perhaps any other tech giant, is doing the right thing here — and it might cost them. (Verna Yu / The Guardian)

Then again, Apple banned the Quartz news app from its Chinese app store. Quartz has been closely covering the Hong Kong protests.

Activision Blizzard suspended a player of its game Hearthstone who expressed support for Hong Kong protesters. The move came after Ng Wai Chung, known as Blitzchung, dressed in a gas mask and goggles and used a pro-democracy protest slogan during a post-match interview. He’s now banned from competing for a year. Some Blizzard employees walked out of their offices Wednesday in protest. Elsewhere, Fortnite maker Epic Games used the moment to reassure players it wouldn’t ban them for political speech. (Gregor Stuart Hunter and Zheping Huang / Bloomberg)

Mark Zuckerberg is going to testify before the House Financial Services Committee about Libra on October 23rd. It will be the first time a Facebook executive has testified before Congress since David Marcus spoke to lawmakers about the company’s planned cryptocurrency in July. (Akela Lacy / The Intercept)

The news comes just as lawmakers are putting pressure on Visa, Mastercard, and Stripe to reconsider their involvement in the Libra Association. In a letter to the companies’ CEOs, Sens. Brian Schatz (D-HI) and Sherrod Brown (D-OH) warned about the project’s many risks, including facilitating criminal and terrorist financing and destabilizing the global financial system. (Russell Brandom / The Verge)

To top it all off, the Libra Association’s head of product, Simon Morris, quietly left the group in August for undisclosed reasons. I assume the reason wasn’t “it’s going really well and I simply have nothing left to do around here.” (Alex Heath / The Information)

Joe Biden asked Facebook to reject ads from the Trump campaign containing misleading claims about his family’s business dealings in Ukraine. Facebook said no. (Lauren Feiner / CNBC)

The Senate Intelligence Committee released a report on Russia’s 2016 election meddling, calling out tech companies like Google and YouTube for helping spread misinformation. Previous reports focused mostly on Twitter and Facebook. (Georgia Wells, Robert McMillan and Dustin Volz / The Wall Street Journal)

The Foreign Intelligence Surveillance Court ruled that an FBI program targeting foreign suspects violated the rights of American citizens by collecting their personal data along with the data of foreign targets. The program ran from 2017 to 2018 and involved gathering email addresses and phone numbers. (Zachary Evans / National Review)

Industry

An anti-Semitic shooting in Germany was live-streamed on Twitch. The incident could renew pressure on tech companies to catch these crimes as they happen and do more to remove replays from their servers. Makena Kelly:

Today’s attack echoed the March mass shooting of Muslims in Christchurch, New Zealand — which was streamed on Facebook Live. In today’s roughly 35-minute video, a man is seen shooting two people and attempting unsuccessfully to break into the synagogue. He also gives a brief speech into the camera, railing against Jews and denying that the Holocaust happened. Two people have been confirmed dead in today’s attack, and German law enforcement has raised the possibility that multiple attackers were involved. Only one perpetrator appears in this video.

It’s unclear how many people watched the initial stream or how many copies may have been archived at Twitch — which is owned by Amazon — or on other sites. Extremism researcher Megan Squire reported that the video was also spread through the encrypted platform Telegram, with clips being viewed by around 15,600 accounts. The Christchurch shooting was viewed live by only a few people, but reuploaded roughly 1.5 million times after the attack — so dealing with the aftermath will be a real concern.

Americans have a patchy understanding of digital security, according to a new survey by Pew. Just 28 percent can identify an example of two-factor authentication — one of the most important ways to protect online accounts. And nearly half weren’t sure what private browsing is. (Emily A. Vogels and Monica Anderson / Pew)

Instagram turned Throwback Thursday into an official feature. It’s called “On This Day,” and allows users to share a random photo they posted on the same calendar date in the past. The launch is part of the app’s new “Create” mode, which lets users play around with interactive stickers, drawings and text without needing to take a photo first. (Josh Constine / TechCrunch)

YouTube launched a new tool that lets politicians book ad space months in advance. The tool could be valuable for politicians looking to capitalize on YouTube’s targeted ad capabilities before voting begins in Iowa and New Hampshire in February. (Emily Glazer and Patience Haggin / The Wall Street Journal)

YouTube narrowly passed Netflix as the #1 video streaming platform for teens, according to a study from investment firm Piper Jaffray. Netflix still beat out Hulu and Amazon by a comfortable margin. (Annie Palmer / CNBC)

Microsoft’s Airband initiative, which launched in 2017 to improve rural internet access across the US, is now expanding to Latin America and Sub-Saharan Africa. The goal is to get 40 million more people connected to the internet by July 2022. (Jon Porter / The Verge)

And finally...

Coleen Rooney Accused Someone Using Rebekah Vardy’s Instagram Account Of Selling Fake Stories About Her To The Tabloids And It’s So Dramatic

Generally speaking, I try to stay out of disputes between the wives and girlfriends (WAGs!) of British footballers. Even when one of them is maybe secretly funneling stories about another one to the tabloids. But then Coleen Rooney revealed her devilishly clever method of uncovering her betrayer. She spent five months posting fake stories to her Instagram account, limiting the audience for those stories to a single person — fellow WAG Rebekah Vardy.

Normally this is where I would quote the story, but my favorite element of this drama isn’t captured in this piece: Coleen has now been feted on Twitter with her very own hashtag: #WagathaChristie.

Talk to us

Send us tips, comments, questions, and even more misleading political ads: casey@theverge.com and zoe@theverge.com.