State of the Art

Tech Companies Like Facebook and Twitter Are Drawing Lines. It’ll Be Messy.

Credit: Doug Chayka

From its earliest days, Silicon Valley has been animated by a near-absolutist understanding of free speech. Other than exceptions for fraud, pornography or specific threats, the prevailing view among many tech platforms has been to allow pretty much anyone to post pretty much anything. These sensibilities are even enshrined in American law, which gives companies broad immunity from liability for what their users post.

But now, for good reason, the absolutist ethos is over.

Over the past two years, pressed by lawmakers and the media about the harm caused by misinformation, state-sponsored propaganda and harassment, tech platforms have begun to radically overhaul their attitudes about what people can say online and how they can say it.

Last week, Facebook announced a new plan to remove misinformation that it determines might lead to imminent harm. And WhatsApp, Facebook’s messaging subsidiary, said it would limit how widely messages on the service can be forwarded as a way to slow down viral rumors, some of which have led to mob violence in places like India.

The new policies are part of a larger change. Online services — not just Facebook but also Google, Twitter, Reddit and even those far removed from news and politics, like Spotify — are rethinking their relationship with the offline world. They are asking themselves a basic question: Where does our responsibility begin and end?

This is a huge deal, and it’s well past time for the tech companies to take a firmer stand against lies and harassment. Still, as they wrestle with the question of responsibility and where to draw the line on certain kinds of content, we should all get ready for a very rough ride.

Here’s why: A mostly hands-off approach has been central to the tech platforms’ growth, allowing them to get to globe-spanning scale without bearing the social costs of their rise. But because they are now so influential — Facebook alone has more than two billion users — and so deeply embedded in our lives, a more hands-on approach to policing content will ripple around the world, altering politics, the media business and much else in society.

It could also have the opposite of the effect many critics want: better policing of their own content could actually increase the power that tech platforms have to shape our lives.

I spent much of the last week talking to people who are working on these issues both inside and outside these companies. I came away from these conversations encouraged by their thoughtfulness, but there are few answers to the questions they face that will satisfy many people.

It’s great that tech giants are finally cognizant of their real-world effects. But there’s a lot of room for error in how they approach reviewing content. They are bound to get a lot wrong — either policing too much or too little — and may not be able to explain in any satisfactory way why they made certain decisions, arousing suspicion on all sides.

Last week, I took Facebook to task for the mind-numbing complexity of its emerging content policies, which speak broadly of the virtues of free expression but allow the company wide latitude to remove or reduce the distribution of certain posts for a variety of reasons. But it’s not just Facebook’s policies that are difficult to comprehend. Twitter’s rules elicit the same dizziness.

After talking to these companies and others, I got a sense of why their efforts to fix their issues are hard to understand. Tech platforms say they don’t want to be rash — they are all seeking input from many interested parties about how to develop content policies. They are also still exceedingly concerned about free expression, and still lean toward giving people the freedom to post what they would like.

Instead of banning speech, they often try to mitigate its negative effects by reaching for technical approaches, like containing the spread of certain messages by altering recommendation algorithms or imposing limits on their viral spread.
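
To make that abstract idea concrete, here is a minimal sketch, in Python, of how a viral-spread limit of the kind WhatsApp announced might work. Everything in it is an assumption made for illustration: the cap of five forwards, the Message class and the try_forward function are invented here and are not any platform's actual code.

    from dataclasses import dataclass

    MAX_FORWARDS_PER_MESSAGE = 5  # hypothetical cap, not a real platform setting

    @dataclass
    class Message:
        text: str
        forward_count: int = 0  # how many times this message has been passed along

    def try_forward(message: Message, recipients: list) -> list:
        # Forward only while the message is under the viral-spread cap;
        # returns the recipients it was actually delivered to.
        delivered = []
        for recipient in recipients:
            if message.forward_count >= MAX_FORWARDS_PER_MESSAGE:
                break  # stop the chain rather than deleting or blocking the content
            message.forward_count += 1
            delivered.append(recipient)
        return delivered

The point of the sketch is the design choice itself: nothing is censored, the message simply stops being amplified once it has traveled far enough.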

“There are nuanced policies for a reason,” said Monika Bickert, Facebook’s head of policy. “Some of this stuff is very complicated, and when we craft these policies, it is not a group of people sitting in Menlo Park, Calif., saying where we think the line ought to be.”

Ms. Bickert said she has set up regular meetings with a range of experts to hash out how Facebook should draw the lines on a host of specific kinds of speech. In general, the company removes content that is illegal, dangerous, fraudulent or otherwise spammy and inauthentic. But for areas that are less black-and-white, like misinformation, it takes a different approach.

“We reduce the distribution of information that is inaccurate, and we inform people with more context and perspective,” said Tessa Lyons, a product manager who heads Facebook’s effort to curb misinformation in the News Feed.

To do this, Facebook has partnered with dozens of fact-checking organizations around the world. It limits the spread of news that has been deemed false by showing those posts lower in users’ News Feeds, and it also displays more truthful articles as an alternative to ones that aren’t accurate.
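
As a rough illustration of that "reduce and inform" approach, the logic can be sketched in a few lines of Python. The demotion factor, field names and data structures below are assumptions made for this sketch, not a description of Facebook's actual ranking system.

    FALSE_RATING_DEMOTION = 0.2  # assumed multiplier; the real value is not public

    def rank_feed(posts, fact_check_ratings, related_articles):
        # posts: list of dicts with "id" and "base_score"
        # fact_check_ratings: post id -> rating string from a fact-checking partner
        # related_articles: post id -> list of more accurate articles to attach
        ranked = []
        for post in posts:
            score = post["base_score"]
            if fact_check_ratings.get(post["id"]) == "false":
                score *= FALSE_RATING_DEMOTION  # show it lower, don't delete it
                post = {**post, "context": related_articles.get(post["id"], [])}
            ranked.append((score, post))
        ranked.sort(key=lambda pair: pair[0], reverse=True)
        return [post for _, post in ranked]

The post rated false is still in the feed, just further down and accompanied by context, which is exactly the trade-off Ms. Lyons describes.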

Andrew McLaughlin, a former head of policy at Google who now runs an incubator that aims to build technology for progressive political movements, said he was impressed by Facebook’s efforts.

“I think I’m representative of a certain crowd of people who once took a really strong sense of pride in the sturdiness of our commitment to free speech on internet platforms,” he said. “But my views have certainly shifted in the caldron of experiences — and I am now glad that platforms like Facebook are really focusing resources and energy on malicious, manipulative propaganda.” (He previously consulted for Facebook, but is not currently working for the company.)

But I’m less sanguine, because there’s a lot we still don’t know about these policies and their effects.

One lingering question is political neutrality. Facebook has been targeted by conservatives who argue — without much evidence except for the fact that Silicon Valley is a liberal cocoon — that its efforts to police speech might be biased. In response, Facebook has invited Jon Kyl, a former Republican senator, to audit the company for bias against conservatives. Liberals, meanwhile, have argued that Facebook, in refusing to ban right-wing conspiracy factories like Alex Jones’s Infowars, is caving to the right.

I asked Ms. Bickert if Facebook takes potential political repercussions into account when deciding its policies. She told me that her team “seeks input from experts and organizations outside Facebook so we can better understand different perspectives and the impact of our policies on global communities.”

That’s gratifying, but it doesn’t get to the heart of the problem: Facebook is a for-profit corporation that, for both regulatory and brand-image reasons, wants to appear politically unbiased. But if it determines that some political actors — say, the alt-right in the United States, or authoritarian dictators elsewhere — are pumping out more false news than their opponents, can we count on it to take action?

The same suspicions apply to other platforms: Even though President Trump arguably violates Twitter’s content policies, his account has been allowed to stay up. Imagine the outcry if he were blocked.

This gets to the larger issue of transparency. Ms. Bickert is messianic about openness — she points out that Facebook was the first large platform to publish its entire community standards rule book.

That’s salutary. But if Facebook’s written policies are clear, how it carries them out is less so. Little is known, for example, about the army of contract workers the company hires to review content that has been flagged — in other words, the people who actually make the decisions. (Facebook says they are extensively trained and their actions audited.) And because much of Facebook is personalized and many of its rules are enforced through slight tweaks in its ranking algorithm, the overall effect of its content policies may be very difficult for outsiders to determine.

That problem also plagues other platforms. Twitter, for instance, has a content filter that governs which tweets are displayed in parts of your feed and search results. But the filter’s priorities are necessarily secret, because if Twitter tells you what signals it looks for in ranking tweets, people will simply game the system.

“People try to game every system there is,” David Gasca, a Twitter product manager, told me.

None of these problems are impossible to solve. Tech companies are spending huge sums to improve themselves, and over time they may well come up with innovative new ideas for policing content. For example, Facebook’s recent decision to release its data to a group of academic researchers may allow us to one day determine, empirically, what effects its content policies are really having on the world.

Still, in the end we will all be left with a paradox. Even if they’re working with outsiders to create these policies, the more these companies do to moderate what happens on their sites, the more important their own policies become to the global discourse.

A lot of people are worried that Mark Zuckerberg is already too powerful. The danger is that we ain’t seen nothing yet.

Email: farhad.manjoo@nytimes.com; Twitter: @fmanjoo.

A version of this article appears in print in Section B, Page 1 of the New York edition with the headline: Big Tech Is Drawing Lines. So Far, Very Fuzzy Ones.
