Bad Actors Are Using Social Media Exactly As Designed

Opinion: When ISIS uses Twitter to recruit or a landlord uses Airbnb to discriminate, that’s not exploiting the platforms' glitches—that’s using their features.

When Justice Department special counsel Robert Mueller announced criminal charges against Russian operatives for interfering with the 2016 presidential election, descriptions of how the Russians used modern communications technologies were all too familiar. Journalists referred to the ways in which Russia “manipulated social-media platforms,” and tech company executives like Facebook’s Rob Goldman decried “how the Russians abused our system.”

This is standard fare. When Russia manipulates elections via Facebook, or ISIS recruits followers on Twitter, or racist landlords deny rentals to black applicants and then offer the same listings to white applicants through Airbnb, commentators and companies describe these activities as “manipulation” or “abuse” of today’s ubiquitous websites and apps. The impulse is to portray this odious behavior as a strange, unpredictable, and peripheral contortion of the platforms.

But it’s not. It’s simply using those platforms as designed.

Twitter’s mission statement speaks of sharing ideas and demolishing barriers: “To give everyone the power to create and share ideas and information instantly, without barriers.”

It’s no surprise, then, that ISIS was drawn to Twitter’s ability to spread news about demolishing a different type of barrier. When the terrorist group startled the world in 2014 by sweeping through much of Syria and then pushing into Iraq, a key moment played out on Twitter: ISIS tweeted photographs of a bulldozer demolishing the earthen barrier that had long marked the border between Syria and Iraq.

Twitter later said that ISIS’s use “is not permitted on our service,” and that may be true as a matter of policy, but not as a matter of functionality. As ISIS used Twitter to break down barriers and share its own horrific ideas instantly and anonymously, it wasn’t manipulating how Twitter works. It was using the platform precisely as designed: to share ideas rapidly and globally.

“Belong anywhere” is Airbnb’s motto. But it turns out there are some who don’t think that just anyone deserves to belong anywhere. A 2016 study revealed that would-be renters with white-sounding names booked successfully on Airbnb 50 percent of the time, compared to 42 percent for would-be renters with black-sounding names.

In response, Airbnb commissioned a report that concluded that “fighting discrimination is fundamental to the company’s mission.” But what’s actually fundamental to the company’s mission is fighting virtually any form of regulation. That’s what maximizes Airbnb’s profits; it’s also what gives the platform essentially a free pass from decades of legal and regulatory infrastructure carefully crafted to fight housing discrimination.

For racist landlords to have unfettered discretion to pick and choose renters based on any criteria whatsoever, even skin color as it appears in profile photos, isn’t an exploitation of Airbnb’s features. It’s simply the use of those features, which Airbnb has since altered in some ways but has largely chosen to maintain.

And that brings us back to what Mueller’s charges reveal about how Russia used Facebook, among other platforms, to interfere with the 2016 election and sow discord among Americans. As Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism, told the New York Times, “Facebook built incredibly effective tools which let Russia profile citizens here in the U.S. and figure out how to manipulate us. Facebook, essentially, gave them everything they needed.”

For example, the type of polarizing ad that Facebook admits Russia’s Internet Research Agency purchased gets rewarded by Facebook’s undisclosed algorithm precisely because it provokes user engagement. And Facebook aggressively markets the microtargeting that Russia used to pit Americans against each other on divisive social and political issues. Russia didn’t abuse Facebook; it simply used Facebook.
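To see why provocative material wins under engagement-driven ranking, consider a toy model. Everything below is an illustrative assumption, not Facebook’s actual (and undisclosed) system: if delivery priority scales with predicted engagement, outrage bait beats civic information at an identical bid.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Ad:
    advertiser: str
    message: str
    bid: float                   # dollars offered per impression
    predicted_engagement: float  # model's estimate of reactions per view
    target_traits: Set[str]      # microtargeting criteria the advertiser picked

def rank_ads(ads: List[Ad], user_traits: Set[str]) -> List[Ad]:
    """Toy engagement-weighted auction: only ads whose targeting matches
    the user compete, and priority scales with predicted engagement."""
    eligible = [ad for ad in ads if ad.target_traits <= user_traits]
    return sorted(eligible, key=lambda ad: ad.bid * ad.predicted_engagement, reverse=True)

ads = [
    Ad("civic-group", "Polling places and election dates", 1.00, 0.02, {"us", "voter"}),
    Ad("troll-farm", "Outrage bait on a divisive issue", 1.00, 0.20, {"us", "voter"}),
]

# Identical bids, but the provocative ad's higher predicted engagement wins delivery.
for ad in rank_ads(ads, user_traits={"us", "voter", "midwest"}):
    print(ad.advertiser, ad.bid * ad.predicted_engagement)  # troll-farm 0.2, civic-group 0.02
```

The structural point: when predicted engagement multiplies into the ranking score, divisive content needs no exploit to win delivery; the auction is working as designed.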

Recognizing that these challenges, and others emerging on modern communications platforms, stem from the platforms’ inherent features isn’t an indictment of the companies whose services we’ve all come to rely on. On the contrary, it shows just how hard these problems are. And it calls for a reorientation in how the companies, and the rest of us, think about addressing these challenges.

First, if companies showed the world how their algorithms operate, and, moreover, how malicious actors are using their platforms, that enhanced transparency could yield crowdsourced solutions rather than leaving remedies to a tiny set of engineers, lawyers, and policy officials employed by the companies themselves.

Second, tech companies should at least experiment with bolder approaches to restricting malicious actors’ access to their services. So far, companies’ policies proscribe use by ISIS and certain other malicious actors, but in practice everyone can use the companies’ services unless and until another user complains about certain behavior and the company investigates and validates the complaint.

That default could be flipped for a narrow category of really bad actors. In an era of machine learning, activity that readily appears malicious, closely mimicking, say, the ways Russian trolls or ISIS have behaved in the past, could be halted automatically, at least initially. Humans could then expeditiously review each automatic “hold” to determine whether any accounts were improperly halted and, where appropriate, promptly reverse the suspension.
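As a concrete sketch of what that flipped default might look like, consider the toy implementation below. It assumes a hypothetical similarity_model that scores how closely an account’s recent activity mimics known troll or ISIS behavior; the class, threshold, and queue are all illustrative assumptions, not any platform’s actual moderation system.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List

class Status(Enum):
    ACTIVE = "active"
    HELD = "held"            # halted automatically, pending human review
    SUSPENDED = "suspended"  # hold confirmed by a human reviewer

@dataclass
class Account:
    account_id: str
    status: Status = Status.ACTIVE

class HoldFirstModerator:
    """Flipped default: halt apparent bad actors first, then have humans
    expeditiously confirm or reverse each automatic hold."""

    def __init__(self, similarity_model: Callable[[str], float], threshold: float = 0.95):
        # similarity_model: hypothetical classifier scoring, from 0 to 1, how
        # closely recent activity mimics past Russian-troll or ISIS behavior.
        self.similarity_model = similarity_model
        self.threshold = threshold  # deliberately high: "really bad actors" only
        self.review_queue: List[Account] = []

    def screen(self, account: Account, recent_activity: str) -> Status:
        if account.status is Status.ACTIVE and self.similarity_model(recent_activity) >= self.threshold:
            account.status = Status.HELD  # halt automatically, at least initially
            self.review_queue.append(account)
        return account.status

    def review_holds(self, human_decides: Callable[[Account], bool]) -> None:
        # A human reviews every automatic hold: confirm it, or promptly reverse it.
        while self.review_queue:
            account = self.review_queue.pop(0)
            account.status = Status.SUSPENDED if human_decides(account) else Status.ACTIVE

# Example: a crude stand-in model flags one phrase; a human then reverses the hold.
moderator = HoldFirstModerator(lambda text: 0.99 if "troll-pattern" in text else 0.05)
account = Account("user-123")
moderator.screen(account, "troll-pattern content")     # account.status -> Status.HELD
moderator.review_holds(human_decides=lambda a: False)  # improper hold: back to ACTIVE
```

Two design choices carry the paragraph’s caveats: the threshold is deliberately high, so only the narrow category of apparently really bad actors is ever held automatically, and every hold lands in a human review queue where an improper suspension can be promptly reversed.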

That would represent a huge shift in how companies approach use of their platforms. But, at least as an experiment, it would take seriously the growing demand to spot and halt bad actors before they can post radicalizing content or ensure that their socially divisive messages go viral.

The standard line, that these challenges represent peripheral exploitations of these platforms, yields hope that they can be addressed by strictly technical, engineering solutions, such as Facebook’s recent announcement that it would recalibrate the algorithm driving its News Feed. As the author of a report commissioned by Airbnb about discrimination on the platform puts it, “Just as teams of lawyers were assembled to fight discrimination in the mid-20th century, it is my hope that 21st century engineers will do their part to help eliminate bias.” That might suffice if these were truly marginal manipulations of today’s technologies.

But they’re not. Because a few bad actors are putting the technologies’ core features to bad ends, these challenges aren’t susceptible to technical solutions alone. Addressing them ultimately demands “turning off,” or making unavailable, core product features for users abhorrent enough not to deserve access to them. Figuring out which users fall into that category is a value judgment, the kind of value judgment that the libertarian ethos of tech companies has left them very reluctant to make.

The engineers will have their role to play in the 21st century, but so will the lawyers, the policy wonks, the ethicists, and perhaps even the moral philosophers. Ultimately, these problems stem not from the platforms’ glitches but from their very features. Through enhanced transparency and a willingness to experiment with restricting malicious actors’ access to their services, though, the companies can enlist the rest of us to offer informed help and feedback, too.

