Internet Deception Is Here to Stay—So What Do We Do Now?

Fake followers. Fake news. Foreign influence operations. The last decade revealed that much of what's online is not as it seems.

It was 2010 and techno-optimism was surging. A whopping 75 percent of American adults were online, a big jump from the 46 percent who were logging on a decade prior, and for the first time people en masse were cruising through the information age largely from the comfort of their own homes. Social media was relatively new and gaining traction, especially among young people, as the world’s attention appeared to shift from the browser-based web to apps.

The Pew Research Center marked the new decade by asking 895 leading technologists, researchers, and critics for predictions of what the internet-connected world of 2020 would look like. On one subject, there was an overwhelming consensus: 85 percent of respondents agreed that the “social benefits of internet use will far outweigh the negatives over the next decade,” noting that the internet by and large “improves social relations and will continue to do so through 2020.” They pointed to the ease of communication and wealth of knowledge granted by the information age as reasons to be optimistic about the future.

What could possibly go wrong?

A lot, as it turns out. An early sign of the coming infopocalypse came in the form of A Gay Girl in Damascus. The blog chronicled the life of its author, Amina Arraf, a 35-year-old gay Syrian woman participating in an uprising against President Bashar al-Assad. It quickly found a global audience, who became enraptured with Arraf’s moving prose and vivid description of queer life in the Middle East. The Guardian described her as “an unlikely hero of revolt in a conservative country.”

Until June 6, 2011, when a different kind of post appeared on the blog. It was a panicked update from Arraf’s cousin explaining that she had been thrown into the back of a red minivan by three mysterious men in downtown Damascus. News of the kidnapping quickly spread around the globe, prompting reports from The Guardian, The New York Times, Fox News, CNN, and more. A “Free Amina” campaign led to the creation of posters and other websites. The State Department even reportedly started an investigation into her disappearance.

Six days after the so-called kidnapping, the truth emerged: The gay girl from Damascus was a straight 40-year-old American man from Georgia named Tom.

The blog, social media accounts, and nearly six years of forum postings under the name Amina Arraf were all fake. The hoax rocked the blogosphere and marked a turning point in public awareness of digital deception. The Washington Post said it illustrated the “ease of fudging authenticity online.”

The internet has always been awash with deception, dating to its earliest days. A 1998 paper by Judith Donath, a researcher and adviser at Harvard’s Berkman Klein Center, detailed the effects of trolling, misinformation, and disinformation on Usenet groups. The troubles sound familiar:

A troll can disrupt the discussion on a newsgroup, disseminate bad advice, and damage the feeling of trust in the newsgroup community. Furthermore, in a group that has become sensitized to trolling—where the rate of deception is high—many honestly naïve questions may be quickly rejected as trollings … Compared to the physical world, it is relatively easy to pass as someone else online since there are relatively few identity cues … Even more surprising is how successful such crude imitations can be.

Even as the web blossomed in the following decade, and more people gained access, these concerns largely stayed below the surface. But the last decade has made the extent—and the consequences—of online falsehoods all the more clear.

Flaws emerged in the web’s key measuring sticks: likes, clicks, follower counts, views, and so on. In July 2012, a startup made headlines by reporting that only one in every five clicks on its Facebook ads appeared to come from humans. The rest, the company alleged, were from bots. The assertion seems almost quaint now. But at the time, it was viewed as “an explosive claim that could give pause to brands trying to figure out if advertising works on Facebook.”
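How would anyone arrive at a figure like that? One technique widely reported in that era was to check whether a click ever executed the JavaScript on the advertiser’s landing page, since real browsers run it and crude bots typically didn’t. Below is a minimal sketch of that kind of heuristic; the Click record, field names, and sample log are hypothetical illustrations, not any company’s actual pipeline:

```python
# A minimal sketch of a JavaScript-execution heuristic for separating ad
# clicks that look human from ones that look automated. The Click record,
# field names, and sample data are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Click:
    ip: str
    user_agent: str
    js_executed: bool  # did a JS beacon on the landing page fire?

def estimate_human_share(clicks: list[Click]) -> float:
    """Return the fraction of clicks that executed the landing page's JS."""
    if not clicks:
        return 0.0
    return sum(c.js_executed for c in clicks) / len(clicks)

log = [
    Click("203.0.113.5", "Mozilla/5.0", js_executed=True),
    Click("198.51.100.7", "python-requests/2.0", js_executed=False),
    Click("198.51.100.8", "curl/7.58.0", js_executed=False),
    Click("198.51.100.9", "Scrapy/1.8", js_executed=False),
    Click("198.51.100.10", "Mozilla/4.0", js_executed=False),
]

print(f"{estimate_human_share(log):.0%} of clicks look human")  # 20%
```

The weakness, then as now, is obvious: a bot driving a real headless browser executes the JavaScript and sails right through.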

The claim marked a new era of doubt online. The following month, in August 2012 (on a Friday before a holiday weekend, in typical tech company fashion), Facebook said it had identified and removed fake Likes used by a number of pages to make them seem more popular than they were.

“Facebook says the crackdown ‘will be a positive change for anyone using Facebook.’ But that’s not true,” Ryan Tate wrote for WIRED at the time. “Fraudsters are clearly using Facebook, too, hence all the fake ‘likes.’ And they’ll be racing to thwart Facebook’s filters. Summer ends this weekend with a victory for Facebook’s ‘like’ engineers. But the arms race has just begun.”

In 2013, YouTube faced its own uncomfortable reality: the volume of fake traffic from bots pretending to be real viewers rivaled the traffic from actual humans. Some employees worried the imbalance could bring about what they called “the Inversion,” in which YouTube’s manipulation-detection systems would get confused, interpreting fake views as real and flagging those made by humans as suspicious.
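The fear was, at bottom, a base-rate problem: a detector that infers “normal” from the bulk of the traffic it sees adopts the bots’ behavior as its baseline once bots become the majority. Here is a toy illustration of that dynamic, with made-up watch times and a deliberately crude median-distance rule; it resembles no platform’s real system:

```python
# A toy illustration of the "Inversion": an anomaly detector that treats the
# majority's behavior as normal flips its verdicts once fake traffic
# outnumbers real traffic. Watch times and thresholds are made up.

import statistics

def flag_suspicious(watch_times: list[float], tolerance: float = 15.0) -> list[bool]:
    """Flag views whose watch time strays far from the majority's median."""
    median = statistics.median(watch_times)
    return [abs(t - median) > tolerance for t in watch_times]

humans = [100.0, 105.0, 110.0, 115.0]  # varied, organic watch times (seconds)
bots = [30.0, 30.1, 29.9, 30.0]        # uniform, scripted playback

# Mostly human traffic: the scripted views stand out and get flagged.
print(flag_suspicious(humans + bots[:2]))
# -> [False, False, False, False, True, True]

# Mostly bot traffic: the median shifts to 30s, and the humans get flagged.
print(flag_suspicious(humans[:2] + bots * 2))
# -> [True, True, False, False, False, False, False, False, False, False]
```

Once scripted views dominate the sample, the median collapses to their uniform 30 seconds, and the varied human sessions become the anomalies.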

That scenario never came to pass, but the scourge of fake engagement plagues social media giants to this day. The practice has become so profitable and popular that entire sub-industries have formed, some to produce fake likes, followers, and views, others to catch those who purchase them.

All this fakery was, at its core, about money. Soon, the stakes grew even larger. In late 2012 foreign information operations began to make headlines for their use of social media. Members of the Taliban masqueraded as attractive women on Facebook and friended Australian soldiers in hopes of gleaning military intel from their conversations. Details were sparse, but the implications were recognized as grave. As WIRED noted at the time: “These were only opening salvos in the social media wars. The next acts of digital espionage could inflict real damage.”

And they have. In Myanmar, disinformation shared on Facebook fueled chaos and confusion, leading to violence and riots. In the West, Russia’s Internet Research Agency wreaked havoc on 2016’s Brexit vote and US presidential election. US intelligence officials say similar efforts are practically a certainty next year.

In May 2014, The Washington Post launched a series called “What was fake on the internet this week,” in response to what it described as “an epidemic of urban legends and internet pranks.” At the outset, the typical internet hoax du jour was a lighthearted, silly affair: false stories about pregnant tarantulas roaming the streets of Brooklyn, or the makers of Oreo launching a fried chicken flavor.

By the end of 2015, the series was shelved, not because of a dearth of fake content online but because the pace and tenor of online disinformation had become much harder to stomach. The fakes were easier to spot, yet they garnered ever more traffic. The subject matter had grown hateful, gory, and divisive; it was less funny and more upsetting. Reporter Caitlin Dewey explained the change in her sign-off column:

There’s a simple, economic explanation for this shift: If you’re a hoaxer, it’s more profitable. Since early 2014, a series of internet entrepreneurs have realized that not much drives traffic as effectively as stories that vindicate and/or inflame the biases of their readers. Where many once wrote celebrity death hoaxes or “satires,” they now run entire, successful websites that do nothing but troll convenient minorities or exploit gross stereotypes … There’s Now8News, which runs outrageous crime stories next to the stolen mugshots of poor, often black, people; or World News Daily Report, which delights in inventing items about foreigners, often Muslims, having sex with or killing animals.

Peddling polarizing content and disinformation only grew easier and more lucrative as the decade progressed. There was an audience, and the powerful targeting tools offered by Facebook and other tech giants put it just a few clicks away. A 2016 BuzzFeed News investigation found that in the final months of the US presidential campaign, viral fake news stories on Facebook gained more shares, reactions, and comments than top articles by The New York Times, The Washington Post, and other major news outlets. Nearly all of the top-performing fake election stories had an overtly pro-Trump or anti-Clinton bent.

Little by little, the effects of this online fakery seeped into the real world. A legion of automated Twitter accounts helped Pizzagate, the toxic conspiracy theory that culminated in a gun-wielding adherent opening fire in a DC pizzeria in 2016, gain traction by making the theory appear to have more real-world supporters than it actually did. Russia’s IRA infamously paid US citizens to build a cage atop a flatbed truck and dress up like Hillary Clinton in prison during a Florida rally; the group also paid protesters at rallies it organized in New York and Pennsylvania, which were advertised on Facebook.

The list goes on and on: The term “fake news” somehow became fake news; the White House tweeted a doctored video from InfoWars to support an inaccurate narrative; news of a migrant caravan traveling through Mexico was weaponized to spread disinformation; a video of Nancy Pelosi that had been edited to make her appear drunk received millions of views on Facebook; deepfakes were released upon the world. Politicians are now free to spread false information on Facebook so long as they purchase an ad.

It’s enough to make your head spin. The future of truth online seems so bleak that the experts are experiencing an existential crisis. Worst of all, there don’t appear to be any solutions. The spread of disinformation and polarizing content stems from an array of hard-to-pin-down factors, and many of the most common approaches to the problem only tackle one element, rather than the whole.

In a Pew survey shortly after the 2016 election, 14 percent of US adults reported sharing a political news story online that they knew at the time was made up. “In these cases, fact-checking isn’t going to do a single thing to correct the falsehoods,” Whitney Phillips, a Syracuse professor whose research focuses on information pollution, wrote recently for the Columbia Journalism Review. “Facts have, quite literally, nothing to do with it.”

Fakery isn’t going away. A deception-free internet is a nostalgia-steeped illusion; falsehood has been part of the digital world practically since its inception. A better question might be how large online fakery will loom a decade from now. At a certain point, salacious falsehoods will no longer be as profitable, and some outlets may lose their standing as viable sources of information. But it’s difficult to say whether that will be enough to stem the tide of disinformation. If anything, the past decade has proven the folly of attempting to predict the future.

