
Twitter users are protesting Alex Jones with a viral block list


More user-powered innovation on Twitter



Last week, we talked about why Facebook banned Alex Jones — and Twitter didn’t. Facebook saw that Jones, who had already violated any number of the platform’s rules, had no intention of reforming himself. Twitter said first that Jones had not broken any rules, and then — after CNN’s Oliver Darcy showed the company a series of offending tweets — that he had, but not enough to get banned.

Late on Tuesday, Twitter took another half-step toward banning Jones — suspending him for a week, after he posted a video on Twitter in which he encouraged his followers to get their “battle rifles” in anticipation of all-out war with his enemies.

In the mind of Jack Dorsey, Twitter’s co-founder and CEO, this suspension represented an opportunity for Jones to reflect on his bad behavior. “I feel any suspension, whether it be a permanent or a temporary one, makes someone think about their actions and their behaviors,” Dorsey told NBC News’ Lester Holt, in one of two interviews he did on Wednesday.

In the spirit of thinking about their actions and behaviors, Jones’ crew more or less immediately posted the battle-rifles video to the separate Infowars account. That earned the Infowars account a weeklong suspension of its own. Twitter being Twitter, the offending video remained viewable on Twitter-owned Periscope for nearly a day afterward. (Elsewhere in Twitter being Twitter, the Jones account continued to tweet for some time after his suspension, because it turns out that if you schedule tweets to post before you get suspended, those tweets will continue to post just fine.)


After introducing this round of half measures, Dorsey sat down with the Washington Post’s Tony Romm and Elizabeth Dwoskin to announce that he was “rethinking the core of how Twitter works.”

“The most important thing that we can do is we look at the incentives that we’re building into our product,” Dorsey said. “Because they do express a point of view of what we want people to do — and I don’t think they are correct anymore.”

A now-common criticism of Twitter holds that the viral mechanics through which tweets spread encourage the polarization of the audience into warring tribes. (See this Ezra Klein piece from last week.) That’s one way to explain why malicious users like Jones are able to thrive on social networks: their bombastic speech attracts a wave of initial attention, and platform algorithms help them find a much larger audience than they ever would otherwise. It’s in this sense that “incentives built into the product,” as Dorsey calls them, bear reconsideration.

Dorsey has more ideas. Labeling automated bots to distinguish them from accounts run by real people, for example. Or this one, cribbed from YouTube:

One solution Twitter is exploring is to surround false tweets with factual context, Dorsey said. Earlier this week, a tweet from an account that parodied Peter Strzok, an FBI agent fired for his anti-Trump text messages, called the president a “madman” and has garnered more than 56,000 retweets. More context about a tweet, including “tweets that call it out as obviously fake,” could help people “make judgments for themselves,” Dorsey said.

This is all fine, so far as it goes. Along with other tech leaders, Dorsey is expected to testify next month at a Senate hearing about information campaigns in politics. It makes sense that the CEO of Twitter would seek to convey a sense of urgency around solving the problems that have bedeviled the platform for many years now.

And yet at the same time, Twitter has never lacked for ideas. Ask anyone who ever worked there: any feature suggestion you could offer had already been debated ad nauseam. The problem always came down to the details, to the implementation, to how you were going to ship the damned thing.

That’s why I can view Dorsey’s vague promises on Wednesday only through the prism of the Alex Jones saga. Twitter was the very last of its peers to take any action against the Infowars host, and even when it did decide to punish him, it did so in the most lenient possible terms.

It offered Jones a loophole that let him keep tweeting. It left the offending video up for many hours. And it promised Jones that he could return — and in just a week, too. Twitter knew it had to punish Jones for his behavior. The trouble, as always for this company, was in the details.

But as the company dithers, its users are organizing. This week, Grab Your Wallet founder Shannon Coulter had a viral Twitter thread suggesting a concrete action Twitter users could take to protest Jones’ ongoing presence on the platform. Coulter compiled a list of the Twitter handles of the Fortune 500, then made it available as a collective block list. Protesters could install the block list with a couple of clicks, and once they had done so, ads from those companies would no longer appear in their Twitter timelines.
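The mechanics here are worth spelling out: a shared block list is just a list of handles, and blocking an advertiser keeps its promoted tweets out of your timeline. Below is a minimal sketch of how a tool might apply such a list through Twitter’s standard v1.1 blocks/create endpoint; the credentials and handles are placeholders, and this is an illustration under those assumptions, not Coulter’s actual implementation:

```python
# Minimal sketch (not Coulter's tool): apply a shared block list via
# Twitter's v1.1 blocks/create endpoint. Credentials and handles below
# are placeholders.
import requests
from requests_oauthlib import OAuth1

BLOCK_ENDPOINT = "https://api.twitter.com/1.1/blocks/create.json"

auth = OAuth1(
    "CONSUMER_KEY", "CONSUMER_SECRET",
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
)

# A shared block list is, mechanically, just a list of handles.
block_list = ["examplebrand1", "examplebrand2"]

for handle in block_list:
    # Blocking an account keeps its promoted tweets out of your timeline,
    # which is what turns blocking advertisers into a form of protest.
    response = requests.post(BLOCK_ENDPOINT, auth=auth, data={"screen_name": handle})
    response.raise_for_status()
    print(f"Blocked @{handle}")
```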

As of yesterday, more than 50,000 people had installed her tool. Users have previously gifted Twitter the hashtag, the @ mention, and the retweet; Coulter may have just given us the viral block list. And while Twitter talks endlessly about what it might do someday, a growing faction in its user base is taking action right now.

Myanmar

In March, the United Nations said Facebook was being used to incite violence against the Rohingya, a Muslim minority group. Ever since, regular reports have explored how Facebook failed to hire native-language speakers who could have identified hate speech on the platform as it began to spread, and ignored warnings from local groups and regional experts that the situation was getting out of hand.

Reuters’ Steve Stecklow has delivered the most comprehensive account yet of Facebook’s misadventure in Myanmar. His piece reveals the existence of Operation Honey Badger, a content moderation shop focused on Asia that is run by Accenture on Facebook’s behalf. Despite the efforts of its 60 or so moderators, Reuters easily found 1,000 pieces of anti-Rohingya hate speech on Facebook.

In part, that’s because Facebook’s vaunted artificial intelligence systems are failing.

In Burmese, the post says: “Kill all the kalars that you see in Myanmar; none of them should be left alive.”

Facebook’s translation into English: “I shouldn’t have a rainbow in Myanmar.”

So what happens next? Vice’s David Gilbert reports that Facebook is conducting a human rights audit “to assess its role in enabling ethnic violence and hate speech against its Rohingya Muslim minority.”

The audit, which Facebook confirmed, will be conducted by the San Francisco firm Business for Social Responsibility. Gilbert says the report could be finished by the end of this month. The company is also hiring for a variety of policy roles specific to Myanmar, a first for Facebook.

These are important steps, and while it’s unclear what action they might result in, they convey the appropriate degree of seriousness. Facebook — and the wider world — have a lot riding on whether the company gets it right. Activists have described similarly violent outbreaks of hate speech in countries including Vietnam, India, Cambodia, and Sri Lanka. The conflict in Myanmar is bloody, but it is by no means unique.

Democracy

How social media took us from Tahrir Square to Donald Trump

Zeynep Tufekci offers a concise history of how optimism around social media as a tool for peaceful protest faded into existential worries. Worth reading in full:

First, the weakening of old-style information gatekeepers (such as media, NGOs, and government and academic institutions), while empowering the underdogs, has also, in another way, deeply disempowered underdogs. Dissidents can more easily circumvent censorship, but the public sphere they can now reach is often too noisy and confusing for them to have an impact. Those hoping to make positive social change have to convince people both that something in the world needs changing and there is a constructive, reasonable way to change it. Authoritarians and extremists, on the other hand, often merely have to muddy the waters and weaken trust in general so that everyone is too fractured and paralyzed to act. The old gatekeepers blocked some truth and dissent, but they blocked many forms of misinformation too.

How a Fake Group on Facebook Created Real Protests

Sheera Frenkel reports on a now-deleted Facebook page called Black Elevation, which organized rallies, posted videos, and spoke out about racism. In fact, it was part of the influence operation that Facebook revealed last month:

The Black Elevation organizers may have been trying to slide into the real world by hiring event coordinators or trying to persuade real activists to identify themselves as members of Black Elevation.

Mr. Nimmo said all of the pages Facebook recently removed were aimed at left-wing activists in the United States. It is possible, he added, that a similar influence campaign has been focusing on right-wing activists.

Americans don’t think the platforms are doing enough to fight fake news

Daniel Funke reports on a new survey published by Gallup and the Knight Foundation.

The report, based on web surveys from a random sample of 1,203 U.S. adults, found that 85 percent of Americans don’t think the platforms are doing enough to stop the spread of fake news. Additionally, 88 percent want tech companies to be transparent about how they surface content, while 79 percent think those companies should be regulated like other media organizations — a common trope among journalists.

That’s despite the fact that the majority of people surveyed (54 percent) said social media platforms help keep them informed and that they’re concerned about those companies making editorial judgments.

Transgender Girl, 12, Is Violently Threatened After Facebook Post by Classmate’s Parent

An Oklahoma school shut down after a Facebook group led to violent threats against a transgender student, Christina Caron reports:

A 12-year-old transgender student in a small Oklahoma town near the Texas border was targeted in an inflammatory social media post by the parents of a classmate, leading to violent threats and driving officials to close the school for two days.

It all started on Facebook. Jamie Crenshaw, whose children attend public schools in the town, Achille, complained in a private Facebook group for students’ parents that the transgender girl, Maddie, was using a bathroom for girls.

Elsewhere

WhatsApp Co-Founder’s ‘Rest and Vest’ Reward From Facebook: $450 Million

Jan Koum has the best job in the world and it’s not even close. Bless Deepa Seetharaman and Kirsten Grind for this:

After WhatsApp co-founder Jan Koum announced he was leaving Facebook Inc. in late April, he has continued showing up at least monthly at the social-media giant’s headquarters in Menlo Park, Calif. His incentive for making the appearances: about $450 million in stock awards, according to people familiar with the matter.

Mr. Koum’s unusual arrangement with Facebook is one of the more lucrative examples of a Silicon Valley practice sometimes called “rest and vest,” in which the holders of stock grants are allowed to stick around until they qualify to collect a sizable portion of their shares.

Meet The People Who Spend Their Free Time Removing Fake Accounts From Facebook

Craig Silverman introduces us to some heroes of the social realm. (Incidentally, they do not seem particularly impressed with Facebook’s efforts on this front. “It seems like every time we tell them something, they had no idea or didn’t know that was possible,” Denny said. “You can’t tell me that you don’t know some of this. I mean, this is your business, right? This is stuff me and Kathy are doing in our spare time because we are committed to it at this point. But every time Kathy tells them something, it’s like a revelation.”)

Kathy Kostrub-Waters and Bryan Denny estimate they’ve spent more than 5,000 hours over the past two years monitoring Facebook to track down and report scammers who steal photos from members of the US military, create fake accounts using their identities, and swindle unsuspecting people out of money.

During that time they reported roughly 2,000 fake military accounts, submitted three quarterly reports summarizing their findings to Facebook, and even met with Federal Trade Commission, Pentagon, and Facebook employees to talk about their work.

Google-Facebook Dominance Hurts Ad Tech Firms, Speeding Consolidation

The Google-Facebook advertising duopoly has led to consolidation in the ad tech industry, Claire Ballentine reports.

Instagram users are reporting the same bizarre hack

There’s some sort of ongoing Russian attack on individual Instagram accounts, Karissa Bell reports:

Megan and Krista’s experiences are not isolated cases. They are two of hundreds of Instagram users who have reported similar attacks since the beginning of the month. On Twitter, there have been more than 100 of these types of anecdotal reports in the last 24 hours alone. According to data from analytics platform Talkwalker, there have been more than 5,000 tweets from 899 accounts mentioning Instagram hacks just in the last seven days. Many of these users have been desperately tweeting at Instagram’s Twitter account for help. 

Amazon Has YouTube Envy

Amazon-owned Twitch is ramping up competition with YouTube, Lucas Shaw reports:

Amazon in recent months has been pursuing exclusive livestreaming deals with dozens of popular media companies and personalities, many with large followings on YouTube. Twitch is offering minimum guarantees of as much as a few million dollars a year, as well as a share of future advertising sales and subscription revenue, according to several people who’ve been contacted by Twitch.

Launches

People Raise $300M Through Birthday Fundraisers in First Year

Birthday fundraisers in the News Feed are more than just an engagement hack — they’ve also raised $300 million for charity in a year, Facebook said today. The company also announced some user interface upgrades that show you more information about the nonprofits you’re donating to.

Takes

Twitter’s Misguided Quest to Become a Forum for Everything

John Herrman says Twitter’s notion of a universal public square is hopeless:

On Twitter, it may seem that you are talking to friends or peers, and that the space is controlled or even safe. But it’s not: It’s shared with and extremely vulnerable to those with a desire to disrupt or terrorize it. In order to function, Twitter must make its users feel at home in the most public space devised by humankind. The platform can’t easily say what smaller intentional forums can: “We don’t want this here; you’re violating the spirit of our community; go away.” It is too big, with too many people present for too many different reasons, to be a site for any one sort of conversation. It exercises absolute authority over its service, of course, but must pretend to do so carefully, sparingly and only when forced to.

And former (I think?) Twitter employee Jared Gaut has a thread worth reading on why he’s taking a break from the service in the wake of Alex Jones-related inaction:

And finally ...

Jerry Seinfeld Says Jokes Are Not Real Life

Dan Amira asks Jerry Seinfeld why he doesn’t tell jokes on Twitter:

I don’t hear the laugh. Why waste my time? It’s a horrible performing interface. I can’t think of a worse one. I always think about people that write books. What a horrible feeling it must be to have poured your soul into a book over a number of years and somebody comes up to you and goes, “I loved your book,” and they walk away, and you have no idea what worked and what didn’t. That to me is hell. That’s my definition of hell.

Welcome to hell, Jerry!

Talk to me

Send me tips, questions, comments, human rights audits: casey@theverge.com.