Troll Targets Say Twitter’s New Filters Don't Go Far Enough

Twitter's new tools seem to promise new ways to avoid abuse on the site. But that abuse is still just an unchecked notification away.

Twitter has heard the complaints about abuse on its platform—really, it has. It’s heard them so loud and clear that it’s rolling out new ways to help you shut those voices out.

At least, that’s the message Twitter sought to send yesterday, when it announced two new tools that give users more ways to control what tweets they see. The first lets you limit the @-reply notifications that buzz your phone to only the people you follow. The second is a so-called "quality filter," which sounds like a step toward finally curbing the mob-style harassment Twitter seems so optimized to enable. But some targets of that harassment aren't so sure it's a meaningful fix.

Twitter says flipping on the filter will "improve the quality of tweets you see by using a variety of signals, such as account origin and behavior." If that sounds vague, it’s on purpose—Twitter hesitates to share more for fear bad actors will try to game the system. So it's not clear whether the quality filter acts like an automatic spam filter or if a team of actual humans is manually curating a blacklist of abusive accounts—or both. But the setting, which has already been available to verified users, does seem to get rid of certain dubious content, such as duplicate tweets and bot spam. The filter specifically does not block tweets from people you follow or accounts with which you’ve recently interacted.

One broken aspect Twitter's new filters do not seem to address is the process by which users report abuse to the company and the all-too-opaque way the site reviews those complaints. And across the Twittersphere, that left users feeling ambivalent about Twitter's seeming nod toward fighting harassment.

What Lies Beneath

New York-based social worker and writer Feminista Jones uses an elaborate setup of filters and scripts to block the hate heaped on her as she blogs about black feminist issues. As a verified user, she already has the quality filter switched on, but she says it only helps if she's accessing Twitter via the web or the company's own mobile app. She says she uses other apps to track good mentions and keep up with her followers but winds up having to "catch a lot of the other crap" in the process. Jones says she uses a script to block Twitter "eggs"—accounts without profile pictures, typically set up in haste just to troll others. She also filters plenty of keywords from her timeline. Still, doing all that work isn’t always effective. "Many of the trolls know not to use certain words," she says.

Even setting aside trolls who find workarounds, filtering out abuse isn't really the same as eliminating it from the platform. "Hiding notifications from people I don’t follow narrows my ability to engage," says Jamilah Lemieux, senior editor at Ebony Magazine, of Twitter's new notification filter. For a public figure like herself, whose job often involves connecting with people she doesn't know, eliminating that contact altogether doesn't work.

Ariel Waldman, a blogger and erstwhile online community manager who has called Twitter out in the past for failing to police abuse on its platform, says the company's latest response isn't enough.


Yes, the new features solve some of the exhaustion of using Twitter every day, she says. But they still allow harassment, which can lurk just beneath the surface in an unchecked notification.

It's the persistence of those tweets that creates the ultimate existential dilemma for Twitter: does abuse truly cease to exist—or to matter—if its targets no longer see it? How Twitter chooses to answer that question ultimately goes to how it views itself: is it merely infrastructure, like a telephone line, or does it have some greater responsibility to police how people use its platform? "As much as Twitter seemingly likes to argue to the contrary, Twitter at the end of the day is a product, not a telecommunication protocol," Waldman says.

Yet it's understandable that the company has hesitated to target many accounts, especially those with a political bent. After all, sometimes important, powerful movements emerge largely thanks to an unfettered Twitter—think the Arab Spring or Black Lives Matter. Other times, however, seemingly political speech morphs with alarming speed into personal attacks. Still, does anyone want a tech company deciding where that threshold lies?

“The problem isn't Twitter. The problem isn't social media. The problem is the hateful nature of so many people who feel emboldened by anonymity and simply use these tools to spew their hate,” says Jones. “Until we find a way to stop people from being so hateful, just in general, this won't go away.” In the meantime, Twitter is trying to help, she says.

But for the regular targets of hate on Twitter, certain kinds of speech just aren’t up for debate. "By allowing your product’s users to be continuously harassed, abused and intimidated and only offering them filtering as a solution, you’re effectively upholding an abuser’s playground," Waldman says. "It's actually giving them the steering wheel to censor speech on the product." If abuse forces others into silence, that's not really any kind of free speech at all.