The Downside of Policing Violent Videos

Here’s why I’m not totally thrilled that Facebook and others will get better at quickly removing disturbing content.
Illustration: Li-Anne Dias

Hi, folks, Steven here. Earlier this week, I cited Peter Thiel’s now-iconic complaint: “We were promised flying cars, and instead what we got was 140 characters.” I used the quote to tee up an interview regarding flying cars — which now seem to be imminent, for better or worse — but didn’t examine the second part of the sentence, which implied that “140 characters” was a frivolous and trivial advance. Actually, the ability to tap out a short message from anywhere in the world that could instantly reach a potential audience of many millions is an astounding power. Certainly Mark Zuckerberg understands this, as he has guided Facebook more and more toward becoming a public platform. And by integrating Facebook Live into his service, Zuckerberg allows his two billion users to become broadcasters. But on Facebook Live and similar instant-broadcast online products, there is no seven-second delay that gives a full-time content cop a chance to block inappropriate video streams. Such a solution doesn’t scale, so the fallback is a multi-pronged effort that relies on users to report offending posts, algorithms to flag more of them, and human monitors to evaluate what surfaces. Nonetheless, when people post can’t-unsee images such as murders, beatings, rape, or suicide, horrific scenes circulate for a stretch of time that, whether measured in minutes or hours, is always too long.
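For the technically curious, that multi-pronged pipeline can be caricatured as a priority queue that blends user reports with a classifier’s score, so human reviewers see the worst candidates first. Everything in this Python sketch, the names and weights included, is my invention, not anything Facebook has disclosed:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedPost:
    sort_key: float = field(init=False)        # heapq pops the smallest key first
    post_id: str = field(compare=False)
    report_count: int = field(compare=False)   # how many users hit "report"
    model_score: float = field(compare=False)  # classifier's 0-to-1 "disturbing" estimate

    def __post_init__(self):
        # Blend user reports with the model's score; the 0.1 weight is an
        # arbitrary placeholder, not a disclosed parameter.
        self.sort_key = -(self.model_score + 0.1 * self.report_count)

queue = []

def flag(post_id, report_count, model_score):
    """Called as user reports and classifier scores come in."""
    heapq.heappush(queue, FlaggedPost(post_id, report_count, model_score))

def next_for_review():
    """Human moderators pull the most urgent item off the top."""
    return heapq.heappop(queue) if queue else None

flag("live-123", report_count=40, model_score=0.92)
flag("live-456", report_count=2, model_score=0.30)
print(next_for_review().post_id)  # -> live-123, the stream needing urgent attention
```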

Of course, this issue isn’t limited to live feeds — it applies to uploaded content as well, and even to vicious text comments. Very bad behavior is the bane of all services that welcome contributions from unvetted communities. Facebook, with the biggest audience and a mission based on sharing, is the most exposed, and Zuckerberg addressed the situation last week. “If we’re going to build a safe community,” he wrote in a post, “we need to respond quickly.” He promised to hire 3,000 new people to work in the company’s community operations — a significant jump from the current 4,500.

Lately, there has been a spate of stories about the plight of those whose jobs require them to view a constant stream of nightmare images; as you might expect, they suffer painful aftereffects, even years after leaving the job. Not ideal. Everyone seems to agree that the ultimate burden of policing content at scale will fall on artificial intelligence — powerful deep-learning networks that can effectively scan billions of videos and figure out which ones will give us nightmares, or require instant police action. (Forget the Turing Test: We’ll know computers have consciousness when the first one sues for having PTSD.) But it will be quite a while before AI programs can make the subtle distinction between disturbing videos we need to see (unjust killings with social import) and those we don’t (a parent killing his kids), so for now humans will work in concert with those programs to try to minimize false positives.
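What might “working in concert” look like? One common pattern (an assumption on my part, not a description of Facebook’s actual system) is simple threshold routing: the machine acts alone only when it is nearly certain, sends the gray zone to a human, and leaves the rest up. A minimal sketch:

```python
def route(model_score, auto_block=0.98, needs_human=0.5):
    """Decide what to do with a flagged video given the classifier's
    confidence. The thresholds are hypothetical: auto_block is set high
    so the machine acts alone only when nearly certain, which is one way
    to keep false positives down."""
    if model_score >= auto_block:
        return "remove"        # machine is confident enough to act alone
    if model_score >= needs_human:
        return "human_review"  # ambiguous: a person makes the final call
    return "leave_up"          # probably benign; no action taken

assert route(0.99) == "remove"
assert route(0.70) == "human_review"
assert route(0.10) == "leave_up"
```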

It’s almost impossible to totally eliminate terrible content in a huge open network. But I’m pretty confident that the incredible pressure on Facebook and other companies will lead them to find the right combination of humans and AI to dramatically cut the time it takes to identify and block inappropriate posts, and even identify some potential suicides in time to alert authorities.

It seems churlish to worry about a downside to this effort. But the innovations required to identify and eliminate disturbing content on social networks might also be of great value to companies and institutions that impose censorship on their subjects.

I am reminded of a 2006 House Subcommittee on Human Rights hearing on technology companies adhering to Chinese rules when entering that market. Google had introduced its search engine, subject to Chinese censorship, and Congressman Jim Leach asked its representative how it identified the page links it withheld from users. (Because the Chinese didn’t provide a set of forbidden sites, companies had to figure out for themselves which content would violate the censor’s standards.) The answer involved a clever scheme whereby Google fed keywords into existing search engines like Baidu and saw which ones were blocked. It was the kind of out-of-the-box solution that Google hires engineers to invent. Leach was appalled. “So if this Congress wanted to learn how to censor,” he said, “we would go to you, the company that should symbolize the greatest freedom of information in the history of man?”
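The probing technique Leach was reacting to can be sketched in a few lines of Python. This is strictly illustrative: the endpoint, query parameter, and blocked-detection heuristic below are my assumptions, the probe would have to run from inside the censored network, and real firewalls fail in subtler ways than a simple connection reset.

```python
import requests

def appears_blocked(keyword, search_url="https://search.example/s"):
    """Probe a search engine (hypothetical URL here; Google reportedly
    probed engines like Baidu) and infer censorship from the failure mode."""
    try:
        # The query-parameter name varies by engine; "q" is a placeholder.
        resp = requests.get(search_url, params={"q": keyword}, timeout=5)
        return resp.status_code != 200
    except requests.exceptions.RequestException:
        # Connection resets and timeouts are how keyword filtering often
        # manifests from inside a censored network.
        return True

for word in ["keyword_a", "keyword_b"]:  # innocuous placeholders
    print(word, "blocked?", appears_blocked(word))
```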

Of course, Facebook’s efforts aren’t meant to promote censorship: No one should be unexpectedly exposed to disturbing images. But as our best minds become devoted to identifying and removing specific content at scale, let’s also recognize that it’s more important than ever to make sure we elect and support leaders who won’t use those tools against us.

Here’s some of what we published on Backchannel this week:

Melinda Gates and Fei-Fei Li Want to Liberate AI from “Guys With Hoodies.” The potential to help censors is only one of a long list of unintended consequences of AI. We’d do a better job of avoiding them if the engineers involved were a diverse pool. Melinda Gates and celebrated computer scientist Fei-Fei Li recently sat down with our Jessi Hempel to call out AI’s diversity problem and suggest some remedies.

Sebastian Thrun Defends Flying Cars to Me. Here’s that flying cars piece I mentioned earlier. I put Sebastian Thrun, CEO of the Larry Page-funded Kitty Hawk flying-car startup, on the grill, and he sportingly handled my cranky questions about whether personal airborne transportation is just a sci-fi billionaire fetish.

Thousands of Veterans Want to Learn to Code — But Can’t. You’d think an obvious solution to the scandalous unemployment rate for veterans would be a massive effort to teach them computer skills. But the GI Bill won’t pay for enrollment in the coding academies that have sprung up to fill a need for tech workers. Andrew Zaleski profiles a vet who is taking on this problem. Backchannel is proud to publish important, underreported stories like these, instead of churning out generic articles about earnings results.

This isn’t our only newsletter!

Check out our new one. It’s a Tuesday note that zeroes in on a “person of interest” that you absolutely must know about. Miranda Katz will also connect you to the must-reads, on Backchannel and elsewhere, that will make you more informed than the poor souls who have yet to receive this gem in their inbox. But you have to sign up to get it, even if you currently receive the one you’re reading now.