
How Facebook, Twitter rely on you to stop spread of mosque shooting video

Social media users make it their personal mission to get videos of the New Zealand terrorist attack deleted.

Queenie Wong
A woman holds her hat to her face as she pauses next to flowers laid near Al Noor mosque on March 17 in Christchurch, New Zealand. Fifty people are confirmed dead and 36 people were injured after shootings at two mosques on Friday. (Getty Images)

When Ruth Meeker heard that a gunman had opened fire inside a mosque in Christchurch, New Zealand, on Friday, she turned to social media site Twitter to learn more about the attack.

What she saw next horrified her. After clicking on a hashtag about the shooting, a gruesome video of the massacre that the alleged gunman livestreamed on Facebook started to play automatically.

Meeker reported the video to Twitter. But the Connecticut resident didn't stop there. For two days, she made it her mission to find and report more than 200 tweets, most of which included video of the shooting that killed 50 people and has been condemned as a terrorist attack. The 17-minute video also popped up on Instagram, YouTube, Reddit and 8chan, a fringe message board.

Living far from New Zealand, Meeker said reporting the video was her way of taking action.

"Part of it was not wanting other people to see [the video] because it's horrific and those people deserve dignity in death," Meeker said. "I don't feel like it's right to see that."


Twitter, Facebook and other social media sites have invested in artificial intelligence designed to flag and remove violent content. They've employed thousands of people, often contractors, who screen videos, photos and posts in an effort to catch the worst of them before they hit your feed. Facebook alone has 15,000 content reviewers throughout the world.

Facebook and Twitter declined to say how many user reports they received about the shooting video. Facebook said Saturday it removed 1.5 million videos of the attack in the first 24 hours and that more than 1.2 million were blocked at upload. YouTube said Friday it removed thousands of videos related to the shooting in 24 hours, but didn't respond to a question about how many user reports it received. Police in New Zealand didn't answer questions about how many instances of the video local authorities had reported to the companies.

Free-labor business model

What's clear is that plenty of offensive content slips through the tech companies' defenses, leaving lots for users, like Meeker, to report. Those users often make a personal mission of hunting down and reporting troubling content, like the New Zealand video. Videos of the mosque shooting, which police asked the public not to share, appeared almost as quickly as they were removed.

All of that is part of the companies' sophisticated strategy, some social media experts say, to get free labor while keeping people engaged.

"The platforms are encouraging users because Facebook's business model relies on their free labor," Jennifer Grygiel, assistant professor of communications at Syracuse University, who argues the companies could delay videos to protect younger users, wrote in an email. "It's also a PR strategy to make people feel that the company is working to address content issues."

Facebook, which bars terrorists and murderers from the platform, also prohibits users from promoting or publicizing violent crimes because of concerns about copycat behavior. The social network has also been removing edited versions of the video that don't show graphic content, out of respect for the victims' families and concerns of law enforcement. Twitter has rules against glorifying violence and depicting the death of an identifiable person, and it sometimes requires users to remove posts with excessively graphic violence.

Not all social media users agree the platforms should be scrubbed of violent content. Some users say sharing the graphic video provides a valuable public service and taking it down only fuels more demand for it. Others argue that sharing the video would provoke conversations about tolerance and gun violence.

"I posted it myself so that everyone can witness how a real mass shooting looks like and why we should work towards curbing hate towards others and banning the use of guns across the world," Brell Devyn, a Texas Twitter user, said in a direct message.

Concerned that Twitter would suspend other accounts she used for business, Devyn said she deleted the tweet with the video after users accused her of helping the shooter, disrespecting the victims, or posting the video for likes and retweets.

And some social media users aren't sure how to handle such content, often uploading the shooting videos and then having a change of heart.

Usama Qureshi, the CEO of herbal medicine maker Hamdard Pakistan, posted a clip of the shooting video on Twitter but quickly deleted it.

In a message, Qureshi said he saw the video after it was shared with him in a group on Facebook-owned WhatsApp. Qureshi said he thought posting the video would help catch the suspect, but then changed his mind after realizing the violent content could have a harmful effect on viewers.

Social media users reporting the video to get it deleted faced what seemed to be a never-ending task. That's no surprise given the amount of traffic the sites get. Facebook, the world's largest social network, has more than 2 billion users who log in to the site every month, and Twitter has 321 million monthly users. YouTube has nearly 2 billion monthly logged-in users.

Michael Swaim, of Los Angeles, said he spent about three hours looking through trending topics on Twitter and reported about 20 instances of the Christchurch shooting video. He said he hoped to "starve the shooter of oxygen" and encouraged other Twitter users to report the video.

Swaim said he understands that removing all the shooting videos is challenging, but he thinks social media companies, with their technology and resources, could do a better job of pulling down the content.

"It's hard to believe that with all the money these companies make," Swaim said, "they can't get ahead of it in a more timely fashion."

Chasing down the videos can be emotionally exhausting for users, a complaint that's also made by the moderators social media companies hire. A lawsuit seeking class action status was filed on behalf of Facebook content moderators in California.

Meeker, the Twitter user in Connecticut, said she also encountered the video on Facebook-owned Instagram. By then, reporting the video had started to take a toll on her mental health.

"I was getting to the point," she said, "where this wasn't healthy for me."