YouTube Is Banning Extremist Videos. Will It Work?

YouTube’s hate speech policy now targets videos promoting the superiority of one group over another, or discrimination based on gender, race, or sexual orientation.

YouTube is changing its hate speech policy to more effectively police extremist content, a move targeting the legions of neo-Nazis, conspiracy theorists, white supremacists, and other bigots who have long used the platform to spread their toxic ideologies. The move, announced Wednesday, follows years of criticism that the company had allowed the site to become a haven for hatemongers and media manipulators.

The new community guidelines ban videos that promote the superiority of one group over another based on a person’s age, gender, race, caste, religion, sexual orientation, or veteran status, the company announced on Wednesday. YouTube specified that the ban would also apply to all videos that espouse or glorify Nazi ideology, which the company called “inherently discriminatory.”

The announcement came hours after YouTube said it would not remove videos by Steven Crowder, a high-profile far-right creator who used slurs in videos attacking a Cuban-American journalist for Vox over his ethnicity and sexual orientation. YouTube said that while it found the language in Crowder’s videos “clearly hurtful,” the videos didn’t violate its policies on hate speech. Later Wednesday, YouTube said it would no longer allow Crowder to run ads alongside his videos; less than an hour after that, it said it would let him run ads again if he removed a link to a shirt with an offensive slogan from his channel.

In its announcement, YouTube also said it will remove videos that deny that mass shootings and other “well-documented violent events,” such as the massacre at Sandy Hook Elementary School or the Holocaust, took place. Shortly after the shooting at Marjory Stoneman Douglas High School in Parkland, Florida, in February 2018, a false video claiming the events were staged and that survivor David Hogg was a crisis actor became the top trending video on YouTube.

It’s difficult to assess how effective YouTube’s policies will be, as the company didn’t specify how it plans to identify the offending videos, enforce the new rules, or punish offenders. As the Crowder incident highlighted, YouTube has been inconsistent in enforcing its existing community guidelines.

“The devil is in the enforcement—well known white supremacists and hateful content creators remain on the platform even after this policy announcement,” said Henry Fernandez, senior fellow at the Center for American Progress and member of Change the Terms, a coalition of civil rights groups, in a statement. “In order to end hateful activities on their platform, we urge YouTube to also develop adequate means to monitor and enforce these new important terms.”

Rebecca Lewis, an online extremism researcher at Data & Society who has written extensively about YouTube, is skeptical. “It is extremely difficult not to see the new YouTube policies in part as a way to change a negative PR narrative after refusing to address the harassment faced by [the Vox journalist],” said Lewis on Twitter. “The platforms have become very good at issuing PR statements about proposed changes that don't ultimately have much effect.”

As of Wednesday afternoon, white nationalists James Allsup and Jared George, who runs a channel called “The Golden One,” said YouTube had prevented ads from appearing alongside their videos but had not banned them. The YouTube channels of David Duke, Richard Spencer, Lauren Southern, and many other white supremacist figures remain on the site.

YouTube did not respond to multiple requests for comment.

The ban will reportedly affect a broad swath of some of the most popular conspiratorial and bigoted content posted to the site, which has long been a source of controversy for YouTube. Videos claiming that Jews secretly control the world—which are common on the site and form the backbone of numerous virulent conspiracy theories, such as QAnon—will be removed, a YouTube spokesperson told The New York Times. The same goes for videos that claim women are intellectually inferior to men—a popular claim among misogyny-driven groups like the incel community or MGTOW—and videos that espouse white supremacy.

Many of the groups affected by YouTube’s announcement gained traction online in part from the platform’s recommendation algorithm, which critics say plunged users deeper into extremist rabbit holes by serving up an increasingly polarizing stream of fringe content. An analysis of more than 60 popular far-right YouTubers conducted by Lewis, the Data & Society researcher, last fall concluded that the platform was “built to incentivize” the growth of polarizing political influencers like those whose videos will likely be affected by this change.

“YouTube monetizes influence for everyone, regardless of how harmful their belief systems are,” Lewis wrote in the report. “The platform, and its parent company, have allowed racist, misogynist, and harassing content to remain online—and in many cases, to generate advertising revenue—as long as it does not explicitly include slurs. YouTube also profits directly from features like Super Chat”—a feature which allows users to pay to pin a comment to live streams—”which often incentivizes ‘shocking’ content.”

Notably, YouTube says its efforts to stem the spread of hate speech will go beyond increased moderation. The company says it will expand a system it began testing in January that limits recommendations of what it calls “borderline content,” which doesn’t violate its community guidelines but which YouTube has determined to be harmful.

YouTube says it will also begin promoting and recommending “authoritative” content from trusted sources, like news outlets and other experts, to users who interact with potentially problematic content. “For example, if a user is watching a video that comes close to violating our policies, our systems may include more videos from authoritative sources (like top news channels) in the ‘watch next’ panel,” YouTube said.

The company also noted that channels that repeatedly brush up against YouTube’s new hate speech policies won’t be able to run ads or use other monetization features like Super Chat.

Though the new rules are technically effective immediately, YouTube says that enforcement might be delayed as it adjusts its moderation efforts. The service said it will “be gradually expanding coverage over the next several months.”

“Context matters,” YouTube noted in a blog post on the announcement, “so some videos could remain up because they discuss topics like pending legislation, aim to condemn or expose hate, or provide analysis of current events.”

