The Toxic Potential of YouTube’s Feedback Loop

Opinion: I worked on AI for YouTube’s "recommended for you" feature. We underestimated how the algorithms could go terribly wrong.

From 2010 to 2011, I worked on YouTube’s artificial intelligence recommendation engine—the algorithm that directs what you see next based on your previous viewing habits and searches. One of my main tasks was to increase the amount of time people spent on YouTube. At the time, this pursuit seemed harmless. But nearly a decade later, I can see that our work had unintended—but not unpredictable—consequences. In some cases, the AI went terribly wrong.

Artificial intelligence controls a large part of how we consume information today. In YouTube’s case, users spend 700 million hours each day watching videos recommended by the algorithm. Likewise, the recommendation engine behind Facebook’s news feed drives around 950 million hours of feed time per day.

In February, a YouTube user named Matt Watson found that the site’s recommendation algorithm was making it easier for pedophiles to connect and share child porn in the comments sections of certain videos. The discovery was horrifying for numerous reasons. Not only was YouTube monetizing these videos, its recommendation algorithm was actively pushing thousands of users toward suggestive videos of children.

When the news broke, Disney and Nestlé pulled their ads off the platform. YouTube removed thousands of videos and blocked commenting capabilities on many more.

Unfortunately, this wasn't the first scandal to strike YouTube in recent years. The platform has promoted terrorist content, foreign state-sponsored propaganda, extreme hatred, softcore zoophilia, inappropriate kids content, and innumerable conspiracy theories.

Having worked on recommendation engines, I could have predicted that the AI would deliberately promote the harmful videos behind each of these scandals. How? By looking at the engagement metrics.

Anatomy of an AI Disaster

YouTube’s AI uses recommendation algorithms designed to increase the time people spend on the site. Those algorithms track and measure the previous viewing habits of the user—and of users like them—to find and recommend other videos they will engage with.
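To make that mechanism concrete, here is a minimal, hypothetical sketch of an engagement-driven ranker. The field names, weights, and scoring rule are invented for illustration—the real system is a large learned model trained on far richer signals—but the objective is the same: surface whatever the model predicts will keep this user watching.

```python
# Minimal, hypothetical sketch of an engagement-driven ranker.
# All names and numbers are invented for illustration.

def predicted_watch_minutes(user_history, peer_history, candidate):
    """Estimate how long this user will watch `candidate`, based on how long
    they (and users with similar histories) watched videos on the same topic."""
    def avg_minutes(history, topic):
        times = [v["minutes"] for v in history if v["topic"] == topic]
        return sum(times) / len(times) if times else 0.0

    own = avg_minutes(user_history, candidate["topic"])
    peers = avg_minutes(peer_history, candidate["topic"])
    return 0.7 * own + 0.3 * peers  # invented blend of personal vs. peer signal


def recommend(user_history, peer_history, candidates, k=3):
    """Return the k candidates with the highest predicted engagement.
    The objective is watch time -- not accuracy, quality, or safety."""
    return sorted(
        candidates,
        key=lambda c: predicted_watch_minutes(user_history, peer_history, c),
        reverse=True,
    )[:k]
```

Nothing in that objective distinguishes a harmless video from a harmful one; anything the model expects to hold a user’s attention scores well.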

In the case of the pedophile scandal, YouTube's AI was actively recommending suggestive videos of children to users who were most likely to engage with those videos. The stronger the AI becomes—that is, the more data it has—the more efficient it will become at recommending specific user-targeted content.

Here’s where it gets dangerous: As the AI improves, it will be able to more precisely predict who is interested in this content; thus, it's also less likely to recommend such content to those who aren't. At that stage, problems with the algorithm become exponentially harder to notice, as content is unlikely to be flagged or reported. In the case of the pedophilia recommendation chain, YouTube should be grateful to the user who found and exposed it. Without him, the cycle could have continued for years.

But this incident is just a single example of a bigger issue.

How Hyper-Engaged Users Shape AI

Earlier this year, researchers at Google’s DeepMind examined the impact of recommender systems, such as those used by YouTube and other platforms. They concluded that “feedback loops in recommendation systems can give rise to ‘echo chambers’ and ‘filter bubbles,’ which can narrow a user’s content exposure and ultimately shift their worldview.”

The researchers’ model didn’t take into account how the recommendation system influences the kind of content that's created. In the real world, AI, content creators, and users heavily influence one another. Because the AI aims to maximize engagement, hyper-engaged users are seen as “models to be reproduced,” and the algorithms will then favor the content of such users.

The feedback loop works like this: (1) People who spend more time on the platforms have a greater impact on recommendation systems. (2) The content they engage with will get more views/likes. (3) Content creators will notice and create more of it. (4) People will spend even more time on that content. That's why it’s important to know who a platform's hyper-engaged users are: They’re the ones we can examine in order to predict which direction the AI is tilting the world.
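As a rough illustration of those four steps, here is a toy simulation with two invented content categories. The numbers are made up; the only assumptions carried over from the argument above are that hyper-engaged users linger longer on the “divisive” category and that creators upload more of whatever gets surfaced.

```python
# Toy simulation of the four-step feedback loop; all quantities are invented.
supply = {"neutral": 100, "divisive": 10}        # videos available per category
avg_minutes = {"neutral": 4.0, "divisive": 9.0}  # hyper-engaged users linger on divisive clips

for step in range(1, 6):
    # (1)-(2) The recommender allocates exposure in proportion to the watch time
    # each category generates.
    watch_time = {c: supply[c] * avg_minutes[c] for c in supply}
    total = sum(watch_time.values())
    exposure = {c: watch_time[c] / total for c in supply}

    # (3) Creators follow the exposure: new uploads track what gets surfaced.
    new_videos = 50
    for c in supply:
        supply[c] += round(new_videos * exposure[c])

    # (4) A bigger divisive supply means more divisive watch time next round.
    print(f"step {step}: divisive share of exposure = {exposure['divisive']:.0%}")
```

Even though viewing habits are held fixed in this sketch, the divisive share of exposure climbs every round, because supply follows exposure and exposure follows watch time.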

More generally, it’s important to examine the incentive structure underpinning the recommendation engine. The companies employing recommendation algorithms want users to engage with their platforms as much and as often as possible because it is in their business interests. It is sometimes in the interest of the user to stay on a platform as long as possible—when listening to music, for instance—but not always.

We know that misinformation, rumors, and salacious or divisive content drives significant engagement. Even if a user notices the deceptive nature of the content and flags it, that often happens only after they've engaged with it. By then, it's too late; they have given a positive signal to the algorithm. Now that this content has been favored in some way, it gets boosted, which causes creators to upload more of it. Driven by AI algorithms incentivized to reinforce traits that are positive for engagement, more of that content filters into the recommendation systems. Moreover, as soon as the AI learns how it engaged one person, it can reproduce the same mechanism on thousands of users.
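Here is a minimal sketch of that timing problem, with invented numbers: by the time a flag arrives, the watch time that preceded it has already been counted in the content's favor, so a naive engagement score still comes out net positive.

```python
# Minimal sketch (invented events and weights) of why a late flag doesn't undo
# the engagement signal the algorithm has already banked.
events = [
    {"video": "rumor_clip", "watched_minutes": 12.0, "flagged": False},  # engaged first...
    {"video": "rumor_clip", "watched_minutes": 8.0,  "flagged": True},   # ...flag arrives after watching
    {"video": "calm_explainer", "watched_minutes": 3.0, "flagged": False},
]

def score_update(event, flag_penalty=5.0):
    """Hypothetical per-event score: watch time counts for, a flag counts against."""
    return event["watched_minutes"] - (flag_penalty if event["flagged"] else 0.0)

scores = {}
for e in events:
    scores[e["video"]] = scores.get(e["video"], 0.0) + score_update(e)

print(scores)  # {'rumor_clip': 15.0, 'calm_explainer': 3.0} -- the flagged video still wins
```

Real systems use far more sophisticated signals, but the ordering is the same: the positive signal is emitted during consumption, while the negative one arrives only afterward, if at all.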

Even the best AI in the world—the systems built by resource-rich companies like YouTube and Facebook—can actively promote upsetting, false, and useless content in the pursuit of engagement. Users need to understand the basics of how these systems work and view recommendation engines with caution. But such awareness should not fall solely on users.

In the past year, companies have become increasingly proactive: Both Facebook and YouTube announced they would start to detect and demote harmful content.

But if we want to avoid a future filled with divisiveness and disinformation, there's much more work to be done. Users need to understand which AI algorithms are working for them, and which are working against them.



