Google Autocomplete Still Makes Vile Suggestions

The feature suggests that “Islamists are evil” and “Hitler is my hero,” among other offensive prompts.

In December 2016, Google announced it had fixed a troubling quirk of its autocomplete feature: When users typed the phrase "are jews," Google automatically suggested the question "are jews evil?"

When asked about the issue during a hearing in Washington on Thursday, Google's vice president of news, Richard Gingras, told members of the British Parliament, "As much as I would like to believe our algorithms will be perfect, I don't believe they ever will be."

Indeed, almost a year after removing the "are jews evil?" prompt, Google search still drags up a range of awful autocomplete suggestions for queries related to gender, race, religion, and Adolf Hitler. Google still appears unable to effectively police results that are offensive and potentially dangerous, especially on a platform that two billion people rely on for information.

Like journalist Carole Cadwalladr, who broke the news about the "are jews evil" suggestion in 2016, I too felt a certain queasiness experimenting with search terms like "Islamists are," "blacks are," "Hitler is," and "feminists are." The results were even worse. (And yes, the following searches were all done in an incognito window, and replicated by a colleague.)

For the term "Islamists are," Google suggested I might in fact want to search, "Islamists are not our friends," or "Islamists are evil."

For the term, "blacks are," Google prompted me to search, "blacks are not oppressed."

The term "Hitler is," autocompleted to, among other things, "Hitler is my hero."

And the term "feminists are" elicited the suggestion "feminists are sexist."

The list goes on. Type "white supremacy is," and the first result is "white supremacy is good." Type "black lives matter is," and Google suggests "black lives matter is a hate group." The search for "climate change is" generated a wide range of options for climate change deniers.

In a statement, Google said it would remove some of the above search prompts that specifically violate its policies. A spokesperson added, "We are always looking to improve the quality of our results and last year, added a way for users to flag autocomplete results they find inaccurate or offensive." A link that lets Google users report predictions appears in small grey letters at the bottom of the autocomplete list.

The company declined to comment on which searches it removed, but by Monday a quick audit revealed that Google had removed the predictions "Islamists are evil," "white supremacy is good," "Hitler is my hero," and "Hitler is my god." The rest of the predictions WIRED flagged apparently do not violate the company's policies and are still live. Even the now-edited predictions are far from perfect. "Islamists are terrorists" and "white supremacy is right," for instance, still stand.1

If there's any silver lining here, it's that the actual web pages these searches turn up are often less shameful than the prompts that lead there. The top result for "black lives matter is a hate group," for instance, is a Southern Poverty Law Center page explaining why it does not consider Black Lives Matter a hate group. That's not always the case, however. "Hitler is my hero" dredges up headlines like "10 Reasons Why Hitler Was One of the Good Guys," one of many pages Cadwalladr pointed out more than a year ago.

These autocomplete suggestions aren't hard-coded by Google. They're the result of Google's algorithmic scans of the entire world of content on the internet and its assessment of what, specifically, people want to know when they search for a generic term. "We offer suggestions based on what other users have searched for," Gingras said at Thursday's hearing. "It's a live and vibrant corpus that changes every day." Often, apparently, for the worse.
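To see why a purely popularity-driven system surfaces whatever people type most, consider a minimal sketch of the idea Gingras describes. This is not Google's actual system, which layers on trends, personalization, and policy filters; it simply ranks completions by how often they appear in a hypothetical log of past queries, with no judgment about what those queries say.

```python
# Hypothetical illustration, not Google's system: a naive suggester that ranks
# completions purely by how often other users have searched them.
from collections import Counter

def build_suggester(query_log):
    """Count how many times each full query appears in a log of past searches."""
    counts = Counter(q.strip().lower() for q in query_log)

    def suggest(prefix, k=3):
        prefix = prefix.strip().lower()
        # Every past query that starts with the prefix, ranked by raw popularity.
        matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
        matches.sort(key=lambda item: item[1], reverse=True)
        return [q for q, _ in matches[:k]]

    return suggest

# With no filtering layer, the most-typed queries win, whatever they say.
log = ["hitler is my hero", "hitler is my hero", "hitler is a dictator",
       "feminists are sexist", "feminists are right"]
suggest = build_suggester(log)
print(suggest("hitler is"))  # ['hitler is my hero', 'hitler is a dictator']
```

In a scheme like this, any moral grounding has to be bolted on afterward, which is exactly the kind of intervention discussed below.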

If autocomplete were exclusively a reflection of what people search for, it would have "no moral grounding at all," says Suresh Venkatasubramanian, who teaches ethics in data science at the University of Utah. But Google does impose limits on the autocomplete results it finds objectionable. It corrected suggestions related to "are jews," for instance, and fixed another of Cadwalladr's disturbing observations: In 2016, simply typing "did the hol" brought up a suggestion for "did the Holocaust happen," a search that surfaced a link to the Nazi website Daily Stormer. Today, autocomplete no longer completes the search that way; if you type it in manually, the top search result is the Holocaust Museum's page on combatting Holocaust denial.

Typically when Google makes these adjustments, it's changing the algorithm so that the fix carries through to an entire class of searches, not just one. "I don't think anyone is ignorant enough to think, 'We fixed this one thing. We can move on now,'" says the Google spokesperson.

But each time Google inserts itself in this way, Venkatasubramanian says, it raises an important question: "What is the principle they feel is wrong? Can they articulate the principle?"

Google does have a set of policies around its autocomplete predictions. Violent, hateful, sexually explicit, or dangerous predictions are banned, but those descriptors can quickly become fuzzy. Is a prediction that says "Hitler is my hero" inherently hateful, because Hitler himself was?

Part of Google's challenge in chasing down this problem is that 15 percent of the searches the company sees every day have never been searched before. Each one presents a new puzzle for the algorithm to figure out. It doesn't always solve that puzzle in the way Google would hope, so the company ends up having to correct these unsavory results as they arise.

It's true, as Gingras said, that these algorithms will never be perfect. But that shouldn't absolve Google. This isn't some naturally occurring phenomenon; it's a problem of Google's own creation.

The question is whether the company is taking enough steps to systematically fix the problems it has created, instead of tinkering with individual issues as they arise. If Alphabet, Google's parent company with a nearly $700 billion market cap, more than 70,000 employees, and thousands of so-called raters around the world vetting its search results, really does throw all available resources at eradicating ugly and biased results, how is it that over the course of just about a dozen searches, I found seven that were clearly undesirable, both because they're offensive and because they're uninformative? Of all the things I could be asking about white supremacy, whether it's "good" hardly feels like the most relevant question.

"It creates a world where thoughts are put in your head that you haven't thought to think about," Venkatasubramanian says. "There is a value in autocomplete, but it becomes a question of when that utility collides with the harm."

The autocomplete problem, of course, is just an extension of an issue that affects Alphabet's algorithms more generally. In 2015, during President Obama's time in office, if you searched "n***a house" in Google Maps, it directed you to the White House. In November, BuzzFeed News found that when users typed "how to have" on YouTube, which is also owned by Alphabet, the site suggested "how to have sex with your kids." In the aftermath of the deadly mass shooting in Las Vegas last year, Google also surfaced a 4chan page in its search results that framed an innocent man as the killer when people searched his name.

Predicting what fresh hell these automated systems will stumble upon next is a problem that's not limited to Alphabet. As ProPublica found last year, Facebook allowed advertisers to target users who were interested in terms like "jew hater." Facebook hadn't created the category intentionally; its automated tools had used information users wrote on their own profiles to create entirely new categories.

It's important to remember that these algorithms don't have their own values. They don't know what's offensive or that Hitler was a genocidal maniac. They're bound only by what they pick up from the human beings who use Google search, and the constraints that human beings who build Google search put on them.

While Google does police its search results according to a narrow set of values, the company prefers to frame itself as an impartial presence rather than an arbiter of truth. If Google doesn't want to take a stand on issues like white supremacy or Black Lives Matter, it doesn't have to. And yet, by proactively prompting people with those ideas, it already has.

1 Update 10:59 AM ET 02/12/18: This story has been updated to include the changes Google made to its autocomplete predictions.
