The Industry

Google Was Never Neutral

An interview on what President Trump gets right and wrong about Google’s biases.


On Tuesday morning, President Trump took to Twitter to accuse Google of suppressing conservative media outlets in its search results.

The basis for his allegations appears to have been an article from the conservative news site PJ Media, which alleged that a Google search for news on “Trump” was ranking “left-leaning outlets”—like CNN and the Washington Post—higher than “right-leaning sites.” By Tuesday afternoon, Trump had also warned Twitter and Facebook that they “better be careful” and are “treading on troubled territory” when it comes to their treatment of conservative views. Larry Kudlow, director of the National Economic Council, also told reporters that the administration is “taking a look at” possible regulations for Google to address its alleged bias. Google has denied that there is any political agenda in its search services.

To get a better sense of whether Trump’s accusations are well-founded, and whether there is in fact any bias in Google’s search algorithms, Slate talked with Tim Hwang, who directs the Harvard-MIT Ethics and Governance of AI Initiative and was previously the global public policy lead for machine learning at Google.

This interview has been condensed and lightly edited for clarity.

Slate: Is there any merit to the complaints in President Trump’s tweets?

Tim Hwang: Well, I think it’s really interesting. I was immediately struck by what has been a longstanding technology industry talking point. When these issues have come up in the past around “Is Google’s algorithm biased?” or “Is Facebook’s algorithm biased?” the tech companies have consistently said, “Well, we don’t want to be an arbiter of truth” or “We try to be as neutral a platform as possible.” I was struck by the fact that while those arguments seemed to work as recently as a few years ago, they’re increasingly ringing hollow, not just with conservatives but on the liberal side of things as well. So what I think we’re seeing here is this view becoming mainstream that these platforms are in fact not neutral, and that they are not providing some objective truth. And of course the big challenge here is: What would a better system look like?

The president’s views are a little bit overblown, because the question is whether there is a conspiracy to promote certain liberal beliefs or whether the act of selecting content embeds certain types of biases in it. That is actually a really important distinction. The president’s tweet is picking up on a good point, but I don’t take as conspiratorial a view as he does on the question of whether it’s biased or not. I don’t think the question is whether or not it’s biased; all these systems embed some kind of bias. The question is: Do we have transparency into how some of these decisions are being made?

What kind of biases do you see in Google’s algorithms?

Google’s PageRank, for example, is based very much on the links between different web pages on the internet. Relevance is based on whether or not someone has chosen to link to someone else. Does that mean there’s a bias toward things that are, say, more sensational? I think that’s totally a possibility. That’s one clear example of bias.
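To make the link-based idea concrete, here is a minimal sketch of a PageRank-style calculation. It illustrates the general mechanism Hwang describes, not Google’s actual ranking code; the tiny web graph, damping factor, and iteration count are all invented for the example.

```python
# A minimal, illustrative PageRank-style calculation. This sketches the general
# idea of link-based ranking only; it is not Google's algorithm, and the graph,
# damping factor, and iteration count below are made up for the example.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for other in pages:
                    new_rank[other] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A tiny hypothetical web: the heavily linked-to page ends up ranked highest,
# regardless of what its content actually says.
web = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
print(sorted(pagerank(web).items(), key=lambda kv: -kv[1]))
```

The point of the toy example is that the ranking falls out of linking behavior alone, which is exactly where a bias toward heavily linked, possibly more sensational content can creep in.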

Another one: After responding to a lot of complaints about the quality of information coming through these platforms, a lot of them—Facebook and Google included—have said that they want to improve the quality of information. One way they’ve done that is to say, “OK, here is a set of websites that we think are more credible than others.” That choice is inherently a subjective act: deciding which sites belong in the set and which don’t. In many cases, this is done in good faith. They’re asking, “How are we going to respond to public concern about this issue?” But no matter how you slice it, it imposes some point of view, some lens on the world, about what is and is not credible.
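A hypothetical sketch of the kind of curated-list reranking Hwang is describing: the outlet names, relevance scores, and boost factor below are all made up, and nothing here reflects how Google or Facebook actually implement such lists.

```python
# A hypothetical illustration: once a platform hand-picks a set of "credible"
# outlets and boosts them, the curators' judgment is baked into every ranking.
# The domains, scores, and boost factor here are invented for the example.

CREDIBLE_SOURCES = {"example-wire.com", "big-paper.com"}  # the subjective choice

def rerank(results, boost=1.5):
    """results: list of (url, base_relevance_score) pairs."""
    def adjusted(item):
        url, score = item
        domain = url.split("/")[2]
        return score * (boost if domain in CREDIBLE_SOURCES else 1.0)
    return sorted(results, key=adjusted, reverse=True)

results = [
    ("https://small-blog.net/story", 0.80),
    ("https://big-paper.com/story", 0.70),
]
print(rerank(results))  # the boosted outlet now comes out on top
```

However well-intentioned the list, the example shows how its contents, not just the underlying relevance signal, decide what users see first.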

Is there a good reason for the level of secrecy Google maintains around its search algorithms? Do you buy the argument that the secrecy is needed to prevent people from gaming the system?

That is the classic argument. In my opinion, I don’t think so. There are many ways of improving transparency that don’t necessarily require you to give up the exact code that would allow people to game the system. Now, it’s a real concern. I don’t think they’re just fronting. But that doesn’t mean it’s impossible to give the public more of an understanding of the way these systems work. I tend to be skeptical of those arguments.

But Google could reveal more than it has already?

For sure. Even the list of what they consider credible outlets is a really interesting and important thing for them to talk about and be more transparent about. There are obvious reasons why they don’t: They feel like people would take it as conclusive evidence that they’re biased in some way. But I think we have to move away from a world in which these platforms feel like they can ever return to being viewed by the public as completely neutral, and instead opt for a greater demonstration of their process and workflow, recognizing that they can’t be fully objective and neutral in some perfect way.

Is there a reason why you might see conservative sites ranked lower in the Google search results?

This is what I mean by the conspiratorial view. There’s one point of view, which is: OK, why do we see those search results? Well, one of those reasons might just be that there’s a cabal of liberal illuminati at Google headquarters plotting to push this content lower in the rankings. That’s one interpretation.

Another interpretation, which I think is a lot more believable, is that rankings are based on things like page views. It’s possible that a lot of the news outlets that are really getting a huge amount of distribution and traction online, based on people’s sharing behavior, based on people’s browsing behavior, tend to be a set of larger media outlets. So there could be a difference in size that actually produces what appears to be a political bias in the results. Now, this is what I’m talking about with transparency: We don’t really know. I’m giving you two explanations for why this might be the case, and in the absence of greater transparency, what we get are more conspiratorial explanations for what’s going on.
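As a rough illustration of that second explanation, here is a toy example in which results are ordered purely by audience size, with no political signal anywhere in the code; the outlets, labels, and traffic numbers are fabricated.

```python
# A toy illustration of the popularity explanation: if rankings track audience
# size (shares, views), larger outlets rise to the top whatever their politics.
# Every outlet, lean label, and traffic figure below is invented.

outlets = [
    {"name": "Large Outlet A", "lean": "left",  "monthly_shares": 9_000_000},
    {"name": "Large Outlet B", "lean": "left",  "monthly_shares": 7_500_000},
    {"name": "Large Outlet C", "lean": "right", "monthly_shares": 6_000_000},
    {"name": "Small Outlet D", "lean": "right", "monthly_shares": 400_000},
    {"name": "Small Outlet E", "lean": "left",  "monthly_shares": 300_000},
]

# Rank purely by popularity; political lean is never consulted.
ranked = sorted(outlets, key=lambda o: o["monthly_shares"], reverse=True)
for position, outlet in enumerate(ranked, start=1):
    print(position, outlet["name"], outlet["lean"])
```

If the biggest outlets happen to skew one way politically, the output looks slanted even though the sort key is nothing but traffic, which is precisely why outside observers cannot distinguish the two explanations without more transparency.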

This allegation of bias has become pretty common against Facebook and Twitter nowadays. Has Google in the past dealt with allegations of bias in how it ranks results, or is this new for the company?

It has, actually. The question of whether the Google algorithm gives good results, fair results, or socially desirable results has been a long-standing one for the company. For example, the Google antitrust proceeding in Europe over search, at least, really focuses on this issue. A lot of people have sliced it in different ways. Some people say the problem with Google is that it only gives you things it thinks you want to see, and so it generates filter bubbles. What we’re seeing now is a new kind of critique, which is that it’s promoting a certain political worldview. I think that is relatively more recent.

How do we address the issue of platform bias? Trump and others have been suggesting government regulation may be in order. Do you think that’s sensible?

I tend to be wary about where all this goes. There are a lot of reasons to disagree with Trump, and this may be another one. Do we feel that the administration would do a responsible job of articulating what should and should not be available on these platforms? The danger in saying that platforms shouldn’t be the arbiter of truth is that you’re simultaneously saying that government authorities should be. That’s not a situation we should be pushing toward. So we’re left in a very interesting place: If we don’t think the platforms are doing a good job, and we don’t want the government to act, then what do we do about this?

One of the things I’ve been thinking a lot about is taking a different approach to some of these questions. In a lot of cases, the question has been binary: Do we leave the world completely open, or do we completely close it up?

In the 1930s, we actually had this debate about financial markets. Should we have a completely free market, or should we have state control? The thesis back in the day, at the time of the Great Depression, was: Don’t mess with the market, because you’ll inevitably cause it to blow up, or you’ll make the Depression even worse. The middle ground we came to in financial markets is what we call Keynesianism. There, the notion is that the market works, but sometimes it can get itself into self-reinforcing cycles that are really negative. At that point, we actually do want the state to intervene.

That may be one way of thinking about what regulation should look like in the tech space. Basically, can we define what a crisis looks like in the marketplace of ideas? And then, under what circumstances would we want the government to intervene? It’s much more of a crisis-based approach, which is to say, “Under what circumstances do we want this intervention to occur?” rather than, “The platforms can’t do it well, so we’re just going to have the government decide.” I really worry about solutions that lock us into permanent regulation of speech.