To Fix Its Toxic Ad Problem, Facebook Must Break Itself

Facebook stress-tests its tech. It could do the same for its moral compass.

It is a sure sign that Facebook’s algorithms have run amok when they allow anyone to target ads to people with an expressed interest in burning Jews. Likewise, when Russians can sow chaos in American elections by purchasing thousands of phony Facebook ads without Facebook realizing it, the automated systems selling those ads may need some oversight.

Two incidents in recent weeks have highlighted how Facebook’s advertising network---the cornerstone of its half-trillion-dollar valuation---is as susceptible to manipulation and bigotry as its news feed. Facebook addresses each problem as it arises, in isolation. But maybe it’s time for Facebook to acknowledge that it can’t solve these problems alone and to ask for help---before governments offer their own “help.”

In academia and other corners of tech, peer review is the norm. Cybersecurity companies hire outsiders to poke holes in their infrastructure and find vulnerabilities they may have missed. They don’t view that as sacrificing trade secrets or spilling their special sauce. If anything, they view this extra vetting as a competitive advantage.

Compare that to Facebook’s approach. “No outside researcher has ever had that kind of access,” says Antonio García Martínez, a former Facebook ads product manager. “That’s not the mode Facebook likes operating in.”

Instead, Facebook prefers operating as an island, free to make its own rules and patch its own flaws, which it tends to do piecemeal. After ProPublica found that Facebook created violently anti-Semitic ad categories based on the content of user profiles, the company removed those categories. Facebook also said it would stop using content in users’ profiles to target ads until it has “the right processes in place to help prevent this issue.” A week earlier, after Facebook discovered some 5,000 political ads linked to Russian actors, the company shut down the accounts that had bought the ads and vowed to explore “improvements to our systems for keeping inauthentic accounts and activity off our platform.”

Through all of these vows, Facebook has remained characteristically secretive. It has declined to share much about the Russia-linked ads, though the Wall Street Journal reported Friday that the company gave detailed records about the purchases to special counsel Robert Mueller. A Facebook spokesperson declined to comment on whether the company had taken any steps to root out bigoted ad categories before the ProPublica report. That leaves open two possibilities: either Facebook didn’t realize its technology could be used for that purpose, or it knew and did nothing about it.

Facebook may want to be an island, but it’s an island on which 2.2 billion people live, and it has proven time and again that it is either incapable of or uninterested in anticipating such vulnerabilities. There are ways Facebook could try to anticipate them. The company stress-tests its tech. It could do the same for its moral compass.

“You need to have a mindset that you want to break the system,” says Suresh Venkatasubramanian, a computer scientist at the University of Utah who specializes in algorithmic ethics and transparency. “You’re not attacking it to make the system fall apart. You attack it from the perspective of: What can we ask the system to do that, if a human did it, would seem racist or vile?”

Venkatasubramanian suggests Facebook set up a “red team” of researchers to investigate the unintended consequences of its technology. When it identifies and fixes a problem, it should reveal how it reached its conclusions, so other researchers can help the company identify potential blind spots.

That would require a massive cultural shift within the company. Facebook’s first line of defense when it comes to these issues is almost always to dismiss them as fringe cases. “An extremely small number of people were targeted in these campaigns,” the company noted in its blog post about the anti-Semitic ad categories.

That’s often true. García Martínez calls the ProPublica findings a “total red herring” and “a stupid lark from ProPublica. Of course you could find this,” he says. “It doesn’t mean anybody actually was [targeted this way]. The reach was super small.”

But add up the small cases---$150,000 in a Russian influence campaign here, an ad targeted at a few thousand Nazis there---and it can feel like Facebook has planted a million landmines around the internet and left them to explode while it tends to greener pastures. Unless Facebook starts to think of these problems as endemic, it may never take adequate steps to mitigate them.

That’s not to say it would be easy or that Facebook is unique among internet companies. BuzzFeed News found Friday that Google lets people advertise against terms like “black people ruin neighborhoods.” During the 2016 election, money from Google advertising helped pay for a slew of fake news sites run by Macedonian teenagers. Many, if not all, automated ad platforms can be exploited for malicious purposes. That’s all the more reason why Google, Facebook, and others should share more information about how they’re attacking the problem.

Otherwise, regulators may well force their hands. Democratic Senator Mark Warner and others want Facebook and other tech companies to testify about Russian interference in the 2016 election. He says the revelations about the Russian ads open “a whole new arena.”

García Martínez thinks digital political ads should be regulated the way political ads on television are, including disclaimers about who’s paying for the ad. “Facebook needs to know its customer,” he says.

That wouldn’t have stopped a Russian propaganda group from buying ads related to hot-button social issues that weren’t specifically about the election. But García Martínez says Facebook could develop the technology to flag politically charged ads for extra scrutiny, the same way it flags ads related to alcohol. “If you tried advertising alcohol in Ohio to teens?” he says. “Banned.”

Facebook could set up similar systems for bigoted content in ads. Until now, it’s shown little interest. More rigorous vetting would mean taking more than 15 minutes to approve or reject an ad. It would also require more human beings. But Facebook didn’t become a $497 billion social-networking giant by relying on humans. It did so by allowing machines to take over our world.