Why Won't Facebook Talk About How Often Its Algorithms Are Wrong?

This article is more than 5 years old.

Two weeks ago Facebook released yet another glossy marketing infographic site and video touting how its state-of-the-art technology, top engineers and teams of experts have made massive strides in conquering another scourge of the online world through the power of advanced algorithms. This past week its EMEA counterterrorism lead announced that its algorithms now delete 99% of all ISIS and al-Qaida terrorism content across the site. As with all of Facebook’s announcements to date, neither proclamation made any mention of how often the algorithms that increasingly control its platform are wrong, or whether they are actually right more often than they are wrong. After initially promising to provide a response, the company once again declined to comment on the false positive rates of its algorithms, or on why, despite repeated requests, it continues to refuse to release those numbers. Why is the company so afraid to talk about whether its algorithms are actually accurate?

Like many online companies, Facebook is increasingly relying on automated machine learning algorithms to operate its platform, from deciding what content its users see to determining what constitutes acceptable speech to actively deleting content those algorithms believe to be in violation of its rules. The problem is that while the company prominently touts its rapidly expanding deployment of those algorithms and frames them as a massive success story, it has at the same time steadfastly refused to comment on whether the algorithms are actually accurate.

With its most recent misinformation announcement, when asked about its false positive rate, the company offered unrelated statements such as “we measure the views of links that make claims debunked by fact-checkers. This number is decreasing” and “we started an initiative to better understand how people decide whether information is accurate based on the news sources they depend on.” Yet when asked in the past for even the most rudimentary details on how it is using that data to train its algorithms and how it has accounted for demographic bias, the company declined to comment.

At the same time, senior company officials give conflicting statements about the success of their algorithms. This past week Facebook’s Counterterrorism Policy Lead for EMEA, Dr. Erin Marie Saltman, spoke at the International Homeland Security Forum in Jerusalem, where she offered that “99% of terrorist content from ISIS and al-Qaida we take down ourselves, without a single user flagging it to us.” Her comments echo those of Facebook’s Head of Communications and Policy Elliot Schrage, who offered the same success story to European officials this past January.

These statements, in turn, spread through the press, including the New York Times’ coverage of Facebook’s AI efforts, contributing to the public perception that Facebook’s machine learning algorithms are winning the war against misuse of social media. Yet when asked earlier this year for more detail on Schrage’s figure, a company spokesperson confirmed that the statement was incorrect: the 99% referred to the percentage of flagged terrorist content that had been deleted by an algorithm before any user saw it, a vastly different and far less impressive statistic.
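To see why those two readings of the same 99% figure are so different, consider a rough sketch with invented numbers (none of them are Facebook’s; they exist only to illustrate the arithmetic). The share of removals that happen before any user flags anything can sit at 99% even while a large fraction of terrorist content is never caught at all, and even while legitimate posts are swept up along the way.

```python
# Hypothetical illustration; every count below is invented, not Facebook's.
# It shows why "99% of removed content was deleted before a user flagged it"
# is not the same claim as "99% of terrorist content is taken down", and why
# neither number says anything about false positives.

terrorist_posts_total  = 10_000  # all terrorist posts actually uploaded (assumed)
removed_by_algorithm   = 5_940   # removed automatically, before any user report (assumed)
removed_after_flagging = 60      # removed only after a user flagged them (assumed)
never_removed          = terrorist_posts_total - removed_by_algorithm - removed_after_flagging

legitimate_posts_removed = 1_500  # news coverage and activist posts wrongly deleted (assumed)

total_removed = removed_by_algorithm + removed_after_flagging

# The statistic the spokesperson described: share of *removed* terrorist
# content that the algorithm deleted before any user saw or flagged it.
share_removed_proactively = removed_by_algorithm / total_removed       # 0.99

# The statistic the public statements imply: share of *all* terrorist
# content that actually gets taken down.
share_of_all_content_removed = total_removed / terrorist_posts_total   # 0.60

# Neither figure addresses the false positive question: how many of the
# takedowns hit legitimate speech.
share_of_removals_in_error = legitimate_posts_removed / (total_removed + legitimate_posts_removed)

print(f"Removed before any user flag (the quoted 99%): {share_removed_proactively:.0%}")
print(f"Share of all terrorist content actually removed: {share_of_all_content_removed:.0%}")
print(f"Terrorist posts never caught at all: {never_removed}")
print(f"Share of total removals that hit legitimate posts: {share_of_removals_in_error:.0%}")
```

The quoted figure describes the ordering of the removal pipeline, not how much it catches or how often it is wrong, and those latter two numbers are exactly what the company declines to release.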

When asked whether Dr. Saltman had misspoken as well or whether the company had made any algorithmic advances in the six months since Schrage’s speech, the company did not respond to a request for comment.

In her remarks, Dr. Saltman at least acknowledged the danger of using a machine-centric approach to deleting terrorism content: “this is where terrorist content is unlike other abusive content like child exploitation imagery. We see that pieces of terrorist content and imagery are used by legitimate voices as well; activists and civil society voices who share the content to condemn it, mainstream media using imagery to discuss it within news segments; so we need specialized operations teams with local language knowledge to understand the nuance of how some of this content is shared.” Going further, she added that while machine approaches can help speed the filtering process, “human review and operations is also always needed.”

This emphasizes the importance of understanding the accuracy of Facebook’s algorithms and just how often they miscategorize these legitimate voices and delete their posts without warning. How often do news outlets find their coverage of terrorism restricted or deleted by Facebook, and how many activists have had their posts deleted or their accounts suspended for condemning terrorism?

In a world in which Facebook increasingly controls the flow of online information, these are not merely idle questions; they have life-and-death consequences for how we deal with terrorism recruitment and public awareness.

Unfortunately, we simply have no idea: Facebook refuses to offer even the most basic statistics about how often it gets things wrong.
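For reference, the statistics in question are not exotic. Given a simple tally of correct and incorrect takedown decisions, the standard error rates fall out in a few lines; the counts below are invented purely to show what such a disclosure would contain.

```python
# A minimal sketch of the basic accuracy statistics at issue, computed from a
# hypothetical confusion matrix for a takedown classifier. All counts are
# invented for illustration; none are Facebook's.

true_positives  = 9_000    # terrorist posts correctly removed (assumed)
false_negatives = 1_000    # terrorist posts the system missed (assumed)
false_positives = 2_000    # legitimate posts wrongly removed (assumed)
true_negatives  = 988_000  # legitimate posts correctly left up (assumed)

# How often a removal was actually justified.
precision = true_positives / (true_positives + false_positives)
# How much of the terrorist content was actually caught.
recall = true_positives / (true_positives + false_negatives)
# How often legitimate posts end up removed.
false_positive_rate = false_positives / (false_positives + true_negatives)

print(f"Precision: {precision:.1%}, recall: {recall:.1%}, false positive rate: {false_positive_rate:.2%}")
```

Reported over a stated time window and content category, figures like these would begin to answer the question.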

In light of Dr. Saltman’s comments on the importance of having humans in the loop when deleting terrorism content, I asked the company whether all terrorism deletion decisions made by its algorithms are reviewed by a human for accuracy. Again, the company did not respond.

Putting this all together, Facebook has effectively bet the company on an AI-first future and publicly touts its algorithms as massive success stories, yet it steadfastly refuses to offer even the slightest detail about the accuracy of those algorithms or whether they are right more often than they are wrong. For algorithms that shape the flow of information to and from more than a quarter of the earth’s population, and growing, that is an awful lot of blind trust to place in one company. And for a company that relentlessly pours forth a deluge of statistics and numbers regarding every aspect of its operations, it is concerning indeed that it has yet to utter a single word about whether the AI future it has bet itself on actually works.