
Google and Microsoft warn investors that bad AI could harm their brand



As AI becomes more common, companies’ exposure to algorithmic blowback increases


Illustration by Alex Castro / The Verge

For companies like Google and Microsoft, artificial intelligence is a huge part of their future, offering ways to enhance existing products and create whole new revenue streams. But, as revealed by recent financial filings, both firms also acknowledge that AI — particularly biased AI that makes bad decisions — could potentially harm their brands and businesses.

These disclosures, spotted by Wired, were made in the companies’ 10-K forms. These are standardized documents that firms are legally required to file every year, giving investors a broad overview of their business and recent finances. In the segment titled “risk factors,” both Microsoft and Alphabet, Google’s parent company, brought up AI for the first time.

From Alphabet’s 10-K, filed last week:

“[N]ew products and services, including those that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges, which may negatively affect our brands and demand for our products and services and adversely affect our revenues and operating results.”

And from Microsoft’s 10-K, filed last August:

“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm. Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”

These disclosures are not, on the whole, hugely surprising. The idea of the “risk factors” segment is to keep investors informed, but also to mitigate future lawsuits that might accuse management of hiding potential problems. Because of this, they tend to be extremely broad in their remit, covering even the most obvious ways a business could go wrong. That might include problems like “someone made a better product than us and now we don’t have any customers,” or “we spent all our money and now we don’t have any.”

But, as Wired’s Tom Simonite points out, it is a little odd that these companies are only now noting AI as a potential risk. After all, both have been developing AI products for years, from Google’s self-driving car initiative, which began in 2009, to Microsoft’s long dalliance with conversational platforms like Cortana. This technology provides ample opportunities for brand damage, and, in some cases, already has. Remember when Microsoft’s Tay chatbot went live on Twitter and started spouting racist nonsense in less than a day? Years later, it’s still regularly cited as an example of AI gone wrong.

However, you could also argue that public awareness of artificial intelligence and its potential adverse effects has grown hugely over the past year. Scandals like Google’s secret work with the Pentagon under Project Maven, Amazon’s biased facial recognition software, and Facebook’s algorithmic incompetence in the Cambridge Analytica scandal have all brought the problems of badly implemented AI into the spotlight. (Interestingly, despite similar exposure, neither Amazon nor Facebook mentions AI risk in its latest 10-K.)

And Microsoft and Google are doing more than many companies to keep abreast of this danger. Microsoft, for example, is arguing that facial recognition software needs to be regulated to guard against potential harms, while Google has started the slow business of engaging with policymakers and academics about AI governance. Giving investors a heads-up as well only seems fair.