IBM Is Clueless About AI Risks

Earlier this week, David Kenny, IBM Senior Vice President for Watson and Cloud, told the US Congress that Americans have nothing to fear from artificial intelligence, and that the prospects of technological unemployment and the rise of an “AI overlord” are pernicious myths. The remarks were as self-serving as they were reckless, revealing the startling degree to which IBM is willing to forfeit the future for the sake of the present.

Congressman John Delaney (MD-6) recently launched the Artificial Intelligence (AI) Caucus for the 115th Congress, the purpose of which is to “inform policymakers of the technological, economic and social impacts of advances in AI and to ensure that rapid innovation in AI and related fields benefits Americans as fully as possible.” The caucus, which is co-chaired by Congressman Pete Olson (TX-22), recently held tête-à-têtes with Amazon and Google. Now, it’s had an opportunity to hear what IBM—the tech firm responsible for Watson, the overhyped cognitive computing platform that made a name for itself by defeating the world’s greatest Jeopardy champions—has to say. IBM’s David Kenny issued an open letter to Congress ahead of a Congressional briefing held on June 28th.

The last AI winter, when interest in and funding for AI dried up, is now long forgotten, and it has become fashionable once again to rave about the transformative potential of AI. A consequence of this hype, however, has been the rise of an AI backlash. People are starting to become concerned—and even a bit afraid—of what advanced AI might mean for them, and for good reason.

No doubt, we’re starting to feel the first inklings of an automation revolution, in which steady advances in AI and robotics threaten virtually every human vocation. The prospect of mass technological unemployment has led to calls for a guaranteed basic income, and even a robot tax to slow automation and redirect funds to other types of work. More ominously, there’s the potential for AI to run amok. Prominent thinkers like Elon Musk and Stephen Hawking have warned about the dangers of AI, saying it could eventually escape our understanding and control.

Pishposh, says Kenny. In his open letter to Congress, he argues such fears are overinflated, and that Americans ought to embrace AI with open arms.

“The impact of AI is evident in the debate about its societal implications—with some fearful prophets envisioning massive job loss, or even an eventual AI ‘overlord’ that controls humanity. I must disagree with these dystopian views,” wrote Kenny. “When you actually do the science of machine intelligence, and when you actually apply it in the real world of business and society—as we have done at IBM to create our pioneering cognitive computing system, Watson—you understand that this technology does not support the fear-mongering commonly associated with the AI debate today.”

The real disaster, said Kenny, would be in “abandoning or inhibiting cognitive technology before its full potential can be realized,” adding that, “We pay a significant price every day for not knowing what can be known: not knowing what’s wrong with a patient; not knowing where to find critical natural resources; or not knowing where the risks lie in our global economy.”

He said the fears of massive job losses are understandable, but historical precedent suggests that AI “will not replace humans in the workforce.” Instead, Kenny believes that AI will augment both humans and the jobs we’ll apparently still have in the future.

Kenny also said we should “abandon any notion of taxing automation,” and that we “cannot kid ourselves into thinking that a universal basic income will solve the challenge of certain tasks being automated.” He referred to these prescriptions as “short term cop-outs,” and recommended that American educators start to emphasize skills over degrees.

On risks, Kenny said we need to know how an AI system comes to one conclusion over another. “People have a right to ask how an intelligent system suggests certain decisions and not others, especially when the technology is being applied across industries such as healthcare, banking and cybersecurity,” he wrote. “And our industry has a responsibility to answer.”

To that end, Kenny said that companies must be able to explain what went into an algorithm’s decision-making process, and citizens must be made to understand how AI technologies work. And indeed, work is being done in this area, including efforts to create AI that can explain its actions in a way we can understand.

Without question, Kenny is right about the potential for AI to transform our society—and for the better. AI will help us invent new medicines, create safer roads, mitigate economic risks, manage our resources, and solve some of the world’s most vexing problems. We should absolutely be enthusiastic about and supportive of AI research, but we should also be exceptionally wary given its disruptive potential, whether that disruption leads to social turmoil and poverty or to a catastrophe that threatens the existence of our species.

“To me, that [open letter] reads more like a corporate press release. It doesn’t really engage with the longer term AI concerns that I’m working on,” said Skype co-founder Jaan Tallinn in an interview with Gizmodo. “[Kenny writes that], ‘Critical decisions require human judgement, morals and intuition—AI does not change that.’ If things go well, then that’s true. But it requires hard work to solve the AI control problem to make sure increasingly autonomous AI would stop and return control to humans when those critical decisions need to be made.”

Tallinn, a co-founder of the Centre for the Study of Existential Risk at the University of Cambridge, said we’re already facing this very dilemma in the debates surrounding the emerging field of autonomous weapons, and that the answers aren’t as simple or reassuring as Kenny’s letter makes them sound.

Stuart Russell, professor of computer science and Smith-Zadeh Professor in Engineering at the University of California, Berkeley, agrees that AI will introduce important benefits, but he’s concerned about the way IBM has chosen to whitewash the potential consequences—and even belittle those who are starting to sound the alarm.

“I think IBM has staked the company on AI—which may well be a good bet—and so they have fears about the government ‘inhibiting cognitive technology’ or ‘taxing automation’,” Russell told Gizmodo. IBM has every right to petition Congress and to express its opinion, he said, while also pointing out the company’s rather large lobbying and public relations budget.

“The crux of its argument is that IBM knows more about AI and about economics than the ‘fearful prophets’ and that any mention of risks is a dangerous, Luddite fallacy,” said Russell.

On the economic risks to employment, Russell pointed to several “fearful prophets” of his own, including Nobel laureates Robert Shiller, Michael Spence, and Paul Krugman; Klaus Schwab, head of the World Economic Forum; and Larry Summers, former Chief Economist of the World Bank and Treasury Secretary under Bill Clinton. “I don’t think one can dismiss their arguments with ad hominem insults,” said Russell. As these thinkers have taken great pains to point out, the pending automation revolution is poised to eliminate countless jobs and displace workers.

On the potential for poorly designed AI to create problems for humanity as it comes to exceed human capabilities in virtually every area, Russell cited other notable “fearful prophets,” including Alan Turing, the founder of computer science; Norbert Wiener, the mathematical pioneer of modern automation; Marvin Minsky, one of the “founding fathers” of AI itself; Bill Gates and Elon Musk, two of the “leading technologists of the last 50 years”; and “a great many of the current leaders of AI research.”

As for the claim that risks shouldn’t even be mentioned, lest doing so imperil progress, Russell said it’s helpful to look back at the history of nuclear power.

“In my view, it would be perfectly reasonable for a nuclear engineer to consider the risks of meltdown and propose the study of failsafe systems and other methods to prevent catastrophe,” Russell told Gizmodo. “IBM’s view is that we should hide the risks and not study methods to prevent them, in case that generates bad PR. In reality, the Chernobyl disaster was bad PR for the nuclear industry; in fact, it essentially destroyed the industry as well as rendering thousands of square miles of Ukraine uninhabitable. It resulted from putting short-term profits ahead of long-term safety.”

No doubt, the potential for an AI catastrophe grows with each breakthrough in the field, and as our technological infrastructure becomes increasingly fragile. This week’s “Petya” ransomware attack—which was in reality a cyberattack designed to destroy sensitive data—was an excellent example, demonstrating just how vulnerable our world is becoming. The incident forced the shutdown of computers around the globe, putting a temporary halt to flights, stock trading, and shipping. It even forced operators at the Chernobyl nuclear power plant to switch to manual radiation monitoring as a precaution. And this from a relatively dumb cyberattack. Imagine what will happen when these attacks are driven by machine intelligence. And that’s just one of countless potential scenarios.

IBM—at least for the time being—seems willing to ignore these harsh realities. But Francesca Rossi, a professor of computer science at the University of Padova in Italy, doesn’t see it that way; she lauds IBM’s approach.

“I am extremely supportive of [Kenny’s] point-of-view—they are consistent with mine and reflect an approach to deploying advanced technologies like artificial intelligence in an ethical and practical manner,” Rossi told Gizmodo. “In my interactions at conferences and venues across the US and Europe, I’m increasingly finding a need to educate business and government leaders on the true potential of AI, both technologically and societally. Unfortunately, some uninformed parties—and some who should know better—are needlessly raising fears that are unfounded.”

Rossi believes it would be detrimental for the industry and for society if these “unfounded fears” resulted in restrictive policies or regulations that stifled responsible innovation and advancement.

“So I applaud [Kenny] for doing what leaders should do—presenting an informed, well-thought out perspective on a topic that is new and can sometimes be confusing,” said Rossi. “As he (as well as the congressmen) said, the private and public sector must work together to address these issues, and his actions this week were a good example of being true to his word.”

That the private and public sectors should work together on this issue is most certainly something we can all agree on. But we must ensure that both are working with the best interests of citizens and consumers in mind. We’re still very much in the Wild West era of AI development, and as time passes we should expect—and advocate for—tougher regulations and standards. Corporations, and the governments swayed by their deep pockets, don’t often venture down these paths voluntarily.

Thankfully, there are groups taking the first important steps in this direction. The Future of Life Institute has compiled a set of guidelines, called the Asilomar AI Principles, to steer the safe and responsible development of AI, and the EFF recently launched an effort to track the progress of AI and machine learning. A number of private and academic institutions have similar goals.

IBM has a big voice, no question, but let’s be sure to pay attention to the rising chorus of concern.

[CNBC, Recode]
