IBM Watson: Not So Elementary

What’s next for Big Blue’s AI phenom? An extended interview with Watson’s human master, David Kenny.
Photograph by Winni Wintermeyer for Fortune

Five years after its Jeopardy! victory, IBM’s cognitive computing system is through playing games. It’s now a hired gun for thousands of companies in at least 20 industries. A Q&A with the Watson boss.

David Kenny took the helm of IBM’s Watson Group (IBM) in February, after Big Blue acquired The Weather Company, where Kenny had served as CEO. In the months since then, the Watson business has grown dramatically, with well over 100,000 developers worldwide now working with more than three dozen Watson application programming interfaces (APIs). Fortune Deputy Editor Clifton Leaf caught up with Kenny in mid-October, when IBM Watson’s general manager was in San Francisco, getting ready to open Watson West—the AI system’s newest business outpost—and to launch the company’s second World of Watson conference, a gathering of its burgeoning ecosystem of partners and users, in Las Vegas on Oct. 24. Below, an edited Q&A.

FORTUNE: We hear a lot of terms on the AI front these days—“artificial intelligence,” “machine learning,” “deep learning,” “unsupervised learning,” and the one IBM uses to describe Watson: “cognitive computing.” What are the distinctions?

KENNY: Deep learning is a subset of machine learning, which essentially is a set of algorithms. Deep learning uses more advanced techniques like convolutional neural networks, which basically means you can look more deeply into things, through more layers. Machine learning could work, for example, when it came to reading text. Deep learning was needed when we wanted to read an X-ray. And all of that has led to this concept of artificial intelligence—though at IBM, we tend to say, in many cases, that it’s not artificial so much as augmented. So it’s a system between machine computing and humans interpreting, and we call those machine-human interactions cognitive systems. That’s kind of how it layers up.
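
[For readers who want to see the distinction in code: below is a minimal sketch, in Python with scikit-learn, of the kind of classical machine-learning pipeline that can read text, a bag-of-words model with logistic regression. It is an illustrative toy with made-up documents and labels, not IBM’s implementation.]

```python
# A minimal sketch of classical machine learning on text (illustrative only):
# a bag-of-words model with logistic regression, the sort of approach that
# often suffices for text, whereas images call for deep learning's layers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data with hypothetical labels.
docs = [
    "patient shows elevated glucose levels",
    "quarterly revenue exceeded forecasts",
    "tumor biopsy indicates malignancy",
    "stock price rallied after earnings",
]
labels = ["medical", "finance", "medical", "finance"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

# "biopsy" overlaps the medical examples, so this should print ['medical'].
print(model.predict(["biopsy results were benign"]))
```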

As for what we would call unsupervised learning—which is to say, we’re not training it, but it’s beginning to learn on its own—that is moving more in the direction of what some consider true artificial intelligence, or even AGI: artificial general intelligence. I would say we’re at the early stages of that. Some parts of Watson do that.
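
[And a companion sketch of unsupervised learning, again in Python with scikit-learn: no labels and no trainer; the algorithm finds the structure on its own. K-means clustering is the simplest classic example; it is a long way from AGI, but it shows the difference from supervised training.]

```python
# Toy illustration of unsupervised learning: no labels, no trainer.
# K-means is handed a pile of unlabeled points and finds the groups itself.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two blobs of points; nothing tells the model which point belongs where.
data = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(clusters[:5], clusters[-5:])  # the two groups it discovered unaided
```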

FORTUNE: And it takes a lot of computing power to accomplish that.

KENNY: It takes enormous, enormous amounts of computing power to do that, because you’ve got to leave Watson running at all times, just like the human brain. That’s why I believe cloud computing has been such an important enabler here: prior to cloud computing—where you can access many machines at the same time—you were limited by a mainframe. (Of course, IBM made the biggest ones, which is why Watson was able to do Jeopardy! five years ago.) But cloud is part of why I think we’re seeing a real acceleration in AI right now.

"Jeopardy!" & IBM Man V. Machine Press Conference ‘Jeopardy!’ contestants Ken Jennings, left, and Brad Rutter competed against IBM’s Watson in 2011.Ben Hider — Getty Images

FORTUNE: Nor is it just in the cloud, right? Watson is increasingly “embedded” in the real world—in the Internet of Things. You have some 4,000 clients and 1,400 partners in Watson’s IoT business alone, right?

KENNY: Yes, and this is part of how I came to IBM, through The Weather Company. We have sensors everywhere: we have sensors in cars, we have sensors on things that are moving through logistics—boxes that are being shipped, for instance. We have sensors in factories and in traffic systems. And I think we’re all wearing some these days. These sensors—and our mobile devices—can compute something to give us knowledge or insights right at the point we need them. And those sensors can interact (by way of Bluetooth or some other communication channel) with the cloud, where a computation can happen and the answer can come back to you. So it feels as if these things are thinking for us, when in fact they’re accessing a brain that is the nearest data center.

So, these same sorts of interactions get embedded in applications all over the place. Take the case of cancer treatment. Using Watson for Oncology, a cancer doctor can ingest a patient’s records in real time. And then Watson can find the clinical trials that have started (or are about to start) that best match that particular patient. [To date, Watson has digested more than 26 million medical and scientific articles and boned up on nearly 3,000 clinical trials from clinicaltrials.gov, the federal government’s public database. And IBM recently announced that it’s partnering with lab-testing company Quest Diagnostics (DGX), New York’s Memorial Sloan Kettering Cancer Center, and the Broad Institute of MIT and Harvard to make genomic analyses by Watson available to cancer patients and physicians nationwide.]
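
[As a toy illustration of the matching step, not Watson’s actual pipeline: once trial criteria and patient records are in structured form, finding candidate trials reduces to checking eligibility rules. The fields and trial IDs below are hypothetical.]

```python
# Toy sketch of matching a patient record against structured trial criteria.
# All fields and trial IDs are hypothetical.
patient = {"diagnosis": "breast cancer", "age": 54, "her2_positive": True}

trials = [
    {"id": "TRIAL-A", "diagnosis": "breast cancer",
     "min_age": 18, "max_age": 70, "requires": {"her2_positive": True}},
    {"id": "TRIAL-B", "diagnosis": "melanoma",
     "min_age": 18, "max_age": 80, "requires": {}},
]

def eligible(patient, trial):
    """Return True if the patient meets this trial's structured criteria."""
    if patient["diagnosis"] != trial["diagnosis"]:
        return False
    if not (trial["min_age"] <= patient["age"] <= trial["max_age"]):
        return False
    return all(patient.get(k) == v for k, v in trial["requires"].items())

print([t["id"] for t in trials if eligible(patient, t)])  # -> ['TRIAL-A']
```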

FORTUNE: So how did you train Watson to read something as dense as a medical paper?

KENNY: Of course, I wasn’t here at the time that was done, but it starts with knowledge extraction: reading documents, finding common phrases, associating those together. It does the same with paragraphs. Then it has to get corrected. The human annotation is critical here: Out of the gate there’s no way that I would trust the system to do unsupervised learning and just find the patterns on its own. You literally tell Watson, “Yes, that meant this, yes those go together. Yes, you have that right, or, no, you don’t.”

And when you tell the system the “no’s,” it re-weights its algorithms until it gets to a point where it would have produced the correct answer. And it gets better over time.
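
[A minimal sketch of that correction loop, in the spirit of what Kenny describes: a generic perceptron-style update in Python, not Watson’s actual algorithm. Each time the annotator says “no,” the weights shift toward what would have produced the correct answer.]

```python
# Generic perceptron-style sketch of learning from human yes/no corrections.
# When the human says "no, you don't have that right," the weights are
# nudged toward what would have produced the correct answer.
import numpy as np

weights = np.zeros(3)

# Toy annotated examples: feature vector -> human label (+1 "yes", -1 "no").
examples = [
    (np.array([1.0, 0.2, 0.0]), +1),
    (np.array([0.1, 1.0, 0.3]), -1),
    (np.array([0.9, 0.1, 0.2]), +1),
]

for _ in range(10):                    # repeat passes until the answers settle
    for x, label in examples:
        prediction = 1 if weights @ x > 0 else -1
        if prediction != label:        # the annotator corrects the system
            weights += label * x       # re-weight toward the right answer

print(weights)  # a weighting that now reproduces the human's judgments
```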

FORTUNE: You’re training the system with human expertise and tons of data.

KENNY: Yes—and the next qualification I should make is that right now there isn’t a single Watson. There’s Watson for oncology. There’s Watson for radiology. There’s Watson for endocrinology…for law…for tax code…for customer service. The reason for that is we can then train the systems more precisely with the right data. And we can partner with experts—Memorial Sloan Kettering, in the case of oncology, or the American Heart Association for cardiology.

These systems do perform better because they’re more focused on a specific domain. It’s the same way people work. We don’t tend to ask a journalist for cancer advice, and we don’t generally ask an oncologist for real estate advice. People tend to specialize in things and know them well. It’s the same with AI. Mathematically, you’re just more likely to get the right answer if you’re clear about the domain to begin with.

FORTUNE: Let’s focus on Watson for Oncology, for a moment. One of the challenges of a disease like cancer is that its progression, particularly in later stages of metastasis, can often look like an emergent system. The disease often doesn’t follow a linear progression—even in the same tumors, certain cell populations can have radically different genetic mutations, meaning that they can respond differently to treatment. How does Watson learn to master chaos?

KENNY: Let me offer an example of that. It’s the work I did in weather—which is a chaotic system as well. So, you may have noticed that weather forecasts have gotten more accurate the last few years, and that’s been because of machine learning. So, what’s been important is training the system after each prediction that didn’t come true. For instance, you said it was going to rain on a particular day and it didn’t; it actually rained four miles north or four miles south. So you put that new fact in, and then the system automatically reweights all the algorithms—because there are algorithms for every level of the atmosphere—to pinpoint what it got wrong, and then that improves it for the next time. Now, the exercise isn’t simple: Weather is the atmosphere. It’s 100 kilometers thick, it covers the whole earth, it’s fed by the oceans, and it’s always in motion.

But what’s important is that you’re constantly learning from the negative, so that the algorithms reweight without losing what was positive, and that’s how the system gets higher and higher confidence in its predictions.
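
[One minimal way to picture that reweighting, assuming nothing about The Weather Company’s actual models: the classic multiplicative-weights scheme over an ensemble of forecasters, where a miss costs a model influence without erasing the credibility it has earned. The forecaster names are hypothetical.]

```python
# Multiplicative-weights sketch: an ensemble of (hypothetical) per-layer
# forecasters. A model that misses loses influence in proportion to its
# error; models that were right keep the credibility they have built up.
forecasters = ["surface", "mid_layer", "jet_stream"]
weights = {name: 1.0 for name in forecasters}
ETA = 0.3  # how hard a miss penalizes a forecaster

def update(errors):
    """errors: forecaster name -> absolute error on the last prediction."""
    for name, err in errors.items():
        weights[name] *= (1.0 - ETA) ** err   # penalize in proportion to miss
    total = sum(weights.values())
    for name in weights:
        weights[name] /= total                # renormalize to sum to 1

# One round: the mid-layer model badly missed the rain band's position.
update({"surface": 0.1, "mid_layer": 1.0, "jet_stream": 0.2})
print(weights)  # mid_layer's influence on the blended forecast drops
```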

Bob Picciano (left) of IBM with David Kenny (right) at the IBM Insight Conference in 2015. Kenny was still with The Weather Company at the time. Courtesy of IBM

FORTUNE: And then there are the reams and reams of new data to feed into your models.

KENNY: So, as we’ve added so many more sensors on smartphones and watches and windshield wipers and airplanes, we’ve just increased the amount of data that we have, and we’re capturing it more often. We went from a model that was run every six hours to a model that’s run a minimum of every 15 minutes; we went from having data at 2 million locations to having data at up to 3.2 billion locations.

That’s just a massive explosion in computation. As a result, you begin to see some of the chaos—or the butterfly effect—more quickly. And so you can begin to put narrower ranges of possibilities around what’s going to happen. The best examples are tropical storms. We just had Hurricane Matthew. There was a great deal of confidence in its path. There was a serious risk that the eye would have come inward—fifty miles farther and it would have been far more catastrophic. But there was high confidence in what was going to happen weeks out. When the storm was coming off Africa, it could be watched, and you could see it coming. I would say the data proves that the 5-day forecast today is as accurate as the 24-hour forecast was a decade ago. So there has been a big leap, and there is another big leap happening now in that particular field.

Take all of these lessons and apply them to the human body, apply them to a cancer system. As we get more data, as we get more sensors, as people are better able to understand what’s going on in their own bodies every day—as those get computed, I do believe we’ll have a higher ability to predict. In the short-run, what’s happening in cancer and in other diseases is that we’re better able to match your particular chaos—your particular set of systems—to someone who looked like you in the past. And that can help you get to a diagnosis and a course of treatment faster.

I think the goal here is that, eventually, these systems will predict disease progression in time to actually take preventive action, which I think is better for everybody. But at least at minimum, I hope, in the near-term, we’re able to better diagnose and then give people better treatment.

FORTUNE: Obviously, having the ability to predict and prevent cancer is the ultimate goal. But even on a more near-term basis, if Watson could look out, say, four weeks or an entire growing season and help farmers predict weather outcomes or warn of a crop freeze—the potential economic value in that is enormous, I imagine. Is that where the crown jewels are, just being able to accurately predict the weather a month or two out?

KENNY: Actually I think there are two crown jewels that I’m excited about. One is, yes, that we’re able to predict all things further out and we’re able to predict with greater precision so that you can take actions. Even today, we’re getting to the point where three- and four-week forecasts are shareable. You can’t plan your wedding around them. But you can begin to understand extended cold spells, extended warm spells, droughts—and all of these things help with water management and agriculture.

We are getting better predictive value with energy planning as well. And I would say we’ll see some of this in other areas—traffic systems, logistics systems, et cetera.

So, that’s one thing—better predictions further out. The second thing I’m very excited about is access. If I go back to weather, there are only 30 nations that can afford a weather service. In America, we have a national weather service, we have NOAA [the National Oceanic and Atmospheric Administration], and we’ve got a private sector offering forecasts. That’s not so true in most of Africa or most of Latin America. It’s not so true in big parts of Asia. So the ability to add low-cost sensors, connect them to a global system, and provide even a 5-day forecast—what we’ve enjoyed for 50 years in the U.S.—matters for safety, and it matters for agriculture. It matters for fishermen. So, all these things can begin to lift.

The same thing would be true in something like melanoma detection. There are now ways to take photos and get a reasonable analysis of whether that is something a doctor should take a look at or not. In India, where there are only 1,000 or so oncologists for a billion people, having tools like that lifts the population.

For people with diabetes, data from wearable sensors is helping us get better at predicting when someone’s got an issue and should be woken up, which is very important for the parents of juveniles.

IBM’s not doing this alone, I should point out. IBM’s doing this in partnership with professionals, like endocrinologists, and with companies like Medtronic, which make the sensors. But overall, the ability of technology to make expertise more accessible to the world, I think, is going to lift all of humanity.

IBM Chairman and CEO Ginni Rometty (right) and Medtronic CEO Omar Ishrak unveil the latest advances in applying cognitive computing to diabetes management during CES 2016. Alan Rosenberg — Feature Photo Service for IBM

FORTUNE: AI is getting to be a crowded space these days. How is IBM planning to lead?

KENNY: I would say that the other four biggest players—Google (GOOGL), Microsoft, with Bing and Azure (MSFT), Amazon (AMZN), and even Facebook (FB)—tend to embed AI in their own applications. What’s distinct about the Watson approach has been to create software that you can embed in other people’s applications, and that is especially valued by companies that don’t feel comfortable putting their data into a single learning system—particularly one that’s connected to a search engine—because in effect that commoditizes their intellectual property and their cumulative knowledge.

So our approach has been to create AI for private or sensitive data that is best reserved for the entities that own it and isn’t necessarily ever going to be published on the public Internet.

FORTUNE: Speaking of the public, your World of Watson conference opens in Las Vegas on Oct. 24. Given your focus on keeping companies’ proprietary knowledge in house, how do you create a broad, interactive ecosystem around Watson technologies?

KENNY: This conference is an extension of what has been our data and analytics conference. And there are tens of thousands of clients who come because they want to stay on the front edge of that. The aim is to help people realize that Watson isn’t just for a handful of big tech companies. Most of the morning discussions in Las Vegas are going to be about all these partners—all these other companies that have built something using Watson—sharing their stories to inspire people. The afternoon is going to be, I think, more hands-on, helping people understand what we’re doing and what they can build with it. And then we follow that with a November conference in California, specifically for developers, which will be more technical.

FORTUNE: You have about 4,000 clients in just the Internet of Things space. How many different companies and individuals are using this technology for their businesses overall?

KENNY: Well, listen, I mean we don’t give those numbers, but I would say IoT is just one subset of a much bigger pull. I would also say it’s growing at a fast rate. You know, what’s interesting about this for us is that IBM has historically had an incredibly strong sales and services organization—very face-to-face—and that’s still quite important. But increasingly, people just come on and use the AI services to develop something, and if it actually becomes a product, they rent it from us. So we’re seeing this sort of self-serve approach through our digital channels and our developer platform, which has been awesome, because it’s bringing a lot more people onto Watson every day.

FORTUNE: So how long before accessing Watson will be as easy for the ordinary consumer around the world as accessing Google?

KENNY: Think about what Netscape or Prodigy or CompuServe looked like in the mid-’90s. Just as the browser got simpler, became part of our everyday lives, and ended up installed on every device, we will find some access to AI working that way. And yes, I do believe that will be in the not-so-distant future. But do I envision a Watson app, like a Google Toolbar? No, I don’t, because in this case, most of where people are using us involves a combination of public data and data that they don’t necessarily want to make public.

It’s a little bit different from where search went, but I think you’re going to find it in everyday life, and you won’t even notice it, because the interface will get simpler. The fact that it’s natural language—that we talk and it can understand and go get something—makes it so much easier than typing in lines of code or typing in precisely the right keyword.

FORTUNE: Speaking of data that someone might not want to make public, one potential application where it seems that Watson might have an edge is homeland security. Earlier, you talked about solving chaotic problems; that’s key for understanding asymmetric threats or risks, whether they concern companies or the government. How much of that is Watson focused on? And do you have a separate division for that?

KENNY: I’m not going to go into specifics. I would say that generally these machine-learning systems—deep-learning systems—are better at identifying aberrations: at recognizing known risks and then finding new ones. So I think they are great tools for helping you find risk in any sort of system, whether it’s a banking system or an insurance system. You can have machines help underwriters do what they do every day.

Throughout IBM, we have vertical groups that focus on specific use cases. Historically, many of them are learning how to use Watson. On this particular topic, one of the things I am excited about is the launch of Watson for Cyber Security, which is coming shortly out of our security business. And it’s exactly what you’re describing. It’s understanding unknown threats, understanding when things break through, learning those quickly, and then immediately disseminating what it learns to specific clients.
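
[A small sketch of the aberration-finding pattern, using an off-the-shelf anomaly detector, scikit-learn’s IsolationForest, on made-up transaction data. Watson’s security tooling is far more involved; this shows only the basic shape: learn what normal looks like, then flag what doesn’t fit.]

```python
# Anomaly detection sketch: learn what routine activity looks like, then
# flag events that do not fit. Data and features are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy transaction features: [amount, hour_of_day]. Mostly routine activity.
normal = np.column_stack([rng.normal(50, 10, 500), rng.normal(13, 2, 500)])
# A few events that sit far outside the learned pattern.
odd = np.array([[900.0, 3.0], [750.0, 4.0]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(odd))  # -1 marks an aberration worth a closer look
```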

FORTUNE: Another area where Watson has made a big push is in images. IBM has made huge investments in medical imaging companies in recent months. In the race to AI dominance, is understanding images an area where Watson has a leg up over its rivals?

KENNY: We communicate in images. I think Facebook is also doing good facial recognition, in terms of recognizing the faces in pictures that you post. But yes, we certainly believe that we have a big “up” on what we call unstructured data, including images. Basically, it’s our estimate that probably 70% of the world’s information is unstructured—and therefore not easily accessible on the Internet. For Watson, X-rays and MRIs are probably at the top of the list of what we’ve started with.

This is why I began by explaining deep learning versus machine learning. In order to understand those images, you basically need to break each one into pixels. You need layers and layers and layers, because when you’re looking at a flat image, there are layers of depth underneath it that you have to understand. And if you’re doing something as critical as finding a fracture on an X-ray or finding an aberration in an MRI, you really need to do that deeply.

But once you solve that, you can solve that for architectural plans, you can solve that for engineering documents, you can solve that for diagrams, you can solve that for factory floor plans.

So, there are all sorts of new use cases that are coming from this—and many of the owners of this data won’t want their information public.
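
[To make the “layers and layers” idea concrete: a minimal convolutional-network sketch in Python with Keras. It is a generic toy model, not Watson’s imaging stack; the input size and the fracture-versus-no-fracture task are assumptions for illustration.]

```python
# A generic toy convolutional network (not Watson's imaging stack): each
# convolutional layer reads progressively deeper structure beneath the
# flat grid of pixels.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),        # a small grayscale image
    layers.Conv2D(16, 3, activation="relu"),  # layer 1: edges and textures
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # layer 2: local shapes
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),  # layer 3: larger structures
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),    # e.g., fracture vs. no fracture
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```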


FORTUNE: Facial recognition, reading X-rays, natural language processing—these are the sorts of applications we’re aware of now. What’s the next sphere for Watson? Can it master emotional intelligence? And related to that: What’s the next place where a hoped-for big breakthrough has yet to happen?

KENNY: Oh, there are so many.

FORTUNE: All right, then: What excites you, in other words? You’re a guy who loves to solve things.

KENNY: Yes, you’re right about that. So, I would say, let me break it down into three fields. So, one is all the rules-based systems. We call that compliance, right? So compare and comply.

There is such a regulatory burden on companies today [see Fortune’s cover story on red tape here]. Wherever there’s a problem, we make rules, right? So, Wells Fargo (WFC) had a problem, so now there will be more rules, and now everyone will have to follow those rules. If we can get to the point where a Watson system can quickly understand those rules, then it can help everyone comply with them more quickly and more confidently.
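
[A toy stand-in for that “compare and comply” idea, in Python: encode regulations as machine-checkable rules, then flag the records that violate them. The rules and account fields below are entirely hypothetical.]

```python
# Toy rules-based compliance check: regulations become machine-checkable
# rules; records that violate any rule get flagged. Entirely hypothetical.
RULES = {
    "disclosure_on_file": lambda acct: acct["disclosure_on_file"],
    "customer_consented": lambda acct: acct["customer_consented"],
}

def violations(account):
    """Return the names of any rules this account fails."""
    return [name for name, rule in RULES.items() if not rule(account)]

accounts = [
    {"id": 1, "disclosure_on_file": True, "customer_consented": True},
    {"id": 2, "disclosure_on_file": True, "customer_consented": False},
]

for acct in accounts:
    failed = violations(acct)
    if failed:
        print(f"account {acct['id']} non-compliant: {failed}")
```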

I think the second area I’m very excited about is discovery. The point is to find things we haven’t thought of, right? For example, we might start with exploration: searching across all your internal data. “Discovery” is then recognizing that there’s a pattern in that data we haven’t seen before. To give you an example, we did a project years ago with the Institute of Culinary Education to understand what flavors go together. From understanding those essentially chemical relationships, Watson helped them create new recipes, and those recipes are really interesting. They’re kind of inspired. When these systems don’t merely find patterns but also begin to find new answers, I think that’s going to help a whole number of fields.
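
[A minimal sketch of that pattern-finding step, in Python: count how often ingredients co-occur across recipes and surface the pairs that keep showing up. The Chef Watson work drew on much richer flavor-compound data; these recipes are toy examples.]

```python
# Toy "discovery" sketch: count ingredient co-occurrence across recipes;
# pairs that recur across many recipes are candidate pairings.
from collections import Counter
from itertools import combinations

recipes = [
    {"tomato", "basil", "mozzarella"},
    {"tomato", "basil", "olive oil"},
    {"chocolate", "chili", "cinnamon"},
    {"chocolate", "chili", "orange"},
]

pair_counts = Counter()
for recipe in recipes:
    pair_counts.update(combinations(sorted(recipe), 2))

# The pairs that keep showing up, e.g. ('basil', 'tomato') with count 2.
for pair, count in pair_counts.most_common(2):
    print(pair, count)
```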

This is related to what I was saying about weather. When I run models on the atmosphere, for example, I find the data we’re missing is the data on the bottom of the oceans. If you really want to answer the question of knowing weather conditions a season out—or you really want to understand the total effects of climate change—it’s important to understand what’s happening in the oceans. The models make that really clear. And so, we can put an economic value on this question: “What is the value of understanding what’s going on at the bottom of the oceans?”

Then, the answer to that question can actually drive the right investments, public or private, to get that data, which ultimately will help us make all these better decisions. Again, I think by helping us understand the unknown more precisely—and by putting a value on knowing the unknown—we can serve up better, more creative answers. The next step for Watson is not just learning on its own but actually asking the questions itself.

FORTUNE: When you talk about forming hypotheses from its own observations, learning, asking questions—you’re really talking about Watson as a scientist. I mean, that’s what scientists do, right? They form hypotheses and then devise experiments to disprove them, hoping to arrive at some conclusion.

KENNY: Well, that’s exactly right. And I would say early on it’s doing small things like pattern recognition and probabilistic determination. But in the future, if Watson goes where I think the field goes, it begins to become more and more of a real collaborator in the scientific process.

A version of this interview appears in the November 1, 2016 issue of Fortune.