
Can IBM possibly tame AI for enterprises?

Researchers from IBM argue AI is a bit too unruly for enterprise use, as it's based on "probabilistic" programming methods and "messy" data. Can the Big Blue approach to software lifecycles and "business processes" tame the beast?
Written by Tiernan Ray, Senior Contributing Writer

Artificial intelligence is not yet ready to tackle "business processes."

That's the message from International Business Machines in a research paper offered up this week from Big Blue's scientists at its IBM Watson and Almaden Research Center units.

The paper offers plenty of hopeful suggestions, but it also raises substantial questions as to whether AI is simply too unruly and wild, at the moment, for Big Blue to tame.

Also: IBM launches pretrained Watson packs for industries

IBM's researchers propose that many stages of machine learning need to be considered carefully, including how a manager should "set goals" for the neural network model, how the "data pipeline" of examples that serve as input to the neural network should be constructed, and how to continually "iterate" on an AI model to improve it.

Of special concern are matters that affect regulated industries, such as the "lineage" of the data: What is the "legality" of the data being used?

The paper, "Characterizing machine learning process: A maturity framework," is posted on the pre-print arXiv server, and is authored by Rama Akkiraju, Vibha Sinha, Anbang Xu, Jalal Mahmud, Pritam Gundecha, Zhe Liu, Xiaotong Liu, and John Schumacher.

The challenge of AI for enterprises is the essential difference between machine learning programming and traditional software programming, says IBM: "While traditional software applications are deterministic, machine learning models are probabilistic." Moreover, neural networks are developed using "messy data," a fact not quite suited to enterprises.
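That contrast between deterministic software and probabilistic models can be sketched with a toy illustration (not from the paper; all names here are hypothetical):

```python
import random

def deterministic_tax(amount):
    """Traditional software: the same input always produces the same output."""
    return round(amount * 0.07, 2)

class ToyModel:
    """Stand-in for a machine learning model: its behavior depends on what it
    was 'trained' on, so two builds of the same code can score inputs
    differently."""
    def __init__(self, seed):
        rng = random.Random(seed)
        # A "learned" weight, standing in for parameters fit to messy data.
        self.weight = rng.uniform(0.4, 0.6)

    def predict_proba(self, x):
        """Return a probabilistic score rather than a hard yes/no answer."""
        return min(1.0, max(0.0, self.weight * x))

# The deterministic function is exactly repeatable...
assert deterministic_tax(100) == deterministic_tax(100)

# ...while two training runs of the "same" model disagree on the same input.
model_a = ToyModel(seed=1)
model_b = ToyModel(seed=2)
print(model_a.predict_proba(1.0), model_b.predict_proba(1.0))
```

The point is not the arithmetic but the testing consequence: an enterprise can write an exact unit test for the first function, while the second can only be validated statistically, which is what makes traditional lifecycle management an awkward fit.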

Also: IBM launches tools to detect AI fairness, bias and open sources some code

Nobody's been doing anything about this, says IBM: "Academic literature on machine learning modeling fails to address how to make machine learning models work for enterprises."

To achieve a certain maturity more suited to enterprise use, IBM's scientists propose bringing machine learning in line with the vast literature on "application lifecycle management" and the like, while extending the meaning of such terms to fit the novel qualities of AI.

IBM proposes the various stages of a machine learning "lifecycle" that a company must be prepared to work through, on an ongoing basis.

Specifically, the researchers draw upon work by Watts Humphrey, who in the 1980s defined the "capability maturity model" for software. CMM was a kind of map of the stages through which software travels in an organization. It begins with the "immature" phase, when the corporation has no control of what it's doing with a program, and concludes with the happy stage of an organization being able to constantly "optimize" a program.

The most original contribution of the work is the researchers' suggestion that neural networks should be developed with an eye to the particularities of a given industry. To find out the "business use case" of AI, they write, may require a company to "customize general purpose machine learning models with industry, domain, and use case specific data to make them more accurate for specific situations."

IBM is obviously venturing into a thicket of thorny issues with the paper. There are numerous aspects of machine learning, especially in its deep learning incarnation, that cannot easily be reconciled with the neat prescriptions of the capability maturity model.

For example, IBM proposes that an "AI Service Data Lead" within a company oversee, at the beginning of the work, what kind of "ground truth" labels are attached to data fed to the machine. But much "unsupervised" machine learning tries to move away from ground truth in the design of neural networks.
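The tension is easy to see in miniature. Below is a sketch (my illustration, not IBM's) contrasting a method that depends on ground-truth labels with one that finds structure without any labels at all:

```python
# Supervised learning needs "ground truth" labels attached to each example;
# unsupervised learning groups the same data with no labels at all.
labeled = [(0.1, "low"), (0.2, "low"), (0.9, "high"), (1.1, "high")]
unlabeled = [x for x, _ in labeled]

def supervised_predict(x, training):
    """Nearest-neighbor lookup against labeled ground truth."""
    return min(training, key=lambda pair: abs(pair[0] - x))[1]

def unsupervised_clusters(xs):
    """A crude one-dimensional two-means: split points around the midpoint
    of the data's range. No labels are consulted anywhere."""
    split = (min(xs) + max(xs)) / 2
    return [x for x in xs if x <= split], [x for x in xs if x > split]

print(supervised_predict(0.15, labeled))  # relies entirely on the labels
print(unsupervised_clusters(unlabeled))   # groups emerge with no labels
```

An "AI Service Data Lead" can meaningfully audit the labels in the first case; in the second, there is no label lineage to oversee, only the data itself.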

Also: IBM Researchers propose transparency docs for AI services

Similarly, a "Training Lead" within the organization is supposed to work with the data lead on the "feature extraction" stage of neural network development, including things such as coming up with "tokenizers." Again, much of deep learning is about automatic feature extraction, as opposed to this kind of hand-crafted work.
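To make concrete what that hand-crafted stage looks like, here is a minimal bag-of-words sketch of the kind of tokenizer-plus-feature-extraction work the paper assigns to a "Training Lead" (the vocabulary and function names are my own, for illustration):

```python
import re

def tokenize(text):
    """A hand-crafted tokenizer: lowercase the text and keep alphabetic runs."""
    return re.findall(r"[a-z]+", text.lower())

def extract_features(text, vocabulary):
    """Manual feature extraction: count occurrences of each vocabulary word.
    A deep learning model would instead learn its own representation of the
    raw text, bypassing this hand-engineered step."""
    tokens = tokenize(text)
    return [tokens.count(word) for word in vocabulary]

vocab = ["invoice", "overdue", "meeting"]
print(extract_features("Overdue invoice: second overdue notice", vocab))
```

Every choice here, from the regular expression to the vocabulary list, is a human decision that a lifecycle process can review; in a deep learning pipeline those choices largely dissolve into learned weights.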

Perhaps the most daunting prospect is that enterprises, in IBM's view, are supposed to be responsible for making sure the neural network is free of bias, a task about which the entire AI community is scratching its head. Among the duties of an "offering manager," who is responsible for developing the neural network, is "ensuring that the model is free of undesirable biases, fair, transparent."
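Even the simplest bias checks illustrate why this duty is so hard to discharge. One common metric (demographic parity; this sketch is illustrative and is not a method from the IBM paper) compares positive-prediction rates across groups, and says nothing about whether any observed gap is "undesirable":

```python
def demographic_parity_gap(predictions):
    """Compute per-group positive-prediction rates and their spread.

    predictions: list of (group, prediction) pairs, prediction in {0, 1}.
    Returns (gap, rates): the max-minus-min rate across groups, and the
    rate for each group.
    """
    by_group = {}
    for group, y in predictions:
        by_group.setdefault(group, []).append(y)
    rates = {g: sum(ys) / len(ys) for g, ys in by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return gap, rates

preds = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
gap, rates = demographic_parity_gap(preds)
print(rates, gap)  # group "a" is approved twice as often as group "b"
```

Computing such a number is trivial; deciding which metric applies, which groups to compare, and what gap counts as "fair" is exactly the unresolved question the AI community is scratching its head over.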

In the end, although the authors sound confident in their suggestions, it seems like machine learning may be just too wild and woolly for the kind of expectations Big Blue proffers.

As they write in the conclusion to the report: "Another reason for hesitation in adopting AI models is that organizations find them to be black-boxes and non-transparent. This is especially true for models trained with deep learning techniques."

Previous and related coverage:

What is AI? Everything you need to know

An executive guide to artificial intelligence, from machine learning and general AI to neural networks.

What is deep learning? Everything you need to know

The lowdown on deep learning: from how it relates to the wider field of machine learning through to how to get started with it.

What is machine learning? Everything you need to know

This guide explains what machine learning is, how it is related to artificial intelligence, how it works and why it matters.

What is cloud computing? Everything you need to know

An introduction to cloud computing right from the basics up to IaaS and PaaS, hybrid, public, and private cloud.
