
IBM joins Linux Foundation AI to promote open source trusted AI workflows

As AI spreads like wildfire through the enterprise, IBM is stepping up efforts to promote open source tools for building fair, robust and explainable AI systems.
Written by Stephanie Condon, Senior Writer

AI is advancing rapidly within the enterprise -- by Gartner's count, more than half of organizations already have at least one AI deployment in operation, and they plan to substantially accelerate their AI adoption within the next few years. At the same time, the organizations building and deploying these tools have yet to grapple with the flaws and shortcomings of AI -- whether the models they deploy are fair, ethical, secure, or even explainable.

Before the world is overrun with flawed AI systems, IBM is aiming to rev up the development of open-source trusted AI workflows. As part of that effort, the company is joining the Linux Foundation AI (LF AI) as a General Member. 

"AI, as it matures, needs to mature in a way that is something that the general public can put their confidence and trust in," Todd Moore, IBM's VP of Open Technology, told ZDNet. "Too often, what we hear is the AI is a black box, they don't understand how it got to its results, there's bias in the models, there needs to be more fairness... We've heard that loud and clear, and we felt it was time to help the industry move forward."

Also: How to differentiate between AI, machine learning, and deep learning (TechRepublic)

As a Linux Foundation project, the LF AI Foundation provides a vendor-neutral space for the promotion of Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) open-source projects. It's backed by major organizations like AT&T, Baidu, Ericsson, Nokia, and Huawei. 

IBM has a long history of supporting open source, and Moore explained why it's the right way to quickly raise the bar when it comes to building trustworthy AI. "To get all of us working together, iterating quickly, can cover a lot more ground than any single company can," he said. 

On top of that, supporting open source projects has the added benefit of expanding the market opportunity for AI vendors like IBM. The goal, Moore said, is to build tools that improve the credibility of AI -- and "to do it together, in a way that everybody can inspect and contribute to." 

By joining LF AI, IBM is aiming to bring trusted AI techniques to all of the foundation's projects. The company will work with LF AI's committees to create reference architectures and best practices for using open source tools in production. 

IBM has already spearheaded efforts on this front with a series of open-source toolkits designed to help build trusted AI. The AI Fairness 360 Toolkit allows developers and data scientists to detect and mitigate unwanted bias in machine learning models and datasets. The Adversarial Robustness 360 Toolbox is an open-source library that helps researchers and developers defend deep neural networks from adversarial attacks. Meanwhile, the AI Explainability 360 Toolkit provides a set of algorithms, code, guides, tutorials, and demos to support the interpretability and explainability of machine learning models.
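
To give a sense of what a bias check with one of these toolkits looks like in practice, here is a minimal sketch using the open-source AI Fairness 360 library. The toy hiring DataFrame, its column names, and the group definitions are illustrative assumptions for this example, not IBM's code or data.

```python
# Minimal sketch of a dataset bias check with AI Fairness 360 (aif360).
# The toy data and column names below are hypothetical, for illustration only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy hiring data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the binary outcome a model would be trained to predict.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.7, 0.8, 0.4, 0.9, 0.6, 0.5, 0.3],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Statistical parity difference: P(favorable | unprivileged) - P(favorable | privileged).
# A value near 0 suggests the two groups receive favorable outcomes at similar rates.
print("Statistical parity difference:", metric.statistical_parity_difference())

# Disparate impact: ratio of favorable-outcome rates; the common "80% rule"
# flags values below 0.8 as a potential sign of bias.
print("Disparate impact:", metric.disparate_impact())
```

If a metric like disparate impact falls outside an acceptable range, the same toolkit offers mitigation algorithms (such as reweighing the training data) that can be applied before or during model training.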

IBM has also been working with the LF AI Foundation on an informal basis, participating in events and contributing to projects like the foundation's Technical Advisory Committee's ML Workflow project.

The work of creating trusted AI is in nascent stages, but Moore said, "The good thing is it's started, the problem has been recognized. It's up to us to build the de facto standards and create the tools to help people."

