
IBM Researchers propose transparency docs for AI services

Like other technologies and industries, artificial intelligence will need to adopt supplier's declaration of conformity documents to build trust. How was that model built exactly?
Written by Larry Dignan, Contributor

IBM Research is proposing that artificial intelligence services should come with a transparency document that outlines their lineage, specifications and directions for use.

Under its Trusted AI effort, IBM Research published a paper that calls for a supplier's declaration of conformity (SDoC) for AI services. This declaration would include information on performance, safety and security.


Such documents already exist in other industries, and although they are often voluntary, these efforts frequently become de facto standards. Think of Energy Star, the U.S. Consumer Product Safety Commission, or bond ratings in the financial industry. An SDoC would outline the safety and product testing performed on an AI service, along with information about the underlying models.

A team of IBM researchers wrote in a paper:

An SDoC for AI services will contain sections on performance, safety, and security. Performance will include appropriate accuracy or risk measures. Safety, discussed as the minimization of both risk and epistemic uncertainty, will include explainability, algorithmic fairness, and robustness to concept drift. Security will include robustness to adversarial attacks. Moreover, it will list how the service was created, trained, and deployed along with what scenarios it was tested on, how it will respond to non-tested scenarios, and guidelines that specify what tasks it should and should not be used for.
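To make that structure concrete, here is a minimal sketch, in Python, of how the sections the researchers describe might be represented. The paper proposes the document's contents, not a schema, so every class and field name below is an illustrative assumption.

# Illustrative sketch only: IBM's paper proposes SDoC contents, not a schema.
# All class and field names here are hypothetical.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Performance:
    accuracy_metrics: Dict[str, float]   # e.g. {"test_accuracy": 0.94}
    risk_measures: Dict[str, float]      # e.g. {"expected_loss": 0.06}

@dataclass
class Safety:
    explainability_method: str           # e.g. "directly interpretable model"
    fairness_checks: List[str]           # bias policies checked, with results
    concept_drift_behavior: str          # expected behavior as data drifts

@dataclass
class Security:
    adversarial_tests: List[str]         # attacks tested and their outcomes

@dataclass
class SDoC:
    """Supplier's declaration of conformity for an AI service (sketch)."""
    service_name: str
    lineage: str                         # how the service was created, trained, deployed
    tested_scenarios: List[str]
    untested_scenario_behavior: str
    intended_uses: List[str]
    prohibited_uses: List[str]
    performance: Performance
    safety: Safety
    security: Security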

In theory, these documents would also enable a more liquid AI service marketplace and bridge information gaps between consumers and suppliers. IBM Research said that SDoCs should be voluntary.

Another outcome from SDoCs would be more trust in AI. A consumer trusts that the brakes will work on a car and that autopilot will operate well in an airplane. That trust is built on standardization, transparency and testing. AI services lack that trust today, and IBM Research noted that "consumers do not yet trust AI like they trust other technologies."


IBM Research added:

Making technical progress on safety and security is necessary but not sufficient to achieve trust in AI, however; the progress must be accompanied by the ability to measure and communicate the performance levels of the service on these dimensions in a standardized and transparent manner. One way to accomplish this is to provide such information via SDoCs for AI services.

An SDoC for AI services would address questions like the following (a sketch of how the answers might be recorded appears after the list):

  • Does the dataset used to train the service have a datasheet or data statement?
  • Were the dataset and model checked for biases? If yes, describe the bias policies that were checked, the bias checking methods, and the results.
  • Was any bias mitigation performed on the dataset? If yes, describe the mitigation method.
  • Are algorithm outputs explainable/interpretable? If yes, explain how the explainability is achieved (e.g. directly explainable model, local explainability, explanations via examples).
  • Who is the target user of the explanation (machine learning expert, domain expert, general consumer, regulator, etc.)?
  • Was the service tested on any additional datasets? Do they have a datasheet or data statement? If yes, describe the testing methodology.
  • Was the service checked for robustness against adversarial attacks? If yes, describe the robustness policies that were checked, checking methods, and results.
  • Is usage data from service operations retained/stored/kept?
  • What is the expected behavior if the data distribution deviates from the training distribution?
  • What kind of governance is employed to track the overall workflow of data to AI service?
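As a rough illustration of how a supplier might work through that checklist, the answers could be recorded as structured key/value pairs so that unanswered questions are easy to flag. The keys and sample answers below are hypothetical, not drawn from IBM's paper.

# Hypothetical checklist bookkeeping; keys and answers are illustrative.
REQUIRED_QUESTIONS = [
    "training_data_datasheet",      # does the training set have a datasheet?
    "bias_checks",                  # policies checked, methods, results
    "bias_mitigation",              # mitigation method, if any
    "explainability",               # how explanations are achieved
    "explanation_audience",         # ML expert, domain expert, consumer, regulator
    "additional_test_datasets",     # extra datasets and their datasheets
    "adversarial_robustness",       # robustness policies, methods, results
    "usage_data_retention",         # is operational usage data retained?
    "distribution_shift_behavior",  # expected behavior under data drift
    "data_governance",              # workflow governance from data to service
]

sdoc_answers = {
    "training_data_datasheet": "Yes; a datasheet ships with the training set.",
    "bias_checks": "Disparate impact checked across two protected attributes.",
    "explainability": "Local explanations generated via representative examples.",
    "distribution_shift_behavior": "Service abstains when input drift is detected.",
}

def missing_answers(answers: dict, required: list) -> list:
    """Return required SDoC questions that have no recorded answer yet."""
    return [q for q in required if q not in answers]

print(missing_answers(sdoc_answers, REQUIRED_QUESTIONS))
# Prints the six questions this supplier still has to address.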

There is still plenty of discussion to be had on the SDoC concept, but such a movement would add transparency to the AI market. After all, business leaders will increasingly have to manage models they must trust yet don't fully understand.
