Explainable AI: The next stage of human-machine collaboration

Published by Accenture Labs
AUGUST 31, 2018
RESEARCH REPORT
  • Many artificial intelligence applications today are effectively “black boxes,” unable to “explain” the reasoning behind their decisions.
  • As AI expands into areas with a large impact on people, such as health care, it will be critical to subject the technology to greater human scrutiny.
  • Explainable AI won’t replace human workers; rather, it will complement and support people, so they can make better, faster, more accurate decisions.
  • Use cases for Explainable AI, drawn from Accenture Labs research, include detecting abnormal travel expenses and assessing driving style.

The AI stakes are getting higher

Some AI-based services and tasks today are relatively trivial – such as a song recommendation on a streaming music platform.

However, AI is playing an expanding role in other areas with far greater human impact. Imagine you’re a doctor using AI-enabled sensors to examine a patient, and the system produces a diagnosis that demands urgent, invasive treatment.

In situations such as this, an AI-driven decision on its own is not enough. We also need to know the reasons and rationale behind it. In other words, the AI has to “explain” itself by opening up its reasoning to human scrutiny.

The transition to Explainable AI is already underway, and within three years, we expect it to dominate the AI landscape for businesses.

In a new report, Accenture Labs details how we can meet this need by giving AI applications the ability to explain to humans not just what decisions they made, but also why they made them.
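
To make the idea concrete, here is a minimal sketch of one way a decision and its rationale can be surfaced together: an intrinsically interpretable model (a shallow decision tree) flags a travel expense as abnormal and prints the rules behind the flag. The feature names, toy data, and model choice are illustrative assumptions, not details from the report.

```python
# A minimal sketch of an "explainable" expense check.
# Feature names and data are hypothetical, chosen only for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["amount_usd", "days_to_submit", "weekend_submission"]

# Toy expense records: [amount in USD, days until submission, submitted on weekend]
X = [
    [120, 2, 0], [95, 1, 0], [110, 3, 0], [105, 2, 0],   # typical expenses
    [2400, 30, 1], [1800, 25, 1], [2100, 28, 1],         # abnormal expenses
]
y = [0, 0, 0, 0, 1, 1, 1]  # 1 = abnormal

# A shallow tree keeps the learned rules small enough for a person to read.
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The decision itself...
new_expense = [[2250, 27, 1]]
print("Flagged as abnormal:", bool(clf.predict(new_expense)[0]))

# ...and the "why": the human-readable rules the model applied.
print(export_text(clf, feature_names=feature_names))
```

The point is not the specific model but the pairing: the same artifact that produces the decision also yields a rationale a reviewer can inspect, which is the kind of human scrutiny described above.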

Read more here.