Explainable Artificial Intelligence Explained

Expert.ai Team - 5 November 2020

Many have said that questions of “why” are best left to philosophers, while scientists are best equipped to tackle questions of “how.” Though these fields of study are vastly different, they do frequently intersect, and at times collide, within the field of artificial intelligence (AI).

“I don’t know why it works, but it works” doesn’t actually work when we talk about AI. It may pass muster when discussing a recent home DIY project, but when it comes to technology that fundamentally impacts your business trajectory (and possibly the lives of others), you need a better explanation.

It’s this expectation that has given rise to explainable artificial intelligence (XAI). With transparency built into your process, both you and the end user better understand the results. This enables you to answer difficult questions like:

  • Why did the software used by the cardiac surgery department select Mr. Rossi out of hundreds of people on the list for a heart transplant?
  • Why did the autopilot divert the vehicle off the road, injuring the driver but avoiding the mother pushing a stroller who was crossing the street?
  • Why did the monitoring system at the airport choose to search Mr. Verdi instead of other passengers?

The need to understand the logic behind an AI system is clear, but it is worth going deeper to see what XAI offers compared with an opaque, unexplained alternative.

Where AI Struggles

AI presents many dilemmas for us, and the answers to them are neither immediate nor unambiguous. However, we believe we can and should address both the “how” and the “why” behind each instance. After all, one of the most frequent comments we hear is, “To trust AI, I have to better understand how it works.”

The wave of excitement around AI, fueled in no small part by the marketing investments of major IT players, has begun to level out. Expectations about the real uses and benefits of AI have come back in line with reality following resounding failures of software that promised to learn perfectly and automatically (and magically), and therefore to reason at a human level.

In many situations today, AI is not much different from traditional software: a computer program processes input according to precise, pre-coded instructions (the so-called source code) and returns an output. The difference compared to 20 years ago is far greater computing power (think supercomputers) and a much larger volume of inputs (think big data).

Why Explainable AI Matters

To understand how AI works, you need to understand how the software works. You need to know whether or not it operates with any prejudice or bias. In other words, is an output the result of an input or is it predetermined (regardless of the input)?

Questions like these can be tackled by an XAI system. In this practice, the mechanisms and criteria by which the software reasons and produces its results must be clear and evident. With this transparency, you can begin to build a foundation of explainable knowledge.
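
To make this concrete, here is a minimal, generic sketch of a model that is explainable by design, in the sense that its decision criteria can be printed and reviewed. It uses scikit-learn with invented feature names and data, and it is not a description of expert.ai’s own technology; the point is simply that the learned rules are open to inspection.

```python
# A minimal sketch of a model that is transparent by construction: a shallow
# decision tree whose learned rules can be printed and reviewed by a person.
# The feature names and data below are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical triage records: [age, severity_score] -> refer (1) / do not refer (0)
X = [[34, 2], [71, 8], [55, 5], [29, 1], [64, 9], [48, 3]]
y = [0, 1, 1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# The resulting rules can be read, questioned and corrected by a domain
# expert, which is exactly what an opaque model does not allow.
print(export_text(model, feature_names=["age", "severity_score"]))
```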

Models that rely exclusively on machine learning tend to lack an appropriate level of internal transparency. As a result, the AI community has labeled these systems black box AI. This opacity runs directly counter to the principles of explainable and ethical AI, and it is why many issues of bias become amplified beyond control.

The goal of explainable and ethical AI is not just to mitigate bias, but to understand where the bias stems from. Bias can be the result of several factors, including who programmed the software, the dataset used to train the algorithm, or even a cyber attack. The quicker you can pinpoint the issue, the quicker you can begin to optimize the model and limit the negative consequences.
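
As an illustration of how that pinpointing might start, the sketch below compares selection rates across a sensitive attribute in a dataset. The column names and data are assumptions made for the example; a gap like this does not prove discrimination, but it tells you where to look first.

```python
# A hedged sketch of one basic bias check: compare positive-outcome rates
# across a (hypothetical) sensitive attribute in the training data or in the
# model's decisions. Column names and values are invented for illustration.
import pandas as pd

data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "selected": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Selection rate per group; a large gap is a signal to inspect the training
# data, the features or the labeling process more closely.
rates = data.groupby("group")["selected"].mean()
print(rates)
print("Disparity (max - min):", rates.max() - rates.min())
```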

In the end, “I don’t know why it works, but it works” is not a sustainable approach to AI. So if you’re going to build AI the right way, do it from the start, with explainability at the top of your priority list.