The Value of Explaining How AI (Artificial Intelligence) Algorithms Work

In today’s era of Big Data, many companies are becoming more interested in artificial intelligence (AI) and its machine learning techniques. Algorithms generated by AI can find relationships in the data that might never even occur to humans. AI can do this without human input: Big Data is fed into the computers, the computers train on that data, and they automatically produce algorithms based on what they have learned. When this is done, how the computers arrive at their recommendations and predictions is often unknown, because it all happens automatically rather than being programmed by humans.
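To make that contrast concrete, here is a minimal sketch of an algorithm that is learned from data rather than written by hand. It is in Python using scikit-learn purely as an illustration; the dataset is synthetic and no particular vendor’s system is implied.

```python
# A minimal illustration of "the algorithm comes from the data":
# no human writes the decision rules; the model infers them from examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for "Big Data": a synthetic table of examples with known labels.
X, y = make_classification(n_samples=5_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "training" step: the computer derives its own decision logic.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The model now makes predictions, but the hundreds of trees it built are
# not something a person can easily read or explain.
print("accuracy:", model.score(X_test, y_test))
```

The point of the sketch is that the decision logic ends up spread across hundreds of automatically built trees, which is exactly why the resulting recommendations can be hard to explain.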

Several years back, when the term Big Data was first coming into widespread use, it was said that computers would soon be developing all the algorithms automatically. Back then, I recognized the need for human intervention, and I blogged my disagreement with the view that computer-created algorithms should soon be widely used. Since then, the value of human input has become far more widely recognized, and there have been many examples of the benefits of combining human intervention with the algorithms. Additionally, without human intervention, computer-generated algorithms sometimes go terribly wrong. Thus, there can be a serious downside to letting computers develop the algorithms with humans left completely out of the process and no explanation available for how the computer-created algorithms work.

The December 17, 2018 Bloomberg Businessweek article titled “Artificial Intelligence Has Some Explaining to Do” discusses this issue. After mentioning several impressive things that AI can do (such as recognizing faces or translating languages), the article points out, “What it can’t always do is explain itself.” According to the article, “It’s hard for humans to trust a system they can’t understand–and, without trust, organizations won’t pony up big bucks for AI software.” The article says, “This is especially true in fields such as healthcare, finance and law enforcement, where the consequences of a bad recommendation are more substantial.”

The article points out that vendors such as IBM are incorporating “explainability.” As the article reports, “IBM’s software can tell a customer the three to five factors that an algorithm weighted most heavily,” and it can tell users where the data came from, so the possibility of biased data can be evaluated. But the article also mentions the tradeoff: “explanations offered by some of these AI software vendors may actually be worse than no explanation at all, because of the nuances lost by trying to reduce a very complex decision to a handful of factors.”
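To give a rough sense of what that kind of explanation looks like in practice, here is a hedged sketch using generic permutation importance in Python with scikit-learn. It is not IBM’s product; the dataset and model are stand-ins chosen only to show the general idea of reporting the handful of factors a model leaned on most heavily.

```python
# A rough illustration of reporting "the factors weighted most heavily."
# This is generic permutation importance, not any vendor's explainability product.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score drops; larger drops mean the model relied on it more.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the top five factors, much as the article describes.
top = np.argsort(result.importances_mean)[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

Notice how this boils the model down to five numbers, which illustrates both the appeal of such summaries and the article’s caution about the nuance that gets lost along the way.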

As I see it, AI explainability is highly worthwhile, while the nuances lost in simplified explanations may or may not be crucial in a given situation. I say this as someone whose work early in my career was that era’s version of what today is called a data scientist. My work back then let me see both the value and the limitations of algorithms. I saw tremendous benefit in going beyond the algorithms and adding a broader understanding that can help explain the data. In fact, much of my life’s work has been devoted to enhancing the explainability of business success that comes from making the right kinds of strategic moves. My Winning Moves website is very much oriented toward explaining what underlies successful business growth strategies and successful change. This kind of understanding can be very valuable in assessing what approaches are likely to work well.

Explainability is so important because algorithms developed by the computer are only as good as the quality of the data on which the AI was trained. If the data does not include enough instances of some attribute of interest, or of some group of people you want to analyze, the computer won’t be able to make good predictions in those areas. In this sense, AI is much like older data-intensive areas, such as traditional survey research, where it has long been important to design a study so the right kind of data will be available. Human intervention can help AI get the right kind of data and can evaluate how the kind of training data used might be affecting the results.
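As one small illustration of the kind of human check this implies, the sketch below simply counts how many training examples each group contributes before the algorithm’s predictions for that group are trusted. The column names, data, and threshold are hypothetical, chosen only to show the idea.

```python
# A simple sanity check a human can run before trusting the algorithm:
# does the training data actually contain enough examples of each group?
# The column names and counts here are hypothetical placeholders.
import pandas as pd

training_data = pd.DataFrame({
    "age_group": ["18-34"] * 900 + ["35-54"] * 950 + ["55+"] * 12,
    "outcome":   [0, 1] * 925 + [1] * 12,
})

# Count how many examples each group contributes.
counts = training_data["age_group"].value_counts()
print(counts)

# Flag groups that are too thin to support reliable predictions.
MIN_EXAMPLES = 100
for group, n in counts.items():
    if n < MIN_EXAMPLES:
        print(f"Warning: only {n} examples for '{group}'; "
              f"predictions for this group may be unreliable.")
```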

Nonetheless, even though human intervention and explainability are so valuable, in some cases it might still be worth exploring fully automated AI to see how well it does. But fully automated, unexplained algorithms should be pursued cautiously, especially when the consequences of getting it wrong are serious. There are good reasons why data analytics has often benefited from combining human input with computer algorithms.

So, in conclusion, adding more explainability to AI has value. It supports the kind of human intervention that has helped get more value from the data.

 

If you’d like a thought-provoking understanding of what underlies successful business growth strategies and successful change, or if you’d like insights about the value of AI explainability, feel free to contact us for presentations or consulting.

