What is Explainable AI? A Brief History of Artificial Intelligence

Hani Hagras – Chief Science Officer, Temenos

Artificial Intelligence (AI) has the potential to transform how banks operate and the services they provide for customers. But not all AI is alike. In this blog, we explore what Explainable AI is, give a brief overview of the history of artificial intelligence, and explain why Explainable AI is better suited to banking applications than other forms of “black box” AI.

What is Explainable AI?

Explainable AI, often shortened to XAI, is a branch of artificial intelligence that values clearly defined, understandable AI processes as highly as the results themselves. Builders of XAI programs seek to create machines whose decision-making processes are transparent.

History of Artificial Intelligence

The term “artificial intelligence” was coined by John McCarthy in 1955 as part of his proposal for an academic summit he organized on the subject. McCarthy spent decades writing about, and experimenting with, artificial intelligence, and encouraged people to wrestle with the implications of this question: what if we could build a machine that could think like us? One of his most important quotes on the subject comes from his 1979 article “Ascribing Mental Qualities to Machines”:

Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem-solving performance.

To understand how we arrived at this revolutionary concept, it’s helpful to go back further to trace the history and evolution of machines.

1st Industrial Revolution

During the 1st Industrial Revolution (~1760-1840), energy-transferring tools began to proliferate on a mass scale. For example, heating water produced steam, which could drive an engine to power machinery or move a vehicle. Instead of spending all day digging a ditch, it was more productive to build a machine that could do it in a tenth of the time, for the rest of its usable life. The advances of this era significantly increased the productivity of individuals. But what if these tools could power other, more powerful and productive, machines?

2nd Industrial Revolution

During the 2nd Industrial Revolution (~1841-1950), humans found ways to do exactly that. Energy-transferring tools were used to create even more productive and specialized tools, like the assembly line. At the turn of the century, significant advances made it possible not just to generate and store electricity, but also to use it as a power source for machines. However, these tools could still perform only one function. What if we could build machines able to perform multiple functions?

3rd Industrial Revolution

Humans answered that question during the 3rd Industrial Revolution (~1951-2000). With the advent of mainframe computers and programming, humans were able to build machines that could perform multiple functions, yielding different outputs depending on their inputs (the classic “if” statement is a great example, as sketched below). With computers we created complex programs that could arrive at many different outputs given an almost endless number of inputs. However, computers were still limited by one thing: the logic their creators could devise. What if we could create a computer that not only created its own logic, but could continue to optimize that logic over time without human intervention?
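As a purely illustrative sketch (in Python, with invented names and values), the short function below shows the kind of input-dependent logic described above: one program, many possible outputs, but every branch written in advance by a human.

# One program, many outputs: which branch runs depends on the input,
# but a human author wrote every rule ahead of time.
def classify_temperature(celsius: float) -> str:
    if celsius < 0:
        return "freezing"
    elif celsius < 20:
        return "cool"
    return "warm"

print(classify_temperature(-5))   # freezing
print(classify_temperature(25))   # warm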

4th Industrial Revolution

Currently we sit in the 4th Industrial Revolution (~2000-?), where scientists and engineers are building programs that can build and learn on their own: artificial intelligence. Yes, humans still have to create the initial logic of these programs, but the goal is that eventually computers can learn and improve quickly without intervention. While still in its infancy, the opportunities (and yes, the dangers) presented by AI seem almost endless, so long as humans are able to tweak the machines as they are being perfected. That is why Explainable AI is so important.

Why XAI is more suited to Banking Applications than Black Box Artificial Intelligence

The use of complex AI algorithms such as deep learning, random forests, and support vector machines (SVMs) can produce opaque “black box” models. Such models cannot explain why the system made a decision; they simply provide an answer that the user can take or leave. The transparency problem is not specific to deep learning or other complex models: AI systems such as kernel machines, linear or logistic regressions, and decision trees can also become very difficult to interpret when their inputs are high-dimensional.

“Black box” risk arises when the steps algorithms take cannot be traced and the decisions they reach cannot be explained. Excluding humans from AI-driven processes weakens oversight and could threaten the integrity and acceptance of the AI models. Furthermore, with “black box” models it is very difficult to understand where the AI system “went wrong” and then make improvements.
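To make the contrast concrete, here is a minimal sketch in Python using scikit-learn, with invented loan data and feature names (this is not a Temenos product or the author's method). It trains a small, inherently interpretable decision tree and prints the human-readable rules behind each decision.

from sklearn.tree import DecisionTreeClassifier, export_text

# Invented example data: [annual income in thousands, debt-to-income ratio]
X = [[30, 0.40], [85, 0.10], [45, 0.35], [90, 0.05],
     [25, 0.50], [70, 0.20], [55, 0.30], [95, 0.15]]
y = [0, 1, 0, 1, 0, 1, 1, 1]  # 0 = decline, 1 = approve

# A shallow decision tree: its decisions follow explicit, inspectable
# rules rather than opaque learned weights.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# export_text renders the learned rules, so a lending decision can be
# traced from the input data through to the outcome.
print(export_text(model, feature_names=["income_k", "dti_ratio"]))
print(model.predict([[40, 0.45]]))  # trace this applicant through the rules

A deep neural network fitted to the same data might score as well or better, but it could not print its reasoning this way; that traceability gap is exactly what XAI sets out to close.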

The most prominent risks for banks using AI include bias in the data fed into AI systems, which could result in decisions that unfairly disadvantage individuals or groups of people, for example through discriminatory lending.

The European Banking Association recommends that “Banks must be able to fully explain any AI-driven decision that affects customers or other individuals who provide data” and that “the steps leading to a decision should be able to be tracked from initial data gathering through to the actual decision.”

Today, the risks of trusting AI outputs that cannot be explained are too great. Very few people are willing to stake their organization or business on outputs they can’t trust or verify. At Temenos, we see Explainable AI as the safe and powerful way for banks to transform their customer service and financial services operations.

Using a “black box” AI model is like owning a very nice car that runs well when you buy it, but that nobody can fix when it breaks down, because nobody knows how it works; if the model fails, you have to retrain it from scratch (that is, buy a new car). XAI, on the other hand, is like a car that performs just as well but whose workings we clearly understand: we can easily repair it when it malfunctions and can always tune it to satisfy the user. The XAI system therefore gives the model user (the driver) confidence and makes it possible to adjust the model to any unseen situation.

To learn more, watch our recent webinar, “Explainable AI – Not Just Desirable but Imperative,” in which Janet Adams and Hani Hagras discuss why banks need to be leveraging XAI to drive digital transformation.
