April 28, 2022 - 4 minute read

Why Model Validation Can End the AI “Explainability Crisis”

Perspectives

In March of this year, Fortune magazine published an article about the so-called “explainability crisis” in artificial intelligence.

A key and long-standing critique of AI practice concerns the "black box" problem: other than the inputs and outputs, it is nearly impossible to "see" or explain how a machine learning model reaches its decisions. This lack of explainability makes it very difficult for stakeholders to understand the mechanisms behind AI decision-making and the risks it poses to affected populations.

These criticisms have prompted regulators to invest in ways to make AI decision-making more transparent. This demand for AI transparency has been dubbed the “explainable AI” movement.

Explainability has been referenced as a guiding principle for AI development, including in Europe's General Data Protection Regulation. Explainable AI has also been a major research focus of the Defense Advanced Research Projects Agency (DARPA) since 2016. After years of research and application, however, trustworthy and controllable AI has proved difficult to achieve.

As such, the supposed ‘unexplainability’ of AI has made people question whether artificial intelligence can be used in high-risk scenarios, especially in key areas such as healthcare, finance, and government.

So, if the ‘explainability crisis’ can’t be solved with ‘easy explanations’ - what is the solution?

Validation rather than explanation

Rather than striving to further "de-black box" AI for companies and businesses with failure-prone or ambiguous explanation methods, Kahn, the author of the Fortune article, argued in favor of another answer - making the algorithms stronger and more performant.

Researchers from Harvard, MIT, Carnegie Mellon, and Drexel University discovered that "in real-world settings, most people and companies using algorithms simply picked the explanation that best conformed to their pre-existing ideas." This finding points to what fuels the explainability crisis: much of what we treat as an explanation is not founded on sound reasoning.

The researchers concluded that rather than focusing on explanations, companies should test their models. “In other words, what we should care about when it comes to AI in the real world is not explanation. It is validation.”

After all, as AI finds footholds in essentially every industry, these systems and platforms need to function at a high (and reliable) level in the first place.

If companies had better built-in systems for ensuring that the outputs of their "black box" algorithmic decision systems are robust and unbiased, those systems would be better trusted by end users and regulators alike.
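
To make this concrete, here is a minimal sketch of what one such built-in check might look like: it treats the model purely as a black box and compares positive-prediction rates across two subgroups. Everything here (the toy model, the `group` attribute, the 0.1 threshold) is an illustrative assumption, not any particular vendor's API.

```python
# A minimal sketch (not any vendor's actual API) of a built-in bias check
# on a black-box model. Only inputs and outputs are inspected; `predict`,
# `group`, and the 0.1 threshold are hypothetical placeholders.
import numpy as np

def demographic_parity_gap(predict, X, group):
    """Absolute difference in positive-prediction rates between two subgroups."""
    preds = predict(X)                 # black-box call: internals are never examined
    rate_a = preds[group == 0].mean()  # positive rate for subgroup 0
    rate_b = preds[group == 1].mean()  # positive rate for subgroup 1
    return abs(rate_a - rate_b)

# Toy example: a "model" that thresholds one feature unrelated to group membership.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
group = rng.integers(0, 2, size=1000)

def toy_predict(X):
    return (X[:, 0] > 0).astype(int)

gap = demographic_parity_gap(toy_predict, X, group)
print(f"parity gap = {gap:.3f} -> {'pass' if gap < 0.1 else 'fail'}")
```

In practice such checks would cover many more metrics, but the point is that they inspect only inputs and outputs - no explanation of the model's internals is required.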

What's more, explainability entails risks - explanations may be framed in a misleading way, may be outright deceptive, or may be exploited by nefarious actors. Explanations can also be used to infer information about the model or its training data, raising privacy concerns.

Explainability may also make it easier for proprietary models to be replicated, exposing research to competitors. Better methods for documenting and mitigating these risks are needed - and this is where Robust Intelligence can help.

Robust Intelligence as a method for validation

Robust Intelligence's RI Model Engine (RIME) offers a solution for model validation. RIME stress-tests models before they go into production to uncover any errors or biases, and then continuously tests them once they are in production to ensure no new ones arise.
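
As a rough illustration of that two-phase pattern - stress-testing before deployment, continuous checks afterwards - the sketch below uses a generic scikit-learn classifier. It is an assumption-laden outline, not RIME's actual interface; the noise scale, accuracy threshold, and drift heuristic are placeholders.

```python
# A rough sketch of the two-phase pattern: stress-test before deployment,
# then keep checking in production. Illustrative only; not RIME's interface.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X_train = rng.normal(size=(2000, 5))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# Phase 1: pre-deployment stress test, e.g. accuracy under input noise.
def stress_test(model, X, y, noise_scale=0.3, min_accuracy=0.8):
    X_noisy = X + rng.normal(scale=noise_scale, size=X.shape)
    acc = accuracy_score(y, model.predict(X_noisy))
    return acc >= min_accuracy, acc

passed, acc = stress_test(model, X_train, y_train)
print(f"stress test {'passed' if passed else 'FAILED'}: accuracy under noise = {acc:.2f}")

# Phase 2: continuous production checks, e.g. a simple mean-shift drift alarm.
def drift_alarm(X_reference, X_live, threshold=0.25):
    shift = np.abs(X_live.mean(axis=0) - X_reference.mean(axis=0))
    return bool((shift > threshold).any()), shift

X_live = X_train[:200] + 0.4  # simulate shifted production traffic
drifted, shift = drift_alarm(X_train, X_live)
print(f"drift detected: {drifted} (per-feature mean shift: {np.round(shift, 2)})")
```

The design point is that both phases are automated tests of model behavior, so they can gate deployment and trigger alerts without requiring anyone to interpret the model's internals.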

In other words, rather than having companies try to explain to their consumers what goes into algorithmic decision making - an explanatory process which is confusing and subject to error even in the best of times - what if they simply ensured their models were robust in the first place and stayed that way?

This would mean fewer mistakes along the pipeline, with production hazards and errors caught before they do damage.

If you want to know more, request a demo!
