January 23, 2023 - 4 minute read

Robust Intelligence Recognized in Gartner’s 2023 Market Guide for AI Trust, Risk and Security Management

The adoption of AI is accelerating rapidly across industries and use cases, ushering in a new era of intelligent applications that is beginning to reshape traditional software. Regardless of where companies are on the adoption curve, leaders widely recognize the potential impact AI can have on their business.

However, as data science teams develop and deploy machine learning models, they introduce AI risk that can have an outsized business impact when models fail. To mitigate that risk effectively, companies need to continuously validate their data and models. This shift in thinking is spreading through the data science community, as evidenced by companies building strategies and engineering paradigms that ensure machine learning integrity.

Gartner names Robust Intelligence a representative vendor

Gartner also recognizes the importance of managing AI risk and is doing its part to help leaders better understand and navigate AI-related challenges. The analyst firm recently published an update to its Market Guide for AI Trust, Risk and Security Management (TRiSM), which presents a framework composed of four software categories: explainability/model monitoring, privacy, ModelOps, and AI application security.

Gartner AI TRiSM components

Robust Intelligence has been selected by Gartner analysts as a representative vendor in the explainability/model monitoring category. We’re honored to be included on this shortlist of vendors; however, we take a fundamentally different approach to AI risk than other companies in the explainability and model monitoring space.

In a previous post, we discussed some of the gaps we see in model explainability. Our core belief is that for AI to be trusted with critical business decisions, companies need better built-in systems that ensure the outputs of their “black box” algorithmic decision systems are robust and unbiased.

When it comes to model monitoring, the approach most vendors have taken has a fundamental limitation: monitoring only surfaces performance issues once models have already failed in production. This reactive approach evaluates model output against a small set of KPIs and does not help identify vulnerabilities during development. In practice, model monitoring often means 2 a.m. firefighting calls, endless hours customizing internal tooling, and silent errors that put the business and its customers at risk.

Rather than attempting to open the black box or reactively alert on select metrics, Robust Intelligence takes a proactive, test-based approach: continuous validation across the MLOps pipeline to instill end-to-end machine learning integrity.

The merits of continuous validation

ML model failure is ultimately a symptom of any number of underlying issues, including corrupted data, model drift, biased decisions, legal liabilities, software supply chain vulnerabilities, and adversarial input. Most of these issues can be prevented proactively by testing models throughout development and production, not only after a model has been operationalized.
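To illustrate what one such proactive check might look like, here is a minimal sketch of a drift test that compares a live feature distribution against the training data. The thresholds are hypothetical and the tooling is standard scipy/numpy; this is not any particular vendor’s implementation.

```python
# Illustrative sketch: detect distribution drift in a single feature with a
# two-sample Kolmogorov-Smirnov test (hypothetical threshold of p < 0.01).
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(train_col, live_col, p_threshold=0.01):
    """Return True if the live distribution appears to have drifted."""
    stat, p_value = ks_2samp(train_col, live_col)
    return p_value < p_threshold

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 5000)
live = rng.normal(0.5, 1, 500)   # shifted mean simulates drift in production
print(detect_feature_drift(train, live))  # True -> investigate before failure
```

Run during development and on a schedule in production, checks like this catch degradation before it shows up as a failed prediction.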

Robust Intelligence automatically runs hundreds of tests to measure a model’s robustness against performance, fairness, and security failures, the output of which provides guidance on how to optimize models across the lifecycle. We also stop bad data from reaching production models with our AI Firewall. Together, these capabilities provide continuous validation of models and data, enabling data science teams to mitigate AI risk.
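To make the “firewall” idea concrete, here is a hypothetical sketch of an input gate placed in front of a deployed model. It is illustrative only and does not reflect the actual AI Firewall API; all names and thresholds are invented for the example.

```python
# Hypothetical sketch of a firewall-style gate in front of a deployed model:
# reject inputs that are malformed or far outside the training distribution.
import numpy as np

class InputGate:
    """Blocks inference requests whose features look malformed or anomalous."""

    def fit(self, X_train: np.ndarray, tolerance: float = 3.0):
        self.mean = X_train.mean(axis=0)
        self.std = X_train.std(axis=0) + 1e-9  # avoid division by zero
        self.tolerance = tolerance
        return self

    def check(self, x: np.ndarray) -> bool:
        if not np.isfinite(x).all():           # NaNs/Infs never reach the model
            return False
        z = np.abs((x - self.mean) / self.std)
        return bool((z <= self.tolerance).all())  # flag extreme outliers

# Usage sketch: only score requests that pass the gate.
# gate = InputGate().fit(X_train)
# if gate.check(request_features):
#     prediction = model.predict(request_features.reshape(1, -1))
```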

On the Gartner blog, Distinguished VP Analyst Avivah Litan writes, “The AI TRiSM market is still new and fragmented, and most enterprises don’t apply TRiSM methodologies and tools until models are deployed. That’s shortsighted because building trustworthiness into models from the outset – during the design and development phase – will lead to better model performance.”

Continuous validation is already a software engineering best practice, and a similar approach should be applied to machine learning to increase velocity and reduce risk. By integrating automated tests into a CI/CD workflow, companies can instill integrity into their ML systems by identifying performance, fairness, and security issues early.
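As a sketch of what that CI/CD integration could look like, a pytest file like the one below could gate a model artifact before promotion. The thresholds, helper, and toy model are hypothetical; in a real pipeline you would load your own trained artifact and evaluation data.

```python
# test_model_gates.py -- illustrative CI gate tests (hypothetical thresholds
# and a toy model). Run with `pytest` in CI so a failing check blocks the
# merge or deploy step.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def _train():
    # Stand-in for loading your real model artifact and holdout data.
    X, y = make_classification(n_samples=1000, n_features=8, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return model, X_te, y_te

def test_accuracy_floor():
    # Performance gate: block deployment below a minimum accuracy.
    model, X_te, y_te = _train()
    assert accuracy_score(y_te, model.predict(X_te)) >= 0.80

def test_degenerate_input_handling():
    # Robustness/security gate: degenerate input must not crash inference.
    model, X_te, _ = _train()
    degenerate = np.zeros((1, X_te.shape[1]))
    assert model.predict(degenerate).shape == (1,)
```

Wired into the same CI workflow that runs unit tests, checks like these make model quality a merge requirement rather than a post-deployment discovery.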

If you’d like to learn about how leading companies are working with Robust Intelligence to deliver ML integrity through continuous validation, we invite you to talk with an expert.

If you’re a Gartner client, you can access the Market Guide here.

Disclaimer: Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
