May 4, 2023
-
4
minute read

New Capabilities to Stay Ahead of AI Risk

Product Updates

As the first end-to-end solution to protect organizations from AI risk, Robust Intelligence is helping companies implement a new paradigm necessary to instill integrity in their AI systems. Our platform continuously validates models and data across the AI lifecycle using hundreds of automated tests to proactively identify and mitigate the three categories of AI risk:

The evolution of AI risk

Our co-founders started Robust Intelligence in 2019 after more than a decade of robust machine learning research with the mission to eliminate AI risk. In partnership with our amazing customers and partners, our team has continued to innovate and deliver unmatched value. The last several years have seen incredible gains in AI technology and adoption, which has been further amplified by recent advancements in generative AI and large language models (LLMs). The world of AI, and in turn the risks associated with it, is evolving and being implemented at an exponential rate.

Several recent developments in AI have increased the level of unmanaged risk facing organizations. Business leaders recognize that they need a standard for AI risk prevention by instilling integrity across all of their models. The following factors have reframed AI Integrity as a “must have” rather than a “nice to have:”

  1. New risk from AI advancements. Recent advancements in AI (such as generative AI and other foundation models) have made it harder for organizations to understand exactly what they’re deploying. For example, assessing predictions of a ChatGPT model versus a regression model is significantly more complicated. As machine learning models become more complex and integrated into various industries, it is crucial to ensure that they perform appropriately in production.
  2. The proliferation of third-party models. Commercial and open-source tools have made sophisticated models widely accessible to companies. Gone are the days of teams of PhDs developing such proprietary models in-house. Companies now compile models built on top of public models, which in turn were built using common libraries and training data, making it exceptionally difficult to tell what has gone into creating the software. It also subjects organizations to AI supply chain risk.
  3. Rapidly evolving AI regulation. In response to the widespread adoption of AI, federal and state agencies have begun to examine if existing regulations already cover AI and machine learning applications. Many are in the process of formulating new guidelines. Legislators in the US, UK, and EU are also considering new laws intended to ensure public safety in the era of AI. Given the EEOC’s increased focus on AI bias and the recent civil rights class action lawsuit filed against Workday, many other public accusations are sure to come.

With these growing and pressing complexities and concerns around AI advancements and regulation, Robust Intelligence continues to expand the capabilities of our product to mitigate new risks to ensure the safe implementation and use of AI.

How we can help

Robust Intelligence continues to push the boundaries of what our product offers in order to help companies meet the growing demands of these new advancements and an ever increasing exposure to AI risk.

The following capabilities highlight how our product addresses today’s most pressing AI challenges, enabling companies to confidently leverage the latest in AI while protecting against the associated risk.

Third-party models: The rise in commercial and open-source models, including generative AI and LLMs, makes sophisticated AI widely available but also necessitates that organizations de-risk these models before use. Robust Intelligence brings continuous validation to third-party models to ensure their safe use.

  1. Generative AI model testing: While traditional ML models have their own set of risks, the introduction of generative AI and LLMs bring forth new challenges that must be addressed. Organizations must ensure that they prevent incorrect and toxic outputs, detect and remediate security vulnerabilities, and ensure that models meet regulatory standards. Though these risks are novel, they are addressed by Robust Intelligence’s approach to security, ethical, and operational AI risk prevention.
  2. AI Risk Database. Repositories like Hugging Face and PyTorch Hub have made sophisticated models widely accessible to organizations; however, it's incredibly difficult to assess any given model for security, ethical, and operational risks. To that end, we released a free, community-supported resource to evaluate AI supply chain risk in public models. The database includes comprehensive test results and corresponding risk scores for over 170,000 models, as well as a growing number of vulnerability reports submitted by AI and cybersecurity researchers.

Additional AI governance features: Due to the growing risks associated with new developments in AI systems and increasing regulatory oversight, Robust Intelligence is continuing to develop our governance features to help organizations align with new demands in AI system governance.

  1. Our governance dashboard provides a high level view of the status of all of an organization’s AI models in production, allowing projects to be tracked against key metrics, providing a central window into all model activity. Health status and variance across time of business metrics are available for models in production to help evaluate model risk against business risk. This provides managers and executives with easily digestible information about each of their models and their associated risks.
  2. Auto-generated documentation, in addition to being useful for regulatory reporting, is useful for internal reporting needs, communication between data scientists and business operations, and in order to have a documented change log of decisions made about the AI system and its behavior.

Regulatory alignment: The rapidly evolving landscape of AI regulation can be difficult for organizations to follow. Many of the frameworks and guidelines include vague language, which makes it difficult to ensure compliance. Robust Intelligence helps companies stay up-to-date on the latest regulatory developments and ahead of requirements by providing:

  1. Policy checklists to help outline the requirements and steps to being compliant or aligning with regulatory frameworks and guidelines (for example, the NIST AI Risk Management Framework).
  2. Mappings of our testing framework to industry specific regulation to help customers streamline compliance with regulation.
  3. Model card builder to automate model documentation and reporting in order to meet auditing and regulatory reporting requirements (for example, the NYC Local Law 144 bias audit requirement).

To learn more about how our capabilities help to instill integrity in AI systems, request a product demo here.

May 4, 2023
-
4
minute read

New Capabilities to Stay Ahead of AI Risk

Product Updates

As the first end-to-end solution to protect organizations from AI risk, Robust Intelligence is helping companies implement a new paradigm necessary to instill integrity in their AI systems. Our platform continuously validates models and data across the AI lifecycle using hundreds of automated tests to proactively identify and mitigate the three categories of AI risk:

The evolution of AI risk

Our co-founders started Robust Intelligence in 2019 after more than a decade of robust machine learning research with the mission to eliminate AI risk. In partnership with our amazing customers and partners, our team has continued to innovate and deliver unmatched value. The last several years have seen incredible gains in AI technology and adoption, which has been further amplified by recent advancements in generative AI and large language models (LLMs). The world of AI, and in turn the risks associated with it, is evolving and being implemented at an exponential rate.

Several recent developments in AI have increased the level of unmanaged risk facing organizations. Business leaders recognize that they need a standard for AI risk prevention by instilling integrity across all of their models. The following factors have reframed AI Integrity as a “must have” rather than a “nice to have:”

  1. New risk from AI advancements. Recent advancements in AI (such as generative AI and other foundation models) have made it harder for organizations to understand exactly what they’re deploying. For example, assessing predictions of a ChatGPT model versus a regression model is significantly more complicated. As machine learning models become more complex and integrated into various industries, it is crucial to ensure that they perform appropriately in production.
  2. The proliferation of third-party models. Commercial and open-source tools have made sophisticated models widely accessible to companies. Gone are the days of teams of PhDs developing such proprietary models in-house. Companies now compile models built on top of public models, which in turn were built using common libraries and training data, making it exceptionally difficult to tell what has gone into creating the software. It also subjects organizations to AI supply chain risk.
  3. Rapidly evolving AI regulation. In response to the widespread adoption of AI, federal and state agencies have begun to examine if existing regulations already cover AI and machine learning applications. Many are in the process of formulating new guidelines. Legislators in the US, UK, and EU are also considering new laws intended to ensure public safety in the era of AI. Given the EEOC’s increased focus on AI bias and the recent civil rights class action lawsuit filed against Workday, many other public accusations are sure to come.

With these growing and pressing complexities and concerns around AI advancements and regulation, Robust Intelligence continues to expand the capabilities of our product to mitigate new risks to ensure the safe implementation and use of AI.

How we can help

Robust Intelligence continues to push the boundaries of what our product offers in order to help companies meet the growing demands of these new advancements and an ever increasing exposure to AI risk.

The following capabilities highlight how our product addresses today’s most pressing AI challenges, enabling companies to confidently leverage the latest in AI while protecting against the associated risk.

Third-party models: The rise in commercial and open-source models, including generative AI and LLMs, makes sophisticated AI widely available but also necessitates that organizations de-risk these models before use. Robust Intelligence brings continuous validation to third-party models to ensure their safe use.

  1. Generative AI model testing: While traditional ML models have their own set of risks, the introduction of generative AI and LLMs bring forth new challenges that must be addressed. Organizations must ensure that they prevent incorrect and toxic outputs, detect and remediate security vulnerabilities, and ensure that models meet regulatory standards. Though these risks are novel, they are addressed by Robust Intelligence’s approach to security, ethical, and operational AI risk prevention.
  2. AI Risk Database. Repositories like Hugging Face and PyTorch Hub have made sophisticated models widely accessible to organizations; however, it's incredibly difficult to assess any given model for security, ethical, and operational risks. To that end, we released a free, community-supported resource to evaluate AI supply chain risk in public models. The database includes comprehensive test results and corresponding risk scores for over 170,000 models, as well as a growing number of vulnerability reports submitted by AI and cybersecurity researchers.

Additional AI governance features: Due to the growing risks associated with new developments in AI systems and increasing regulatory oversight, Robust Intelligence is continuing to develop our governance features to help organizations align with new demands in AI system governance.

  1. Our governance dashboard provides a high level view of the status of all of an organization’s AI models in production, allowing projects to be tracked against key metrics, providing a central window into all model activity. Health status and variance across time of business metrics are available for models in production to help evaluate model risk against business risk. This provides managers and executives with easily digestible information about each of their models and their associated risks.
  2. Auto-generated documentation, in addition to being useful for regulatory reporting, is useful for internal reporting needs, communication between data scientists and business operations, and in order to have a documented change log of decisions made about the AI system and its behavior.

Regulatory alignment: The rapidly evolving landscape of AI regulation can be difficult for organizations to follow. Many of the frameworks and guidelines include vague language, which makes it difficult to ensure compliance. Robust Intelligence helps companies stay up-to-date on the latest regulatory developments and ahead of requirements by providing:

  1. Policy checklists to help outline the requirements and steps to being compliant or aligning with regulatory frameworks and guidelines (for example, the NIST AI Risk Management Framework).
  2. Mappings of our testing framework to industry specific regulation to help customers streamline compliance with regulation.
  3. Model card builder to automate model documentation and reporting in order to meet auditing and regulatory reporting requirements (for example, the NYC Local Law 144 bias audit requirement).

To learn more about how our capabilities help to instill integrity in AI systems, request a product demo here.

Blog

Related articles

November 1, 2021
-
3
minute read

IWI Uses RIME to Help Secure the Japanese Online Payments Market

For:
March 3, 2023
-
4
minute read

Effective AI Governance with Robust Intelligence

For:
April 2, 2024
-
3
minute read

Robust Intelligence Partners with CrowdStrike to Bring Our Real-Time AI Security Telemetry to Falcon LogScale

For:
No items found.