November 14, 2023
-
4
minute read

AI Governance Policy Roundup (November 2023)

Regulation Spotlight

As the AI policy landscape continues its rapid evolution, many are having a difficult time navigating the complex amalgamation of frameworks, regulations, executive orders, and legislation. We’re installing a new monthly AI Governance Policy Roundup series to help you cut through the noise with a need-to-know snapshot of recent domestic and international updates.

We hope this will be a helpful resource as you develop your AI governance strategy. As an end-to-end AI risk management platform, Robust Intelligence can help you automate and operationalize the relevant policies to ensure compliance. Read on to learn more and contact us if you’d like to dive deeper into any specific AI governance policy.

November 2023 Roundup

As the use and development of generative AI models and applications has proliferated over the past year, national governments have been moving quickly to react with guidelines for safe, secure, and trustworthy use and development of this technology. Below is a curated list of notable updates from government agencies and organizations from the last month.

Domestic

The Biden-Harris Administration signed a new executive order on October 30th advancing a federal government-wide approach to governing AI risk. This will substantially change the way AI is used across all sectors of the government and will have a ripple effect on private sector. Below we highlight some of the key elements of this new executive order.

  • Requiring developers of large AI systems to share safety test results (results from red-teaming) for high-risk applications (enforced under the Defense Production Act)
  • Mandates for agencies across the federal government to create standards for testing and reporting on AI models, including a mandate for NIST to develop AI red team guidelines as part of a new United States AI Safety Institute

However, the new executive order on AI and NIST’s voluntary risk management framework cannot standalone. Forthcoming legislation proposed in Congress will ultimately be required in order to ensure the safe, secure, and trustworthy use and development of AI (note: executive orders are subject to be overturned as presidential administrations change). We are seeing this activity happening in Congress already:

NIST’s Public Working Group on Generative AI is diligently moving forward in creating a companion of the AI Risk Management Framework (that was initially released at the beginning of this year) to account for new and exacerbated risks introduced by generative AI. This is set to be released in the near term to help businesses apply the AI RMF to generative AI use cases.

International

UK AI Safety Summit Bletchley Declaration, held on November 1st and 2nd, brought together private and public sector thought-leaders and policymakers. The result was a joint commitment from 28 governments and leading AI companies to subject advanced AI models to undergo safety tests before released to the public. A new UK AI Safety Institute was also announced, analogous to the one introduced in the US in the executive order.

G7 leaders (Canada, France, Germany, Italy, Japan, United Kingdom, and United States) and the European Union signed on to an AI Code of Conduct, known as the “Hiroshima AI Process”. This 11-principle document "aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems” and will serve to unite global approaches on AI governance.

The EU AI Act was first introduced into the European Commission in 2021, but is this month’s news because it is getting closer and closer to finalization. The Act is now in its final stages of trilogue negotiations (sparking some debate about the recently proposed approach to foundation models) between Member States, Parliament, and the Commission, marking the last stage of the legislative process for this first-of-its-kind comprehensive AI law (expected to come into force at the end of 2023 or early 2024). The EU has been a fast-mover with establishing prescriptive rules around AI from a risk-based approach.

How We Can Help

The rapidly evolving landscape of AI regulation can be difficult to follow. Many of the frameworks and guidelines include vague language, which makes it difficult to ensure compliance.

At Robust Intelligence, we secure enterprises from AI risk. Our platform protects models in real time and surfaces risk in models and data throughout the AI lifecycle via automated red teaming. Our testing framework maps to policies to help customers streamline regulatory compliance.

Please reach out if you’d like to learn more about the AI policy landscape or about our product, and stay tuned for next month’s update!

November 14, 2023
-
4
minute read

AI Governance Policy Roundup (November 2023)

Regulation Spotlight

As the AI policy landscape continues its rapid evolution, many are having a difficult time navigating the complex amalgamation of frameworks, regulations, executive orders, and legislation. We’re installing a new monthly AI Governance Policy Roundup series to help you cut through the noise with a need-to-know snapshot of recent domestic and international updates.

We hope this will be a helpful resource as you develop your AI governance strategy. As an end-to-end AI risk management platform, Robust Intelligence can help you automate and operationalize the relevant policies to ensure compliance. Read on to learn more and contact us if you’d like to dive deeper into any specific AI governance policy.

November 2023 Roundup

As the use and development of generative AI models and applications has proliferated over the past year, national governments have been moving quickly to react with guidelines for safe, secure, and trustworthy use and development of this technology. Below is a curated list of notable updates from government agencies and organizations from the last month.

Domestic

The Biden-Harris Administration signed a new executive order on October 30th advancing a federal government-wide approach to governing AI risk. This will substantially change the way AI is used across all sectors of the government and will have a ripple effect on private sector. Below we highlight some of the key elements of this new executive order.

  • Requiring developers of large AI systems to share safety test results (results from red-teaming) for high-risk applications (enforced under the Defense Production Act)
  • Mandates for agencies across the federal government to create standards for testing and reporting on AI models, including a mandate for NIST to develop AI red team guidelines as part of a new United States AI Safety Institute

However, the new executive order on AI and NIST’s voluntary risk management framework cannot standalone. Forthcoming legislation proposed in Congress will ultimately be required in order to ensure the safe, secure, and trustworthy use and development of AI (note: executive orders are subject to be overturned as presidential administrations change). We are seeing this activity happening in Congress already:

NIST’s Public Working Group on Generative AI is diligently moving forward in creating a companion of the AI Risk Management Framework (that was initially released at the beginning of this year) to account for new and exacerbated risks introduced by generative AI. This is set to be released in the near term to help businesses apply the AI RMF to generative AI use cases.

International

UK AI Safety Summit Bletchley Declaration, held on November 1st and 2nd, brought together private and public sector thought-leaders and policymakers. The result was a joint commitment from 28 governments and leading AI companies to subject advanced AI models to undergo safety tests before released to the public. A new UK AI Safety Institute was also announced, analogous to the one introduced in the US in the executive order.

G7 leaders (Canada, France, Germany, Italy, Japan, United Kingdom, and United States) and the European Union signed on to an AI Code of Conduct, known as the “Hiroshima AI Process”. This 11-principle document "aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems” and will serve to unite global approaches on AI governance.

The EU AI Act was first introduced into the European Commission in 2021, but is this month’s news because it is getting closer and closer to finalization. The Act is now in its final stages of trilogue negotiations (sparking some debate about the recently proposed approach to foundation models) between Member States, Parliament, and the Commission, marking the last stage of the legislative process for this first-of-its-kind comprehensive AI law (expected to come into force at the end of 2023 or early 2024). The EU has been a fast-mover with establishing prescriptive rules around AI from a risk-based approach.

How We Can Help

The rapidly evolving landscape of AI regulation can be difficult to follow. Many of the frameworks and guidelines include vague language, which makes it difficult to ensure compliance.

At Robust Intelligence, we secure enterprises from AI risk. Our platform protects models in real time and surfaces risk in models and data throughout the AI lifecycle via automated red teaming. Our testing framework maps to policies to help customers streamline regulatory compliance.

Please reach out if you’d like to learn more about the AI policy landscape or about our product, and stay tuned for next month’s update!

Blog

Related articles

February 3, 2022
-
3
minute read

Introducing our Incredible ML Team!

For:
December 8, 2021
-
3
minute read

Announcing Robust Intelligence's $30M Series B

For:
March 28, 2024
-
4
minute read

AI Governance Policy Roundup (March 2024)

For:
October 30, 2023
-
5
minute read

The White House Executive Order on AI: Assessing AI Risk with Automated Testing

For:
July 24, 2023
-
6
minute read

Leading AI Companies Commit to AI Risk Management: What the White House Agreement Means for Enterprises

For:
January 26, 2023
-
5
minute read

A Guide to the NIST AI Risk Management Framework

For:
Compliance Teams