December 21, 2023 · 4 minute read

AI Governance Policy Roundup (December 2023)

Regulation Spotlight

As the AI policy landscape continues its rapid evolution, many are having a difficult time navigating the complex amalgamation of frameworks, regulations, executive orders, and legislation. We have launched a monthly AI Governance Policy Roundup series to help you cut through the noise with a need-to-know snapshot of recent domestic and international updates.

We hope this will be a helpful resource as you develop your AI governance strategy. As an end-to-end AI risk management platform, Robust Intelligence can help you automate and operationalize the relevant policies to ensure compliance. Read on to learn more and contact us if you’d like to dive deeper into any specific AI governance policy.

December 2023 Roundup

As the use and development of generative AI models and applications has proliferated over the past year, national governments have been moving quickly to react with guidelines for safe, secure, and trustworthy use and development of this technology. Below is a curated list of notable updates from government agencies and organizations from the last month.

Domestic

Another bipartisan bill was introduced in Congress (the third since the Executive Order on AI was released): the “Artificial Intelligence (AI) Research, Innovation, and Accountability Act of 2023.” This bill establishes a framework to support transparency, accountability, and security for the highest-impact applications of AI. In particular, it would require risk management assessment and reporting consistent with the structure of NIST’s AI Risk Management Framework. These reports would provide a comprehensive, detailed outline of how organizations understand, manage, and mitigate risk. Deployers of “high-impact” AI systems would be required to submit transparency reports to the Commerce Department.

Progress on AI regulation continued at the state level:

  • California released draft regulation on automated decision-making technology (setting the stage for many more states), heavily focused on consumer protections related to the profiling of consumers.
  • Colorado’s AI regulation for life insurance companies recently came into effect, requiring that AI risk management frameworks be put in place and that insurers report to verify that AI models used in life insurance practices do not result in unfairly discriminatory predictive modeling with respect to race.

NIST’s Public Working Group on Generative AI is nearing the finish line in creating a companion resource for the AI Risk Management Framework (initially released at the beginning of this year) to account for new and exacerbated risks introduced by generative AI. This resource is set to be released in the near term to help businesses apply the AI RMF to generative AI use cases.

International

The EU Parliament, EU Council, and EU Commission reached a provisional agreement on the EU AI Act. This landmark legislation is in its final stages of preparation; only some final technical details and consolidation of the text remain. The AI Act will have implications for businesses developing and deploying AI: transparency requirements, investment in risk management tools, mandatory documentation and record-keeping, and a continued need for human oversight and intervention.

France, Germany, and Italy reached their own agreement on AI regulation in a joint paper, which seems to conflict with a core principle of the EU AI Act: “Together we underline that the AI Act regulates the application of AI and not the technology as such. The inherent risks lie in the application of AI systems rather than in the technology itself.” In the paper, they support “mandatory self-regulation through codes of conduct.”

Other News

CISA and the UK NCSC released new joint guidelines on “Secure AI System Development,” which help align US and UK AI security standards.

ISO just released a new standard on AI system life cycle processes (seemingly aligned with many of the requirements of the EU AI Act), covering performance measurement, assessments of AI’s impact on society and individuals, and mandating conformity to requirements along with systematic audits to assess AI systems.

How We Can Help

The rapidly evolving landscape of AI regulation can be difficult to follow. Many of the frameworks and guidelines include vague language, which makes it difficult to ensure compliance.

At Robust Intelligence, we secure enterprises from AI risk. Our platform protects models in real time and surfaces risk in models and data throughout the AI lifecycle via automated red teaming. Our testing framework maps to policies to help customers streamline and operationalize regulatory compliance.

Please reach out if you’d like to learn more about the AI policy landscape or about our product, and stay tuned for next month’s update!
