May 31, 2024 - 4 minute read

AI Governance Policy Roundup (May 2024)

Regulation Spotlight

As the AI policy landscape continues its rapid evolution, many are having a difficult time navigating the complex mix of frameworks, regulations, executive orders, and legislation. We have launched a monthly AI Governance Policy Roundup series to help you cut through the noise with a need-to-know snapshot of recent domestic and international updates.

We hope this will be a helpful resource as you develop your AI governance strategy. As an end-to-end AI risk management platform, Robust Intelligence can help you automate and operationalize the relevant policies to ensure compliance. Read on to learn more and contact us if you’d like to dive deeper into any specific AI governance policy.

May 2024 Roundup

As the use and development of generative AI models and applications have proliferated over the past year, national governments have been moving quickly to respond with guidelines for safe, secure, and trustworthy use and development of this technology. Below is a curated list of notable updates from government agencies and organizations over the last month.

Domestic

NIST's AI Safety Institute released its “strategic vision,” highlighting its focus on three key goals: (1) Advance the science of AI safety, (2) Articulate, demonstrate, and disseminate the practices of AI safety, and (3) Support institutions, communities, and coordination around AI safety.

California's Senate passed SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), which “aims to ensure the safe development of large-scale artificial intelligence systems by establishing clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.” Separately, the U.S. Senate Rules Committee advanced three bills focused on safeguarding elections from AI; they would require labeling of AI-generated election ads and prohibit deepfakes of federal candidates.

Colorado became the first state to pass a comprehensive “AI Act” (SB 24-205). The law requires developers and deployers of “high-risk” AI systems to take additional precautions to protect consumers from algorithmic discrimination and other harms. The new law, which will become part of Colorado’s Consumer Protection Act, draws inspiration from the structure and approach of the recently passed EU AI Act.

A bipartisan group of senators unveiled an AI Roadmap recommending $32 billion in spending on AI. The plan calls for new testing standards and transparency requirements, as well as funding for the National AI Research Resource (NAIRR). It is short on specifics, however, particularly regarding the guardrails needed to prevent risk and mitigate harm.

International

The EU Commission unveiled its new AI Office, which “aims at enabling the future development, deployment and use of AI in a way that fosters societal and economic benefits and innovation, while mitigating risks.” The office will not only oversee the implementation of the EU AI Act, but will also house units for AI Safety, Excellence in AI and Robotics, AI for Societal Good, and AI Innovation and Policy Coordination.

Italy has launched a new policy initiative for trustworthy AI and drafted a new AI law, aiming to create “(1) general principles for the development and use of AI systems and models; (2) sectorial provisions, particularly in the healthcare sector and for scientific research for healthcare; (3) rules on the national strategy on AI and governance, including designating the national competent authorities in accordance with the EU AI Act; and (4) amendments to copyright law.”

The UK and Canada signed an agreement to work closely together on AI safety. As part of the agreement, the two countries aim to share expertise to enhance evaluation and testing work, “inspire collaborative work on systemic safety research,” and grow the network of AI safety institutes established following the first AI Safety Summit at Bletchley Park.

A second AI Safety Summit was hosted in Seoul, South Korea this month, securing safety commitments from sixteen companies at the forefront of AI development: “They committed to publishing safety frameworks for measuring risks, to avoid models where risks could not be sufficiently mitigated, and to ensure governance and transparency.” While these voluntary commitments were welcomed, there was a simultaneous push for regulation to accompany them.

How We Can Help

The rapidly evolving landscape of AI regulation can be difficult to follow. Many of the frameworks and guidelines include vague language, which makes it difficult to ensure compliance.

At Robust Intelligence, we protect AI applications against security and safety vulnerabilities. Our platform evaluates models for vulnerabilities throughout the AI lifecycle and protects applications in real time. Our testing framework maps to policies to help customers streamline and operationalize regulatory compliance, as sketched below.
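To make the idea of mapping tests to policies concrete, here is a minimal, hypothetical sketch in Python. The `TestResult` structure, `compliance_report` function, and control identifiers (e.g., "NIST-AI-RMF:MEASURE-2.7") are illustrative assumptions for this post, not our product's actual API or an official control catalog.

```python
from dataclasses import dataclass, field

@dataclass
class TestResult:
    name: str                                          # e.g., "prompt_injection_scan"
    passed: bool                                       # outcome of the automated test
    controls: list[str] = field(default_factory=list)  # policy controls this test evidences

def compliance_report(results: list[TestResult]) -> dict[str, bool]:
    """Roll test outcomes up into a per-control status.

    A control is considered satisfied only if every test mapped to it passed.
    """
    status: dict[str, bool] = {}
    for result in results:
        for control in result.controls:
            status[control] = status.get(control, True) and result.passed
    return status

if __name__ == "__main__":
    # Hypothetical test run: one passing scan, one failing check.
    results = [
        TestResult("prompt_injection_scan", True, ["NIST-AI-RMF:MEASURE-2.7"]),
        TestResult("pii_leakage_check", False, ["NIST-AI-RMF:MEASURE-2.7", "EU-AI-Act:Art-15"]),
    ]
    for control, ok in sorted(compliance_report(results).items()):
        print(f"{control}: {'satisfied' if ok else 'needs attention'}")
```

The rollup here is deliberately conservative: a single failing test flags every control it maps to, which keeps the report honest when one test evidences multiple policies.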

Please reach out if you’d like to learn more about the AI policy landscape or about our product, and stay tuned for next month’s update!

