July 31, 2024 - 4 minute read

AI Governance Policy Roundup (July 2024)

Regulation Spotlight

As the AI policy landscape continues its rapid evolution, many organizations are struggling to navigate the complex amalgamation of frameworks, regulations, executive orders, and legislation. We have launched a monthly AI Governance Policy Roundup series to help you cut through the noise with a need-to-know snapshot of recent domestic and international updates.

We hope this will be a helpful resource as you develop your AI governance strategy. As an end-to-end AI risk management platform, Robust Intelligence can help you automate and operationalize the relevant policies to ensure compliance. Read on to learn more and contact us if you’d like to dive deeper into any specific AI governance policy.

July 2024 Roundup

As the use and development of generative AI models and applications have proliferated over the past year, national governments have moved quickly to respond with guidelines for the safe, secure, and trustworthy use and development of this technology. Below is a curated list of notable updates from government agencies and organizations over the last month.

Domestic

At the 270-day mark since the release of President Biden’s AI Executive Order, NIST released new guidance and tools from the US AI Safety Institute. These include a Generative AI Profile for the AI Risk Management Framework and guidance on Managing Misuse Risk for Dual-Use Foundation Models.

A newly introduced bill, the Validation and Evaluation for Trustworthy AI (VET AI) Act, directs NIST to work with federal agencies, academia, and civil society to create voluntary guidelines for third-party audits of AI systems.

Controversy continues to surround California’s AI Safety and Innovation Bill (SB 1047). Anthropic weighed in, proposing to “refocus the bill on frontier AI safety,” while Senator Wiener continues to defend the bill to tech founders and investors, calling it “a light touch bill.”

The Cybersecurity & Infrastructure Security Agency (CISA) hosted a tabletop exercise with over 50 AI experts from government and the private sector. The effort aimed to (1) further U.S. preparedness around AI incident reporting and (2) support the development of an AI Security Incident Collaboration Playbook. Coordinated by the Joint Cyber Defense Collaborative (JCDC.AI), a public-private partnership cohort within CISA, the event “[simulated] a cybersecurity incident involving an AI-enabled system,” and “participants worked through operational collaboration and information sharing protocols for incident response across the represented organizations.”

International

The finalized legal text of the EU AI Act was published in the Official Journal of the European Union earlier this month. The Act comes into effect on August 1, followed by a cascading timeline of enforcement for its requirements. Notably, prohibited AI systems will be banned six months after the Act officially comes into effect.

Leading AI developers are delaying or forgoing the release of new AI features in the EU due to regulatory concerns. For example, Meta is not releasing its multimodal Llama AI model in the EU, citing the “unpredictable nature of European regulatory environment,” and Apple is delaying AI features (namely Apple Intelligence) in the EU because of the Digital Markets Act.

In addition to these EU suspensions, Meta has suspended all of its generative AI tools in Brazil after the country’s National Data Protection Authority objected to its new privacy policy regarding personal data and AI.

How We Can Help

The rapidly evolving landscape of AI regulation can be difficult to follow. Many of the frameworks and guidelines include vague language, which makes it difficult to ensure compliance.

At Robust Intelligence, we secure enterprises from AI risk. Our platform protects models in real time and surfaces risk in models and data throughout the AI lifecycle via automated red teaming. Our testing framework maps to policies to help customers streamline and operationalize regulatory compliance.

Please reach out if you’d like to learn more about the AI policy landscape or about our product, and stay tuned for next month’s update!
