September 6, 2024 - 4 minute read

AI Governance Policy Roundup (August 2024)

Regulation Spotlight

As the AI policy landscape continues its rapid evolution, many organizations are struggling to navigate the complex amalgamation of frameworks, regulations, executive orders, and legislation. We have launched a monthly AI Governance Policy Roundup series to help you cut through the noise with a need-to-know snapshot of recent domestic and international updates.

We hope this will be a helpful resource as you develop your AI governance strategy. As an end-to-end AI risk management platform, Robust Intelligence can help you automate and operationalize the relevant policies to ensure compliance. Read on to learn more and contact us if you’d like to dive deeper into any specific AI governance policy.

August 2024 Roundup

As the use and development of generative AI models and applications have proliferated over the past year, national governments have been moving quickly to respond with guidelines for the safe, secure, and trustworthy use and development of this technology. Below is a curated list of notable updates from government agencies and organizations over the last month.

Domestic

California’s new AI bill (SB 1047, which has garnered significant public attention) has passed the state Legislature and now awaits the Governor’s signature. The bill proposes mandatory safety testing for the most advanced AI models and various mechanisms for government intervention in cases of non-compliance.

NIST has issued a Request for Comments on the U.S. Artificial Intelligence Safety Institute's draft document, Managing Misuse Risk for Dual-Use Foundation Models (comments due September 9th). This responds to a mandate in the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI).

Colorado is now the first US state with a comprehensive AI law (going into effect February 1, 2026). The law focuses on mitigating algorithmic discrimination, enhancing model transparency mechanisms, and requiring robust risk management protocols.

CISA has appointed a new Chief AI Officer, Lisa Einstein. This sends a strong signal about how seriously the country’s “top cybersecurity officials view both artificial intelligence tools' opportunities and threats.” To fulfill a key requirement of Biden’s AI executive order, several federal agencies have been appointing chief AI officers; CISA, one of the few agencies not subject to this particular requirement, created the senior-level position regardless.

The White House has taken a clear stance in favor of open-source artificial intelligence via the “Dual-Use Foundation Models with Widely Available Model Weights” report, put forth by the National Telecommunications and Information Administration (NTIA). “NTIA’s report recognizes the importance of open AI systems and calls for more active monitoring of risks from the wide availability of model weights for the largest AI models. Government has a key role to play in supporting AI development while building capacity to understand and address new risks.”

Several leading AI companies are facing copyright litigation as they approach the ‘data frontier.’ For example, Anthropic is facing legal action from a group of authors claiming the company "never sought — let alone paid for — a license to copy and exploit the protected expression contained in the copyrighted works fed into its models.” This is in addition to the prominent New York Times lawsuit against OpenAI and Microsoft, which claims the companies are profiting from copyrighted material.

International

The EU AI Act officially entered into force on August 1st, meaning Europe is now enforcing the world's first comprehensive AI law. The law regulates AI development, deployment, and use, imposing stricter rules on high-risk AI systems and banning "unacceptable" AI applications, with penalties for non-compliance.

The Australian Government released a new policy for the responsible use of AI in government. The policy aims to ensure that government plays a “leadership role in embracing AI for the benefit of Australians while ensuring its safe, ethical and responsible use, in line with community expectations.” It is mandatory for non-corporate Commonwealth entities and takes effect on September 1st.

Argentina's data protection agency filed a complaint against Meta AI regarding the use of users' private and personal information to train its AI systems. Argentina joins a host of other countries and regions – Brazil, Nigeria, and the EU – in questioning Meta’s data practices.

How We Can Help

The rapidly evolving landscape of AI regulation can be difficult to follow. Many of the frameworks and guidelines include vague language, which makes compliance hard to verify.

At Robust Intelligence, we secure enterprises from AI risk. Our platform protects models in real time and surfaces risk in models and data throughout the AI lifecycle via automated red teaming. Our testing framework maps to policies to help customers streamline and operationalize regulatory compliance.

Please reach out if you’d like to learn more about the AI policy landscape or about our product, and stay tuned for next month’s update!
