March 28, 2024 - 4 minute read

AI Governance Policy Roundup (March 2024)

Regulation Spotlight

As the AI policy landscape continues its rapid evolution, many organizations struggle to navigate the complex patchwork of frameworks, regulations, executive orders, and legislation. We have launched a monthly AI Governance Policy Roundup series to help you cut through the noise with a need-to-know snapshot of recent domestic and international updates.

We hope this will be a helpful resource as you develop your AI governance strategy. As an end-to-end AI security and safety platform, Robust Intelligence can help you automate and operationalize the relevant policies to ensure compliance. Read on to learn more and contact us if you’d like to dive deeper into any specific AI governance policy.

March 2024 Roundup

As the use and development of generative AI models and applications have proliferated over the past year, national governments have moved quickly to issue guidelines for the safe, secure, and trustworthy use and development of this technology. Below is a curated list of notable updates from government agencies and organizations over the last month.

Domestic

The U.S. Department of Homeland Security is becoming the first federal agency to incorporate generative AI models across a wide range of its divisions, piloting AI programs to help combat drug and human trafficking, improve emergency management preparation, and train immigration officials. The Department notes that its “latest efforts follow President Biden’s Executive Order (EO) ‘Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’.”

The U.S. Department of the Treasury released a report on managing AI-specific cybersecurity risks in the financial services sector. The report fulfills one of the mandates in President Biden’s executive order on AI from last fall. In it, the Treasury identifies “significant opportunities and challenges that AI presents to the security and resiliency of the financial services sector.”

Utah’s AI Policy Act (SB0149) was signed into law on March 13 and takes effect on May 1, 2024. This new legislation is part of Utah’s consumer protection laws and introduces disclosure obligations for the use of AI systems in both the private and public sectors. In addition, it establishes the Office of AI Policy and the AI Learning Laboratory Program, with the potential to establish cybersecurity auditing procedures for higher-risk AI applications.

International

This month, the European Parliament approved the EU AI Act, marking the final stage of voting before publication of the finalized text. The Act will become law once member states sign off, which is typically a formality. Attention and resources will now shift toward implementation, which is likely to proceed in stages from 2025 onward. Below is a brief summary of the implementation timeline once the text enters into force (a short sketch computing these deadlines follows the list):

  • 6 months: bans on AI applications with “unacceptable risk” go into effect
  • 9 months: regulators establish “code of practice” for AI models
  • 12 months: law applies to “general purpose AI models” (non-high-risk)
  • 36 months: obligations on “high risk” AI models go into effect
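
To make the staggered rollout concrete, here is a minimal sketch that computes each compliance deadline from an entry-into-force date. The date below is a placeholder assumption for illustration only; the actual clock starts when the finalized text is published and enters into force.

```python
from datetime import date

# Placeholder entry-into-force date (an assumption for illustration);
# the real clock starts when the finalized text enters into force.
ENTRY_INTO_FORCE = date(2024, 7, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months."""
    month_index = d.month - 1 + months
    return d.replace(year=d.year + month_index // 12,
                     month=month_index % 12 + 1)

# Milestones from the timeline above: (months after entry into force, event).
MILESTONES = [
    (6, "bans on 'unacceptable risk' AI applications take effect"),
    (9, "regulators establish codes of practice for AI models"),
    (12, "rules for general-purpose (non-high-risk) AI models apply"),
    (36, "obligations on high-risk AI models take effect"),
]

for months, event in MILESTONES:
    print(f"{add_months(ENTRY_INTO_FORCE, months)}: {event}")
```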

There is a push in Africa to begin regulating AI as the use of AI systems expands across the continent. The African Union, comprising 55 member nations, is preparing an AI policy to guide both the development and regulation of the technology. However, there has been early debate over whether regulation is warranted and what impact it might have on innovation. Seven African nations have already developed national AI policies, and toward the end of February the African Union Development Agency published a draft policy to serve as a blueprint for AI regulation across African nations.

The UN General Assembly unanimously adopted a US-led resolution on AI technologies. The resolution lays out a comprehensive vision for “safe, secure, and trustworthy AI” and builds on the voluntary commitments the White House secured from leading AI companies last fall. This marks a critical step toward establishing international agreement on guardrails for the ethical and sustainable development of AI. At its core, the resolution encourages protecting personal data, monitoring AI for risks, and safeguarding human rights.

India’s IT Ministry has issued a new advisory asking tech firms to seek government approval before releasing AI models that are “unreliable” or still in the trial phase. Per the advisory, “availability to the users on Indian Internet must be done so with explicit permission of the Government of India.”

How We Can Help

The rapidly evolving landscape of AI regulation can be difficult to follow, and many of the frameworks and guidelines use vague language that makes compliance difficult to verify.

At Robust Intelligence, we secure AI applications against security and safety vulnerabilities. Our platform evaluates models for vulnerabilities throughout the AI lifecycle and protects applications in real time. Our testing framework maps to specific policies, helping customers streamline and operationalize regulatory compliance.
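
As a purely hypothetical illustration of what such a mapping can look like, the sketch below pairs common AI test categories with the policy themes discussed in this roundup. The category names, themes, and the mapping itself are invented for illustration and do not represent our product’s actual schema.

```python
# Hypothetical mapping of AI test categories to policy themes from
# this roundup. Illustrative only; not an actual product schema.
POLICY_MAPPING = {
    "prompt_injection": [
        "EU AI Act: obligations on high-risk AI models",
        "U.S. Treasury report: AI-specific cybersecurity risks",
    ],
    "pii_leakage": [
        "UN resolution: protecting personal data",
        "Utah SB0149: disclosure obligations for AI systems",
    ],
    "harmful_output": [
        "White House voluntary commitments: safety testing",
        "India IT Ministry advisory: reliability of released models",
    ],
}

def policies_for(test_category: str) -> list[str]:
    """Return the policy themes implicated when a test category fails."""
    return POLICY_MAPPING.get(test_category, [])

# Example: a failing prompt-injection test surfaces the policies to review.
print(policies_for("prompt_injection"))
```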

Please reach out if you’d like to learn more about the AI policy landscape or about our product, and stay tuned for next month’s update!
