January 30, 2024 - 4 minute read

AI Governance Policy Roundup (January 2024)

Regulation Spotlight

As the AI policy landscape continues its rapid evolution, many organizations are struggling to navigate the growing patchwork of frameworks, regulations, executive orders, and legislation. We have launched a monthly AI Governance Policy Roundup series to help you cut through the noise with a need-to-know snapshot of recent domestic and international updates.

We hope this will be a helpful resource as you develop your AI governance strategy. As an end-to-end AI risk management platform, Robust Intelligence can help you automate and operationalize the relevant policies to ensure compliance. Read on to learn more and contact us if you’d like to dive deeper into any specific AI governance policy.

January 2024 Roundup

As the use and development of generative AI models and applications have proliferated over the past year, national governments have moved quickly to respond with guidelines for the safe, secure, and trustworthy use and development of this technology. Below is a curated list of notable updates from government agencies and organizations over the last month.

Domestic

NIST is set to launch the US AI Safety Institute in the near future (Robust Intelligence is excited to be part of the institute’s consortium). Among other mandates, the institute aims to “[establish] guidelines and processes to enable developers of generative AI to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.”

NIST released an updated version of its report Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, co-authored by NIST, Robust Intelligence, and Northeastern University. The taxonomy gives the AI security community a common foundation for describing the full AI threat landscape and incorporates novel generative AI risks.
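
As a concrete illustration, a security team could use the taxonomy’s high-level attack classes to tag findings consistently across a model inventory. The sketch below is hypothetical: the class names paraphrase the report’s top-level categories, and the Finding structure and example incident are our own, not from the report.

```python
from dataclasses import dataclass
from enum import Enum


class AttackClass(Enum):
    """High-level attack classes, paraphrased from the NIST taxonomy."""
    EVASION = "evasion"      # adversarial inputs at inference time
    POISONING = "poisoning"  # corrupted training data or pipelines
    PRIVACY = "privacy"      # extraction of training data or model details
    ABUSE = "abuse"          # misuse of generative systems, e.g. prompt injection


@dataclass
class Finding:
    """A single security finding tagged against the taxonomy (our own structure)."""
    model_name: str
    attack_class: AttackClass
    description: str


# Hypothetical example: tagging a prompt-injection finding on a chatbot.
finding = Finding(
    model_name="support-chatbot-v2",
    attack_class=AttackClass.ABUSE,
    description="indirect prompt injection via retrieved documents",
)
print(f"[{finding.attack_class.value}] {finding.model_name}: {finding.description}")
```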

Congress continues to see a surge of new AI governance activity. Representatives Beyer and Eshoo recently introduced the “AI Foundation Model Transparency Act of 2023” to improve transparency around model training and capabilities, specifically by setting standards for what developers of high-impact foundation models should be required to report to the FTC. This joins several other bills introduced in Congress over the last few months that support the White House’s Executive Order on AI.

After releasing its AI Adoption Strategy last November, the DOD is focusing on increasing AI capacity. Michael C. Horowitz, deputy assistant secretary of defense for force development and emerging capabilities, cited “the creation of the Chief Digital and Artificial Intelligence Office, responsible for the department-wide adoption of data,” as well as recent strategy updates aimed at aligning AI adoption with broader defense strategy.

International

The ‘final’ text of the EU AI Act has been leaked, and the Act is progressing toward a full vote on February 2. Its implications for businesses developing and deploying AI include transparency requirements, investment in risk management and cybersecurity tooling, mandatory documentation and record-keeping, and a continued need for human oversight and intervention.

  • Register for our webinar on February 1 to learn more about the recently finalized EU AI Act from Dan Nechita, Head of Cabinet for MEP Dragos Tudorache at the European Parliament.

The European Commission has established an AI Office to monitor the implementation of the EU AI Act and oversee the compliance of high-risk AI systems with the Act. The office will serve as the key coordination point between EU agencies and Commission departments on AI policy.

Singapore’s IMDA and the AI Verify Foundation released a Model AI Governance Framework for Generative AI. The paper addresses nine governance considerations: 1) Accountability, 2) Data, 3) Trusted Development and Deployment, 4) Incident Reporting, 5) Testing and Assurance, 6) Security, 7) Content Provenance, 8) Safety and Alignment Research and Development, and 9) AI for Public Good. The framework gives companies a shared reference point for governing generative AI effectively.
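
As an illustration only (the framework is a policy document, not a technical specification), a governance team could track its coverage of the nine dimensions with a simple checklist. The dimension names below come from the paper; the scoring logic is a hypothetical sketch of ours, not part of the framework.

```python
# The nine governance dimensions named in the framework.
DIMENSIONS = [
    "Accountability",
    "Data",
    "Trusted Development and Deployment",
    "Incident Reporting",
    "Testing and Assurance",
    "Security",
    "Content Provenance",
    "Safety and Alignment Research and Development",
    "AI for Public Good",
]


def coverage(assessment: dict[str, bool]) -> float:
    """Fraction of the nine dimensions a governance program addresses."""
    return sum(assessment.get(d, False) for d in DIMENSIONS) / len(DIMENSIONS)


# Hypothetical example: a program that has addressed six of the nine dimensions.
assessment = {d: True for d in DIMENSIONS[:6]}
print(f"Coverage: {coverage(assessment):.0%}")  # Coverage: 67%
```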

Generative AI dominated discussions at the World Economic Forum Annual Meeting in Davos, where safety and accuracy were described as “the biggest topic.” Companies adopting AI are moving from “talk to action,” a shift that “means factoring in new considerations around cost, scalability, efficiency and latency, not to mention safety and privacy.”

Other News

The UN AI Advisory Body released its first interim report, “Governing AI for Humanity.” The report emphasizes the importance of a global approach to AI governance that extends beyond region-specific schemes like the EU AI Act and the US AI Executive Order, and it proposes a new framework for uniting global perspectives toward that goal.

How We Can Help

The rapidly evolving landscape of AI regulation can be difficult to follow. Many of the frameworks and guidelines include vague language, which makes it difficult to ensure compliance.

At Robust Intelligence, we secure enterprises from AI risk. Our platform protects models in real time and surfaces risk in models and data throughout the AI lifecycle via automated red teaming. Our testing framework maps to policies to help customers streamline and operationalize regulatory compliance.
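
To make “maps to policies” concrete, here is a minimal, hypothetical sketch of pairing automated test results with policy controls. The test names, control identifiers, and mapping are illustrative assumptions, not our product’s actual API or the regulations’ actual control IDs.

```python
# Hypothetical mapping from automated test names to policy controls.
POLICY_MAP = {
    "prompt_injection_resistance": ["EU AI Act: robustness", "NIST AI RMF: Measure"],
    "pii_leakage": ["EU AI Act: data governance", "NIST AI RMF: Measure"],
    "toxicity": ["EU AI Act: transparency", "NIST AI RMF: Manage"],
}

# Illustrative results from an automated red-teaming run.
test_results = {
    "prompt_injection_resistance": False,
    "pii_leakage": True,
    "toxicity": True,
}

# Surface every policy control affected by a failing test.
for test, passed in test_results.items():
    if not passed:
        for control in POLICY_MAP.get(test, []):
            print(f"FAIL {test} -> review control: {control}")
```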

Please reach out if you’d like to learn more about the AI policy landscape or about our product, and stay tuned for next month’s update!
