February 29, 2024 - 4 minute read

AI Governance Policy Roundup (February 2024)

Regulation Spotlight

As the AI policy landscape continues its rapid evolution, many organizations are struggling to navigate the complex patchwork of frameworks, regulations, executive orders, and legislation. We have launched a monthly AI Governance Policy Roundup series to help you cut through the noise with a need-to-know snapshot of recent domestic and international updates.

We hope this will be a helpful resource as you develop your AI governance strategy. As an end-to-end AI risk management platform, Robust Intelligence can help you automate and operationalize the relevant policies to ensure compliance. Read on to learn more and contact us if you’d like to dive deeper into any specific AI governance policy.

February 2024 Roundup

As the use and development of generative AI models and applications have proliferated over the past year, national governments have moved quickly to issue guidelines for the safe, secure, and trustworthy use and development of this technology. Below is a curated list of notable updates from government agencies and organizations over the last month.

Domestic

In the three months following President Biden’s announcement of the AI Executive Order, substantial progress has already been made. Major accomplishments so far include (1) using the Defense Production Act to compel developers of powerful AI systems to report AI safety test results to the Department of Commerce and (2) completing risk assessments covering AI’s use in every critical infrastructure sector.

The Department of Justice is taking a harsher stance on crimes involving the misuse of AI. “Going forward, where Department of Justice prosecutors can seek stiffer sentences for offenses made significantly more dangerous by the misuse of AI, they will,” said Deputy Attorney General Lisa Monaco. This stance is further fueling the push for US AI regulation.

The Department of Commerce officially launched the U.S. AI Safety Institute Consortium (AISIC). Robust Intelligence is pleased to announce our participation as an inaugural member working to support the safe and trustworthy development of AI systems. We are excited to work with the National Institute of Standards and Technology (NIST) to advance this mission and meet mandates from President Biden’s executive order on AI, namely: “[establishing] guidelines and processes to enable developers of generative AI to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.”

California State Senator Scott Wiener proposed a new bill (SB 1047) this month: the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act. It has drawn positive reactions from the AI community; Geoffrey Hinton called SB 1047 “a very sensible approach” to balancing regulation and innovation. For companies, the key implication is the bill’s aim to establish a safety standard for developers of the largest and most powerful AI systems.

The U.S. House of Representatives has formed a bipartisan AI Task Force, in part a reaction to stalled efforts in Congress to pass legislation despite a spike in legislative proposals over the past year. The task force is charged with considering appropriate guardrails “to safeguard the nation against current and emerging threats.”

International

The EU AI Act saw several developments over the past month:

  • February 2: the AI Act was unanimously approved by the Council of EU Ministers (representing the EU’s 27 member state governments).
  • February 13: the Internal Market and Civil Liberties Committees voted to approve the result of negotiations with member states on the AI Act. The next steps are formal adoption in a Parliament plenary session and a final Council endorsement.
  • February 21: the European AI Office was established within the Commission. The new AI Office will be responsible for overseeing the implementation of the rules in addition to other forthcoming responsibilities.

The UK’s National Cyber Security Centre (NCSC) published an assessment of the “near-term impact of AI on the cyber threat.” The NCSC concludes that there will “almost certainly” be an increase in the number and impact of cyber attacks. However, the assessment notes that the near-term AI security risk landscape is unlikely to shift dramatically, because most new threats build on existing attack techniques.

Japan’s ruling Liberal Democratic Party is pushing for AI legislation in 2024 by drafting preliminary rules (including penal provisions) for foundation model developers. The push has been fueled by growing concerns about disinformation and rights infringements associated with the use of AI.

How We Can Help

The rapidly evolving landscape of AI regulation can be difficult to follow. Many of the frameworks and guidelines use vague language, which makes compliance difficult to ensure.

At Robust Intelligence, we secure enterprises from AI risk. Our platform protects models in real time and surfaces risk in models and data throughout the AI lifecycle via automated red teaming. Our testing framework maps to policies to help customers streamline and operationalize regulatory compliance.

Please reach out if you’d like to learn more about the AI policy landscape or about our product, and stay tuned for next month’s update!

