January 16, 2024
-
5
minute read

AI Security Insights from Hackers on the Hill

Last week, I had the incredible opportunity to represent Robust Intelligence at the Hackers on the Hill conference in Washington D.C. This two-day event, organized annually by the “I Am The Cavalry” organization since 2017, provides a unique forum for cybersecurity experts to engage with Congressional staffers on Capitol Hill. Conversations are candid and free from commercial agendas so as to build long-term relationships between the technical community and public policymakers.

In short, Hackers on the Hill 2024 was a tremendous success. Both hackers and staffers were involved in deep discussions about some of the most pressing topics in cybersecurity including critical infrastructure, privacy, supply chain security, and unsurprisingly, artificial intelligence.

Hackers not only attended panel discussions, networking sessions, and plenary sessions, but were also involved in a hands-on event and a small-group briefing for congressional staffers on the issues they care most about. This opportunity for policymakers to participate in and engage with the security community was both helpful and important, especially as new security legislation is considered. After all, there have already been three AI-related bills introduced in Congress since the AI Executive Order was signed on October 30, 2023.

Day 1: AI Security & the Generative Red-Teaming Challenge 

AI Village, AI Risk and Vulnerability Alliance (ARVA), and Hackers on the Hill hosted a generative red-teaming (GRT) challenge: Live AI Hacking on the Hill. Robust Intelligence is proud to have been an organizing committee member. This challenge was approachable for hackers of every skill level and provided many policymakers a first-ever hands-on technical experience with AI red teaming. The ultimate objective of the GRT challenge was to force large language models (LLMs) to respond in a specific and often malicious manner. 

I had the opportunity to demonstrate and gather feedback for the Robust Intelligence AI Firewall, a security layer for generative AI that monitors and intercepts malicious inputs and outputs in real time. Staffers were able to plug attack prompts into the AI Firewall and observe these previously successful attempts get blocked from ever reaching the model itself.

The GRT challenge was so effective in showing staffers first-hand the security risks and potential for abuse with AI content generation. Surprise and concern were common reactions to the ease with which model guardrails were bypassed and toxic content was produced. Staffers were also interested in learning more about the AI Firewall and how it was able to block attacks which model guardrails failed against.

The GRT session was followed by a moderated panel discussing AI vulnerability reporting with six experts answering questions on the topic. Panelists discussed how AI vulnerabilities are currently reported through informal channels like Twitter threads, resulting in a need for a more systematic reporting approach. There is a call for centralized and clear guidelines as the current landscape is often characterized by ambiguity and hand-wavy standards. Panelists called out an absence of "regulation harmonization," which hinders effective AI risk mitigation.

Day 2: Briefing the Senate Committee on Homeland Security

The small-group breakout sessions were a personal highlight of Hackers on the Hill. After staffers educated hackers on the legislative process, groups splintered for more personal cybersecurity discussions. As part of this, I had the opportunity to represent Robust Intelligence in a select presentation to the Senate Committee on Homeland Security. This included a meeting with a staffer for the Senate Subcommittee on Emerging Threats which was both incredible and enlightening.

Through these conversations, I learned more about how Congress thinks about AI security, shared some of the policy work being done at Robust Intelligence, and drove discussions about the regulatory progress we’d like to see in the future.

I found that when it comes to generative AI security, there are two predominant perspectives on Capitol Hill: one of less concern and more excitement about AI potential, and another with a bleaker “doomsday” outlook. Across the board, however, there is a shared acknowledgement that generative AI risks need to be better understood.

During our conversation, I highlighted findings from the NIST Adversarial Machine Learning Taxonomy, a resource which my colleagues from Robust Intelligence co-authored. I underscored the fact that generative models pose unprecedented challenges because their traditional security risks are compounded with new potential for abuse and ethical harm.

We covered specific concerns around prompt injection attacks, the need for model vulnerability transparency, and incentives for fixing these vulnerabilities and building more robust models. We also discussed the difficulty in quantifying harms and the ongoing debate about whether ethical harms, such as biased or hateful content, should be grouped with traditional security issues like sensitive data leakage. These distinctions are important for informing how Congressional committees define and prioritize harms, which will subsequently shape future legislation. Finally, we reflected on the European Union AI Act and considered how some of its governance and compliance requirements might inform future American regulations.

Takeaways and a look ahead

During my time at the conference, there was a recurring emphasis on the Congressional struggle to formulate actionable cybersecurity guidance stemming from a lack of technical expertise among senior staff members. I found this to be the exact reason why an event like Hackers on the Hill is so critical—it bridges the gap between the technical community and policymakers with honest, open, and informative conversations that will ultimately inform stronger legislation.

In summary, my key takeaways from there event were as follows:

  • Until recently, discussions on AI policy had focused on risks pertaining to bias and fairness. The threat posed by bad actors attacking AI systems has been recognized as a real and growing concern, which is exacerbated by generative AI. Policymakers are motivated to take action.
  • While there is undoubtedly a ton of discussion and work going into AI security, there remains a general concern about how to practically operationalize recommendations, taxonomies, and standards within organizations.
  • Policymakers recognize that efforts to define and prioritize harms will considerably impact future legislation, and that it will take time to come to agreement and advance policy. Organizations can’t afford to wait and will need to take action in the interim.

As experts in AI security, Robust Intelligence is helping to inform various regulations and standards. These include our co-authorship of the NIST Adversarial Machine Learning Taxonomy, co-development and open sourcing of the AI Risk Database with MITRE, and contributions to the OWASP Top 10 for LLMs. Hackers on the Hill is just another way we’re getting involved with the hopes of educating the community and driving meaningful advancement towards more secure AI, and I was honored to participate and share our perspective.

To learn more about our end-to-end AI security solution and how we help organizations solve the challenges listed above, contact us to schedule a demo.

January 16, 2024
-
5
minute read

AI Security Insights from Hackers on the Hill

Last week, I had the incredible opportunity to represent Robust Intelligence at the Hackers on the Hill conference in Washington D.C. This two-day event, organized annually by the “I Am The Cavalry” organization since 2017, provides a unique forum for cybersecurity experts to engage with Congressional staffers on Capitol Hill. Conversations are candid and free from commercial agendas so as to build long-term relationships between the technical community and public policymakers.

In short, Hackers on the Hill 2024 was a tremendous success. Both hackers and staffers were involved in deep discussions about some of the most pressing topics in cybersecurity including critical infrastructure, privacy, supply chain security, and unsurprisingly, artificial intelligence.

Hackers not only attended panel discussions, networking sessions, and plenary sessions, but were also involved in a hands-on event and a small-group briefing for congressional staffers on the issues they care most about. This opportunity for policymakers to participate in and engage with the security community was both helpful and important, especially as new security legislation is considered. After all, there have already been three AI-related bills introduced in Congress since the AI Executive Order was signed on October 30, 2023.

Day 1: AI Security & the Generative Red-Teaming Challenge 

AI Village, AI Risk and Vulnerability Alliance (ARVA), and Hackers on the Hill hosted a generative red-teaming (GRT) challenge: Live AI Hacking on the Hill. Robust Intelligence is proud to have been an organizing committee member. This challenge was approachable for hackers of every skill level and provided many policymakers a first-ever hands-on technical experience with AI red teaming. The ultimate objective of the GRT challenge was to force large language models (LLMs) to respond in a specific and often malicious manner. 

I had the opportunity to demonstrate and gather feedback for the Robust Intelligence AI Firewall, a security layer for generative AI that monitors and intercepts malicious inputs and outputs in real time. Staffers were able to plug attack prompts into the AI Firewall and observe these previously successful attempts get blocked from ever reaching the model itself.

The GRT challenge was so effective in showing staffers first-hand the security risks and potential for abuse with AI content generation. Surprise and concern were common reactions to the ease with which model guardrails were bypassed and toxic content was produced. Staffers were also interested in learning more about the AI Firewall and how it was able to block attacks which model guardrails failed against.

The GRT session was followed by a moderated panel discussing AI vulnerability reporting with six experts answering questions on the topic. Panelists discussed how AI vulnerabilities are currently reported through informal channels like Twitter threads, resulting in a need for a more systematic reporting approach. There is a call for centralized and clear guidelines as the current landscape is often characterized by ambiguity and hand-wavy standards. Panelists called out an absence of "regulation harmonization," which hinders effective AI risk mitigation.

Day 2: Briefing the Senate Committee on Homeland Security

The small-group breakout sessions were a personal highlight of Hackers on the Hill. After staffers educated hackers on the legislative process, groups splintered for more personal cybersecurity discussions. As part of this, I had the opportunity to represent Robust Intelligence in a select presentation to the Senate Committee on Homeland Security. This included a meeting with a staffer for the Senate Subcommittee on Emerging Threats which was both incredible and enlightening.

Through these conversations, I learned more about how Congress thinks about AI security, shared some of the policy work being done at Robust Intelligence, and drove discussions about the regulatory progress we’d like to see in the future.

I found that when it comes to generative AI security, there are two predominant perspectives on Capitol Hill: one of less concern and more excitement about AI potential, and another with a bleaker “doomsday” outlook. Across the board, however, there is a shared acknowledgement that generative AI risks need to be better understood.

During our conversation, I highlighted findings from the NIST Adversarial Machine Learning Taxonomy, a resource which my colleagues from Robust Intelligence co-authored. I underscored the fact that generative models pose unprecedented challenges because their traditional security risks are compounded with new potential for abuse and ethical harm.

We covered specific concerns around prompt injection attacks, the need for model vulnerability transparency, and incentives for fixing these vulnerabilities and building more robust models. We also discussed the difficulty in quantifying harms and the ongoing debate about whether ethical harms, such as biased or hateful content, should be grouped with traditional security issues like sensitive data leakage. These distinctions are important for informing how Congressional committees define and prioritize harms, which will subsequently shape future legislation. Finally, we reflected on the European Union AI Act and considered how some of its governance and compliance requirements might inform future American regulations.

Takeaways and a look ahead

During my time at the conference, there was a recurring emphasis on the Congressional struggle to formulate actionable cybersecurity guidance stemming from a lack of technical expertise among senior staff members. I found this to be the exact reason why an event like Hackers on the Hill is so critical—it bridges the gap between the technical community and policymakers with honest, open, and informative conversations that will ultimately inform stronger legislation.

In summary, my key takeaways from there event were as follows:

  • Until recently, discussions on AI policy had focused on risks pertaining to bias and fairness. The threat posed by bad actors attacking AI systems has been recognized as a real and growing concern, which is exacerbated by generative AI. Policymakers are motivated to take action.
  • While there is undoubtedly a ton of discussion and work going into AI security, there remains a general concern about how to practically operationalize recommendations, taxonomies, and standards within organizations.
  • Policymakers recognize that efforts to define and prioritize harms will considerably impact future legislation, and that it will take time to come to agreement and advance policy. Organizations can’t afford to wait and will need to take action in the interim.

As experts in AI security, Robust Intelligence is helping to inform various regulations and standards. These include our co-authorship of the NIST Adversarial Machine Learning Taxonomy, co-development and open sourcing of the AI Risk Database with MITRE, and contributions to the OWASP Top 10 for LLMs. Hackers on the Hill is just another way we’re getting involved with the hopes of educating the community and driving meaningful advancement towards more secure AI, and I was honored to participate and share our perspective.

To learn more about our end-to-end AI security solution and how we help organizations solve the challenges listed above, contact us to schedule a demo.

Blog

Related articles

February 3, 2022
-
3
minute read

Introducing our Incredible ML Team!

For:
September 20, 2024
-
5
minute read

Extracting Training Data from Chatbots

For:
November 16, 2021
-
4
minute read

Zillow iBuying: What Happened and Lessons Learned

For:
January 9, 2024
-
5
minute read

Robust Intelligence Co-authors NIST Adversarial Machine Learning Taxonomy

For:
December 21, 2023
-
4
minute read

AI Governance Policy Roundup (December 2023)

For:
October 30, 2023
-
5
minute read

The White House Executive Order on AI: Assessing AI Risk with Automated Testing

For: