September 10, 2024
-
5
minute read

Leveraging Hardened Cybersecurity Frameworks for AI Security through the Common Weakness Enumeration (CWE)

In the rapidly evolving landscape of artificial intelligence, AI security is one aspect of paramount concern. As AI systems are increasingly integrated into critical infrastructures, from healthcare to finance, ensuring their robustness against malicious actors is crucial.

Although AI is not a new technology, given its recent advancements in usage and deployment in distributed and large-scale architectures, the control plane and the data plane are generally not distinct. This contrast from traditional software systems introduces complexity in how security weaknesses or vulnerabilities in AI systems are identified, managed, and mitigated. As AI security issues are a growing concern, one effective strategy for managing and mitigating risk is through using established, applicable cybersecurity frameworks. Among these, the Common Weakness Enumeration (CWE) AI Working Group now reviews AI-related submissions through the Content Developer Repository (CDR) on GitHub and formally publishes CWE-identifiable entries.

The Common Weakness Enumerations (CWE) list was created in 2006 and comprises a catalog of security weaknesses including descriptions of each weakness, potential consequences, and examples to aid understanding and mitigating a security risk. Used in a traditional cybersecurity context, a security weakness is a condition in software, firmware, hardware, or a service component, that under certain circumstances could contribute to the introduction of security vulnerabilities. When an AI-related weakness can be mitigated early in the AI development lifecycle, significant costs can be saved by preventing possible downstream AI-related vulnerabilities.

Initiated by the MITRE Corporation, the CWE list became a standard through the collaborative efforts of the software security community and gained traction as it provided a standardized taxonomy for identifying and categorizing software weaknesses essential for improving software security best practices. Over time, the widespread adoption of CWEs by organizations, integration into security tools, and endorsement by industry and government bodies solidified its status as a critical standard to support cybersecurity best practices.

With 900+ entries, a CWE represents a software or hardware weakness that may or may not map to one or many software or hardware vulnerabilities in a variety of applications or products. Today, security practitioners and engineers rely on CWE as part of the secure software development lifecycle, and a measure for Secure by Design. CWEs are ideally addressed in the design or development phase to prevent post-deployment downstream effects such as costly security vulnerabilities.

The advantages of AI-related CWE submissions intake and review process enables businesses to address AI-related security weaknesses through a hardened security apparatus that is already utilized in the existing secure software development lifecycle and cybersecurity incident response activities. In this first blogpost of an upcoming sequence, the first published AI-related CWE entries are introduced and formalized around what it means for AI security. 

The Common Weakness Enumeration in the context of AI: 

With CWE’s recent expansion into AI to address the growing need for standardized identification and categorization of weaknesses specific to artificial intelligence systems, the CWE AI Working Group assembled experts from various technical fields across industry and government. Through this collaboration, real and perceived AI-related gaps that are within scope for identifiable CWEs are reviewed and published. The process entails intaking submissions from the community and identifying what is within scope for an AI-related weakness through discussion in a CWE AI Working Group. The AI-related weakness is then formally developed according to the CWE schema, documented, and published as a CWE entry. 

Through these efforts, new AI-related CWEs were published in the CWE 4.15 July release. CWE-1426 highlights the danger of not properly validating AI-generated outputs and CWE-1039, an automated recognition mechanism with inadequate detection or handling of adversarial input perturbations, was updated as an AI-related weakness. Additionally, a new demonstrative example was published in CWE-77: Improper Neutralization of Special Elements used in a Command ('Command Injection'), which shows how AI outputs can be manipulated for command injection attacks. In addition, an observed example is a publicly reported vulnerability in real-world products that exhibit the weakness. New AI-related observed examples were published to several entries including SQL injection (CWE-89), path traversal (CWE-22), and code injection (CWE-94).

What does this mean for AI Security?

Understanding the AI-security weakness that can result in an AI-related vulnerability enables engineers to mitigate them before AI model deployment, strengthening the AI pipeline and saving costs by preventing downstream effects.

Robust Intelligence’s participation in the leadership committee of the CWE AI Working Group brings specialized AI expertise to help scope AI-related security weaknesses. This further enables a more precise understanding of AI-related security risks, identification of root cause issues, and mitigation plans that align with existing cybersecurity protocols. This leadership and expertise aids in developing effective mitigation strategies to AI technologies. Furthermore, the collaboration and knowledge sharing among non-profit, government, and industry experts uplifts all sectors in formalizing an operational framework around AI-related security weaknesses.

As CWE continues to review AI-related submissions, future content releases are upcoming in late 2024 and early 2025. In collaboration with industry, government, and community partners, Robust Intelligence continues to contribute to the analysis and review of AI-related submissions, enabling improved AI security root cause analysis for stakeholders.

Acknowledgements:

  • Alec Summers, Principal Cybersecurity Engineer, CVE/CWE Project Leader, MITRE
  • Steve Christey Coley, Principal Information Security Engineer, CWE Technical Lead, MITRE
September 10, 2024
-
5
minute read

Leveraging Hardened Cybersecurity Frameworks for AI Security through the Common Weakness Enumeration (CWE)

In the rapidly evolving landscape of artificial intelligence, AI security is one aspect of paramount concern. As AI systems are increasingly integrated into critical infrastructures, from healthcare to finance, ensuring their robustness against malicious actors is crucial.

Although AI is not a new technology, given its recent advancements in usage and deployment in distributed and large-scale architectures, the control plane and the data plane are generally not distinct. This contrast from traditional software systems introduces complexity in how security weaknesses or vulnerabilities in AI systems are identified, managed, and mitigated. As AI security issues are a growing concern, one effective strategy for managing and mitigating risk is through using established, applicable cybersecurity frameworks. Among these, the Common Weakness Enumeration (CWE) AI Working Group now reviews AI-related submissions through the Content Developer Repository (CDR) on GitHub and formally publishes CWE-identifiable entries.

The Common Weakness Enumerations (CWE) list was created in 2006 and comprises a catalog of security weaknesses including descriptions of each weakness, potential consequences, and examples to aid understanding and mitigating a security risk. Used in a traditional cybersecurity context, a security weakness is a condition in software, firmware, hardware, or a service component, that under certain circumstances could contribute to the introduction of security vulnerabilities. When an AI-related weakness can be mitigated early in the AI development lifecycle, significant costs can be saved by preventing possible downstream AI-related vulnerabilities.

Initiated by the MITRE Corporation, the CWE list became a standard through the collaborative efforts of the software security community and gained traction as it provided a standardized taxonomy for identifying and categorizing software weaknesses essential for improving software security best practices. Over time, the widespread adoption of CWEs by organizations, integration into security tools, and endorsement by industry and government bodies solidified its status as a critical standard to support cybersecurity best practices.

With 900+ entries, a CWE represents a software or hardware weakness that may or may not map to one or many software or hardware vulnerabilities in a variety of applications or products. Today, security practitioners and engineers rely on CWE as part of the secure software development lifecycle, and a measure for Secure by Design. CWEs are ideally addressed in the design or development phase to prevent post-deployment downstream effects such as costly security vulnerabilities.

The advantages of AI-related CWE submissions intake and review process enables businesses to address AI-related security weaknesses through a hardened security apparatus that is already utilized in the existing secure software development lifecycle and cybersecurity incident response activities. In this first blogpost of an upcoming sequence, the first published AI-related CWE entries are introduced and formalized around what it means for AI security. 

The Common Weakness Enumeration in the context of AI: 

With CWE’s recent expansion into AI to address the growing need for standardized identification and categorization of weaknesses specific to artificial intelligence systems, the CWE AI Working Group assembled experts from various technical fields across industry and government. Through this collaboration, real and perceived AI-related gaps that are within scope for identifiable CWEs are reviewed and published. The process entails intaking submissions from the community and identifying what is within scope for an AI-related weakness through discussion in a CWE AI Working Group. The AI-related weakness is then formally developed according to the CWE schema, documented, and published as a CWE entry. 

Through these efforts, new AI-related CWEs were published in the CWE 4.15 July release. CWE-1426 highlights the danger of not properly validating AI-generated outputs and CWE-1039, an automated recognition mechanism with inadequate detection or handling of adversarial input perturbations, was updated as an AI-related weakness. Additionally, a new demonstrative example was published in CWE-77: Improper Neutralization of Special Elements used in a Command ('Command Injection'), which shows how AI outputs can be manipulated for command injection attacks. In addition, an observed example is a publicly reported vulnerability in real-world products that exhibit the weakness. New AI-related observed examples were published to several entries including SQL injection (CWE-89), path traversal (CWE-22), and code injection (CWE-94).

What does this mean for AI Security?

Understanding the AI-security weakness that can result in an AI-related vulnerability enables engineers to mitigate them before AI model deployment, strengthening the AI pipeline and saving costs by preventing downstream effects.

Robust Intelligence’s participation in the leadership committee of the CWE AI Working Group brings specialized AI expertise to help scope AI-related security weaknesses. This further enables a more precise understanding of AI-related security risks, identification of root cause issues, and mitigation plans that align with existing cybersecurity protocols. This leadership and expertise aids in developing effective mitigation strategies to AI technologies. Furthermore, the collaboration and knowledge sharing among non-profit, government, and industry experts uplifts all sectors in formalizing an operational framework around AI-related security weaknesses.

As CWE continues to review AI-related submissions, future content releases are upcoming in late 2024 and early 2025. In collaboration with industry, government, and community partners, Robust Intelligence continues to contribute to the analysis and review of AI-related submissions, enabling improved AI security root cause analysis for stakeholders.

Acknowledgements:

  • Alec Summers, Principal Cybersecurity Engineer, CVE/CWE Project Leader, MITRE
  • Steve Christey Coley, Principal Information Security Engineer, CWE Technical Lead, MITRE
Blog

Related articles

August 25, 2021
-
4
minute read

Machine Learning for eCommerce Fraud Management with Riskified's CTO

For:
April 26, 2024
-
5
minute read

AI Cyber Threat Intelligence Roundup: April 2024

For:
May 29, 2024
-
4
minute read

Robust Intelligence wins three prestigious cybersecurity awards in May

For:
July 29, 2024
-
5
minute read

Bypassing Meta’s LLaMA Classifier: A Simple Jailbreak

For:
August 1, 2024
-
4
minute read

Four ways AI application security differs from traditional application security

For:
May 28, 2024
-
5
minute read

Fine-Tuning LLMs Breaks Their Safety and Security Alignment

For: