Partnering across the ecosystem to deliver AI security

Through our partnerships, we’re building community and stronger AI security technology.

Our Partners

Learn how Robust Intelligence integrates with leading technology platforms, enhances service provider offerings, and contributes to the advancement of AI and security standards.
Technology

Connect Robust Intelligence with the CrowdStrike next-gen SIEM to analyze our AI Firewall data and mitigate risk in real time.

Technology

Measure and monitor AI risk in real time by integrating the AI Firewall with the Datadog observability platform.

Standards

In addition to being a founding member of MITRE ATLAS, we partnered with MITRE to open-source our AI Risk Database.

Technology

Our integration with MongoDB Atlas Vector Search enhances LLM outputs while safeguarding against undesired responses.

Technology

Our technology partnership enables companies to test models natively within Databricks, protecting them from AI risk.

Services

NEC partners with Robust Intelligence to validate their enterprise-ready generative AI offerings, ensuring safety and reliability.

Services

Recognizing the business impact of AI, Deloitte uses Robust Intelligence to provide AI services that are effective and secure.

Services

NTT Data is a global leader in IT services. They partner with Robust Intelligence to deploy safe, high-performance AI models.

Standards

We collaborate with NIST on efforts including the Adversarial Machine Learning Taxonomy and the U.S. AI Safety Institute Consortium.

Standards

Our team regularly contributes to the development of OWASP guidelines, including the Top 10 for LLM Applications.

Services

Hitachi Solutions partners with Robust Intelligence to support the safe and secure development and deployment of AI applications.

Services

KPMG partners with Robust Intelligence to ensure their clients' AI applications are safe, secure, and meet regulatory standards.

* Robust Intelligence is collaborating with the National Institute of Standards and Technology (NIST) in the Artificial Intelligence Safety Institute Consortium to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world. This will help ready the U.S. to address the capabilities of the next generation of AI models or systems, from frontier models to new applications and approaches, with appropriate risk management strategies. NIST does not evaluate commercial products under this Consortium and does not endorse any product or service used. Additional information on this Consortium can be found here.

Lena Smart
Chief Information Security Officer
Generative AI introduces an unmanaged security risk, which is compounded when enriching LLMs with supplemental data. Robust Intelligence's AI Firewall solves this critical problem, giving enterprises the confidence to use LLMs at scale. Our partnership makes it easier for customers to use generative AI while also keeping their data secure with guardrails in place.
Douglas Robbins
VP, Engineering & Prototyping
This collaboration and release of the AI Risk Database can directly enable more organizations to see for themselves how they are directly at risk and vulnerable in deploying specific types of AI-enabled systems. As the latest open-source tool under MITRE ATLAS, this capability will continue to inform risk assessment and mitigation priorities for organizations around the globe.