June 21, 2021 - 1 minute read

How To Secure AI Systems @ Stanford MLSys Seminar

As organizations adopt AI technologies, they also inherit AI failures. These failures often manifest as AI models that produce erroneous predictions that go undetected. In the 2021 Stanford MLSys Seminar, Robust Intelligence Co-founder & CEO Yaron Singer discusses the root causes of AI models going haywire and presents a rigorous framework for eliminating AI risk. He shows how this methodology can serve as a building block for continuous testing and firewall systems for AI.
