“Real Attackers Don’t Compute Gradients,” a fireside chat with the co-authors on adversarial ML

Recently presented at IEEE SaTML’23 by co-authors from academia and industry, “Real Attackers Don’t Compute Gradients: Bridging the Gap between Adversarial ML Research and Practice” highlights practical considerations often overlooked by academic researchers. Robust Intelligence was privileged to host a webinar with the paper’s authors for a Q&A covering the gaps between adversarial ML research and security practice, and the recommendations that can lead to more concrete progress toward secure machine learning.

We hope you enjoy this conversation with Giovanni Apruzzese (University of Liechtenstein), Fabio Pierazzi (King's College London), David Freeman (Meta), and Hyrum Anderson (Robust Intelligence).

Link to the paper: https://arxiv.org/abs/2212.14315