AI Security Interview Series: Jailbreaking LLMs Automatically featuring Amin Karbasi

For the first interview of our AI security interview series, we bring you Amin Karbasi, Associate Professor at Yale, and Yaron Singer, CEO of Robust Intelligence. They are co-authors of Tree of Attacks, a method for automatically jailbreaking black-box LLMs. In this session, they discuss their automated approach to jailbreaking LLMs, particularly black-box models whose internals are not publicly accessible.
