MLSecOps: Red Teaming, Threat Modeling, and Attack Methods of AI Apps; With Guest: Johann Rehberger
Johann Rehberger is an entrepreneur and Red Team Director at Electronic Arts. His career experience includes time with Microsoft and Uber, and he is the author of “Cybersecurity Attacks – Red Team Strategies: A practical guide to building a penetration testing program having homefield advantage” and the popular blog, EmbraceTheRed.com.
In this episode, Johann offers insights about how to apply a traditional security engineering mindset and red team approach to analyzing the AI/ML attack surface. We also discuss ways that organizations can adapt their traditional security postures to address the unique challenges of ML security.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
All episodes
Implementing Enterprise AI Governance: Balancing Ethics, Innovation & Risk for Business Success (38:39)
A Holistic Approach to Understanding the AI Lifecycle and Securing ML Systems: Protecting AI Through People, Processes & Technology; With Guest: Rob van der Veer (29:25)
ML Model Fairness: Measuring and Mitigating Algorithmic Disparities; With Guest: Nick Schmidt (35:33)
Privacy Engineering: Safeguarding AI & ML Systems in a Data-Driven Era; With Guest: Katharine Jarmul (46:44)
Indirect Prompt Injections and Threat Modeling of LLM Applications; With Guest: Kai Greshake (36:14)
ML Security: AI Incident Response Plans and Enterprise Risk Culture; With Guest: Patrick Hall (38:49)
MLSecOps: Red Teaming, Threat Modeling, and Attack Methods of AI Apps; With Guest: Johann Rehberger (40:29)