Artwork

Contenuto fornito da Kashif Manzoor. Tutti i contenuti dei podcast, inclusi episodi, grafica e descrizioni dei podcast, vengono caricati e forniti direttamente da Kashif Manzoor o dal partner della piattaforma podcast. Se ritieni che qualcuno stia utilizzando la tua opera protetta da copyright senza la tua autorizzazione, puoi seguire la procedura descritta qui https://it.player.fm/legal.
Player FM - App Podcast
Vai offline con l'app Player FM !

AI Security and Agentic Risks Every Business Needs to Understand with Alexander Schlager

27:42
 
Condividi
 

Manage episode 506108095 series 2922369
Contenuto fornito da Kashif Manzoor. Tutti i contenuti dei podcast, inclusi episodi, grafica e descrizioni dei podcast, vengono caricati e forniti direttamente da Kashif Manzoor o dal partner della piattaforma podcast. Se ritieni che qualcuno stia utilizzando la tua opera protetta da copyright senza la tua autorizzazione, puoi seguire la procedura descritta qui https://it.player.fm/legal.

In this episode of Open Tech Talks, we delve into the critical topics of AI security, explainability, and the risks associated with agentic AI. As organizations adopt Generative AI and Large Language Models (LLMs), ensuring safety, trust, and responsible usage becomes essential. This conversation covers how runtime protection works as a proxy between users and AI models, why explainability is key to user trust, and how cybersecurity teams are becoming central to AI innovation.

Chapters

00:00 Introduction to AI Security and eIceberg 02:45 The Evolution of AI Explainability 05:58 Runtime Protection and AI Safety 07:46 Adoption Patterns in AI Security 10:51 Agentic AI: Risks and Management 13:47 Building Effective Agentic AI Workflows 16:42 Governance and Compliance in AI 19:37 The Role of Cybersecurity in AI Innovation 22:36 Lessons Learned and Future Directions

Episode # 166

Today's Guest: Alexander Schlager, Founder and CEO of AIceberg.ai

He's founded a next-generation AI cybersecurity company that's revolutionizing how we approach digital defense. With a strong background in enterprise tech and a visionary outlook on the future of AI, Alexander is doing more than just developing tools — he's restoring trust in an era of automation.

What Listeners Will Learn:

  • Why real-time AI security and runtime protection are essential for safe deployments
  • How explainable AI builds trust with users and regulators
  • The unique risks of agentic AI and how to manage them responsibly
  • Why AI safety and governance are becoming strategic priorities for companies
  • How education, awareness, and upskilling help close the AI skills gap
  • Why natural language processing (NLP) is becoming the default interface for enterprise technology

Keywords:

AI security, generative AI, agentic AI, explainability, runtime protection, cybersecurity, compliance, AI governance, machine learning

Resources:

  continue reading

177 episodi

Artwork
iconCondividi
 
Manage episode 506108095 series 2922369
Contenuto fornito da Kashif Manzoor. Tutti i contenuti dei podcast, inclusi episodi, grafica e descrizioni dei podcast, vengono caricati e forniti direttamente da Kashif Manzoor o dal partner della piattaforma podcast. Se ritieni che qualcuno stia utilizzando la tua opera protetta da copyright senza la tua autorizzazione, puoi seguire la procedura descritta qui https://it.player.fm/legal.

In this episode of Open Tech Talks, we delve into the critical topics of AI security, explainability, and the risks associated with agentic AI. As organizations adopt Generative AI and Large Language Models (LLMs), ensuring safety, trust, and responsible usage becomes essential. This conversation covers how runtime protection works as a proxy between users and AI models, why explainability is key to user trust, and how cybersecurity teams are becoming central to AI innovation.

Chapters

00:00 Introduction to AI Security and eIceberg 02:45 The Evolution of AI Explainability 05:58 Runtime Protection and AI Safety 07:46 Adoption Patterns in AI Security 10:51 Agentic AI: Risks and Management 13:47 Building Effective Agentic AI Workflows 16:42 Governance and Compliance in AI 19:37 The Role of Cybersecurity in AI Innovation 22:36 Lessons Learned and Future Directions

Episode # 166

Today's Guest: Alexander Schlager, Founder and CEO of AIceberg.ai

He's founded a next-generation AI cybersecurity company that's revolutionizing how we approach digital defense. With a strong background in enterprise tech and a visionary outlook on the future of AI, Alexander is doing more than just developing tools — he's restoring trust in an era of automation.

What Listeners Will Learn:

  • Why real-time AI security and runtime protection are essential for safe deployments
  • How explainable AI builds trust with users and regulators
  • The unique risks of agentic AI and how to manage them responsibly
  • Why AI safety and governance are becoming strategic priorities for companies
  • How education, awareness, and upskilling help close the AI skills gap
  • Why natural language processing (NLP) is becoming the default interface for enterprise technology

Keywords:

AI security, generative AI, agentic AI, explainability, runtime protection, cybersecurity, compliance, AI governance, machine learning

Resources:

  continue reading

177 episodi

Tutti gli episodi

×
 
Loading …

Benvenuto su Player FM!

Player FM ricerca sul web podcast di alta qualità che tu possa goderti adesso. È la migliore app di podcast e funziona su Android, iPhone e web. Registrati per sincronizzare le iscrizioni su tutti i tuoi dispositivi.

 

Guida rapida

Ascolta questo spettacolo mentre esplori
Riproduci