
Content provided by EPIIPLUS 1 Ltd / Azeem Azhar and Azeem Azhar. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by EPIIPLUS 1 Ltd / Azeem Azhar and Azeem Azhar or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described here: https://it.player.fm/legal.

Azeem’s Picks: How to Practice Responsible AI with Dr. Rumman Chowdhury

48:54
 
Manage episode 381617991 series 2498265

Artificial Intelligence (AI) is on every business leader’s agenda. How do you ensure the AI systems you deploy are harmless and trustworthy? This month, Azeem picks some of his favorite conversations with leading AI safety experts to help you break through the noise.

Today’s pick is Azeem’s conversation with Dr. Rumman Chowdhury, a pioneer in the field of applied algorithmic ethics. She runs Parity Consulting, the Parity Responsible Innovation Fund, and she’s a Responsible AI Fellow at the Berkman Klein Center for Internet & Society at Harvard University.

They discuss:

  • How you can assess and diagnose bias in unexplainable “black box” algorithms.
  • Why responsible AI demands top-down organizational change, implementing new metrics, and systems of redress.
  • The emerging field of “Responsible Machine Learning Operations.”

Further resources:


186 episodes

