
Content provided by Lawfare and Goat Rodeo. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Lawfare and Goat Rodeo or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://it.player.fm/legal.

Eugene Volokh on AI Libel

53:48
 

If someone lies about you, you can usually sue them for defamation. But what if that someone is ChatGPT? Already in Australia, the mayor of a town outside Melbourne has threatened to sue OpenAI because ChatGPT falsely named him a guilty party in a bribery scandal. Could that happen in America? Does our libel law allow that? What does it even mean for a large language model to act with "malice"? Does the First Amendment put any limits on the ability to hold these models, and the companies that make them, accountable for false statements they make? And what's the best way to deal with this problem: private lawsuits or government regulation?

On this episode of Arbiters of Truth, our series on the information ecosystem, Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, discussed these questions with First Amendment expert Eugene Volokh, Professor of Law at UCLA and the author of a draft paper entitled "Large Libel Models."



Hosted on Acast. See acast.com/privacy for more information.


152 episodes
