Content provided by Center for Humane Technology, Tristan Harris, Aza Raskin, and The Center for Humane Technology. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Center for Humane Technology, Tristan Harris, Aza Raskin, and The Center for Humane Technology or by their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://it.player.fm/legal.

How to Think About AI Consciousness With Anil Seth

47:58

Will AI ever start to think for itself? If it did, how would we know, and what would it mean?

In this episode, Dr. Anil Seth and Aza discuss the science, ethics, and incentives of artificial consciousness. Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and the author of Being You: A New Science of Consciousness.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA

Frankenstein by Mary Shelley

A free, plain-text version of Shelley’s classic of gothic literature.

OpenAI’s GPT-4o Demo

A video from OpenAI demonstrating GPT-4o’s remarkable ability to mimic human sentience.

You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills

The 2023 NYT op-ed by Tristan, Aza, and Yuval Noah Harari outlining the AI dilemma.

What Is It Like to Be a Bat?

Thomas Nagel’s essay on the nature of consciousness.

Are You Living in a Computer Simulation?

Philosopher Nick Bostrom’s essay on the simulation hypothesis.

Anthropic’s Golden Gate Claude

A blog post about Anthropic’s recent discovery of millions of distinct concepts within their LLM, a major development in the field of AI interpretability.

RECOMMENDED YUA EPISODES

Esther Perel on Artificial Intimacy

Talking With Animals... Using AI

Synthetic Humanity: AI & What’s At Stake
