
Provably safe AGI, with Steve Omohundro

42:59

AI systems have become more powerful in the last few years, and are expected to become even more powerful in the years ahead. The question naturally arises: what, if anything, should humanity be doing to increase the likelihood that these forthcoming powerful systems will be safe, rather than destructive?
Our guest in this episode has a long and distinguished history of analysing that question, and he has some new proposals to share with us. He is Steve Omohundro, the CEO of Beneficial AI Research, an organisation which is working to ensure that artificial intelligence is safe and beneficial for humanity.
Steve has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He went on to be an award-winning computer science professor at the University of Illinois. At that time, he developed the notion of basic AI drives, which we talk about shortly, as well as a number of potential key AI safety mechanisms.
Among his many other roles, Steve served as a Research Scientist at Meta, the parent company of Facebook, where he worked on generative models and AI-based simulation, and he is an advisor to MIRI, the Machine Intelligence Research Institute.
Selected follow-ups:
Steve Omohundro: Innovative ideas for a better world
Metaculus forecast for the date of weak AGI
"The Basic AI Drives" (PDF, 2008)
TED Talk by Max Tegmark: How to Keep AI Under Control
Apple Secure Enclave
Meta Research: Teaching AI advanced mathematical reasoning
DeepMind AlphaGeometry
Microsoft Lean theorem prover
Terence Tao (Wikipedia)
NeurIPS Tutorial on Machine Learning for Theorem Proving (2023)
The team at MIRI
Music: Spike Protein, by Koi Discovery, available under the CC0 1.0 Universal Public Domain Dedication


Chapters

1. Provably safe AGI, with Steve Omohundro (00:00:00)

2. [Ad] Out-of-the-box insights from digital leaders (00:08:56)

3. (Cont.) Provably safe AGI, with Steve Omohundro (00:09:34)
