Content provided by Daniel Filan. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Daniel Filan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://it.player.fm/legal.

17 - Training for Very High Reliability with Daniel Ziegler

1:00:59

Sometimes, people talk about making AI systems safe by taking examples where they fail and training them to do well on those. But how can we actually do this well, especially when we can't use a computer program to say what a 'failure' is? In this episode, I speak with Daniel Ziegler about his research group's efforts to try doing this with present-day language models, and what they learned.
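
To give a feel for the loop under discussion, here is a minimal toy sketch of adversarial training with human labels in place of a programmatic failure check. Everything in it (ToyClassifier, human_label, adversary_propose, the "injury" task) is an illustrative assumption of mine, not the paper's actual code; in the real project, training updates model weights rather than memorising words.

```python
# Toy sketch of an adversarial-training round for a "very reliable" filter.
# All names and the word-matching "classifier" are illustrative stand-ins.

import random

def human_label(snippet: str) -> bool:
    """Stand-in for a human judgement ('does this describe injury?').
    A real project needs humans here because no program can decide it."""
    return "injured" in snippet

def adversary_propose(n: int = 5) -> list[str]:
    """Stand-in for human/tool-assisted search for inputs that fool the filter."""
    templates = ["she was injured badly", "they walked home", "he got injured"]
    return random.choices(templates, k=n)

class ToyClassifier:
    def __init__(self) -> None:
        self.blocked_words: set[str] = set()

    def predict_unsafe(self, snippet: str) -> bool:
        return any(w in self.blocked_words for w in snippet.split())

    def train_on_failures(self, failures: list[str]) -> None:
        # Deliberately naive "training": memorise words from failing snippets.
        for snippet in failures:
            self.blocked_words.update(snippet.split())

clf = ToyClassifier()
for _ in range(3):
    candidates = adversary_propose()
    # A failure: humans judge the snippet unsafe, but the classifier passes it.
    failures = [s for s in candidates
                if human_label(s) and not clf.predict_unsafe(s)]
    clf.train_on_failures(failures)
```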

Listeners beware: this episode contains a spoiler for the Animorphs franchise around minute 41 (in the 'Fanfiction' section of the transcript).

Topics we discuss, and timestamps:

- 00:00:40 - Summary of the paper

- 00:02:23 - Alignment as scalable oversight and catastrophe minimization

- 00:08:06 - Novel contributions

- 00:14:20 - Evaluating adversarial robustness

- 00:20:26 - Adversary construction

- 00:35:14 - The task

- 00:38:23 - Fanfiction

- 00:42:15 - Estimators to reduce labelling burden

- 00:45:39 - Future work

- 00:50:12 - About Redwood Research

The transcript: axrp.net/episode/2022/08/21/episode-17-training-for-very-high-reliability-daniel-ziegler.html

Daniel Ziegler on Google Scholar: scholar.google.com/citations?user=YzfbfDgAAAAJ

Research we discuss:

- Daniel's paper, Adversarial Training for High-Stakes Reliability: arxiv.org/abs/2205.01663

- Low-stakes alignment: alignmentforum.org/posts/TPan9sQFuPP6jgEJo/low-stakes-alignment

- Red Teaming Language Models with Language Models: arxiv.org/abs/2202.03286

- Uncertainty Estimation for Language Reward Models: arxiv.org/abs/2203.07472

- Eliciting Latent Knowledge: docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit
