35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization

2:17:24
 
How do we figure out what large language models believe? In fact, do they even have beliefs? Do those beliefs have locations, and if so, can we edit those locations to change the beliefs? Also, how are we going to get AI to perform tasks so hard that we can't figure out if they succeeded at them? In this episode, I chat with Peter Hase about his research into these questions.

Patreon: https://www.patreon.com/axrpodcast

Ko-fi: https://ko-fi.com/axrpodcast

The transcript: https://axrp.net/episode/2024/08/24/episode-35-peter-hase-llm-beliefs-easy-to-hard-generalization.html

Topics we discuss, and timestamps:

0:00:36 - NLP and interpretability

0:10:20 - Interpretability lessons

0:32:22 - Belief interpretability

1:00:12 - Localizing and editing models' beliefs

1:19:18 - Beliefs beyond language models

1:27:21 - Easy-to-hard generalization

1:47:16 - What do easy-to-hard results tell us?

1:57:33 - Easy-to-hard vs weak-to-strong

2:03:50 - Different notions of hardness

2:13:01 - Easy-to-hard vs weak-to-strong, round 2

2:15:39 - Following Peter's work

Peter on Twitter: https://x.com/peterbhase

Peter's papers:

Foundational Challenges in Assuring Alignment and Safety of Large Language Models: https://arxiv.org/abs/2404.09932

Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs: https://arxiv.org/abs/2111.13654

Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models: https://arxiv.org/abs/2301.04213

Are Language Models Rational? The Case of Coherence Norms and Belief Revision: https://arxiv.org/abs/2406.03442

The Unreasonable Effectiveness of Easy Training Data for Hard Tasks: https://arxiv.org/abs/2401.06751

Other links:

Toy Models of Superposition: https://transformer-circuits.pub/2022/toy_model/index.html

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV): https://arxiv.org/abs/1711.11279

Locating and Editing Factual Associations in GPT (aka the ROME paper): https://arxiv.org/abs/2202.05262

Of nonlinearity and commutativity in BERT: https://arxiv.org/abs/2101.04547

Inference-Time Intervention: Eliciting Truthful Answers from a Language Model: https://arxiv.org/abs/2306.03341

Editing a classifier by rewriting its prediction rules: https://arxiv.org/abs/2112.01008

Discovering Latent Knowledge Without Supervision (aka the Collin Burns CCS paper): https://arxiv.org/abs/2212.03827

Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision: https://arxiv.org/abs/2312.09390

Concrete problems in AI safety: https://arxiv.org/abs/1606.06565

Rissanen Data Analysis: Examining Dataset Characteristics via Description Length: https://arxiv.org/abs/2103.03872

Episode art by Hamish Doodles: hamishdoodles.com
