Two-Turn Debate Doesn’t Help Humans Answer Hard Reading Comprehension Questions
Using hard multiple-choice reading comprehension questions as a testbed, we assess whether presenting humans with arguments for two competing answer options, where one is correct and the other is incorrect, allows human judges to perform more accurately, even when one of the arguments is unreliable and deceptive. If this is helpful, we may be able to increase our justified trust in language-model-based systems by asking them to produce these arguments where needed. Previous research has shown that just a single turn of arguments in this format is not helpful to humans. However, as debate settings are characterized by a back-and-forth dialogue, we follow up on previous results to test whether adding a second round of counter-arguments is helpful to humans. We find that, regardless of whether they have access to arguments or not, humans perform similarly on our task. These findings suggest that, in the case of answering reading comprehension questions, debate is not a helpful format.
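To make the setup concrete, here is a minimal sketch (in Python) of the data involved in a two-turn judging protocol like the one the abstract describes. This is an illustrative assumption, not the authors' implementation; all class, field, and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DebateTurn:
    # One round of argumentation: one argument in favour of each answer option.
    argument_for_a: str
    argument_for_b: str

@dataclass
class JudgingItem:
    # A hard multiple-choice reading-comprehension question shown to a human judge.
    passage: str
    question: str
    option_a: str              # exactly one of the two options is correct
    option_b: str
    turns: list[DebateTurn]    # length 1 = single-turn arguments; length 2 adds counter-arguments

def judge_accuracy(judgments: dict[str, str], gold: dict[str, str]) -> float:
    # Fraction of questions where the judge's chosen option matches the correct answer.
    correct = sum(1 for qid, choice in judgments.items() if choice == gold[qid])
    return correct / len(judgments)
```

Comparing this accuracy measure across conditions (no arguments, one turn of arguments, two turns with counter-arguments) is the comparison the abstract describes; the paper reports that judges perform similarly across these conditions.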
Source:
https://arxiv.org/abs/2210.10860
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
---
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.
Chapters
1. Two turn debate (00:00:00)
2. Abstract (00:00:17)
3. 1 Introduction (00:01:26)
4. 2 Counter-Argument Writing Protocol (00:04:24)
5. 2.1 Multi-Turn Writing Task (00:04:28)
6. 2.2 Multi-Turn Judging Protocols (00:08:32)
7. 3 Results (00:11:45)
8. 4 Conclusion (00:15:51)