Deep Double Descent
We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again with increasing model size, data size, or training time. This effect is often avoided through careful regularization. While this behavior appears to be fairly universal, we don’t yet fully understand why it happens, and view further study of this phenomenon as an important research direction.
Source:
https://openai.com/research/deep-double-descent
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
---
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.
Chapters
1. Deep Double Descent (00:00:00)
2. Model-wise double descent (00:02:28)
3. Sample-wise non-monotonicity (00:04:39)
4. Epoch-wise double descent (00:06:14)
85 episodes