
Content provided by Jay Shah. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Jay Shah or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://it.player.fm/legal.

P1 Adversarial robustness in Neural Networks, Quantization and working at DeepMind | David Stutz

1:32:28
 
 


Part 1 of my podcast with David Stutz. (Part 2: https://youtu.be/IumJcB7bE20)

David is a research scientist at DeepMind working on building robust and safe deep learning models. Prior to joining DeepMind, he was a Ph.D. student at the Max Planck Institute for Informatics. He also maintains a fantastic blog on various topics related to machine learning and graduate life, which is insightful for young researchers.

Check out Rora: https://teamrora.com/jayshah
Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023

Timestamps:
00:00:00 Highlights and Sponsors
00:01:22 Intro
00:02:14 Interest in AI
00:12:26 Finding research interests
00:22:41 Robustness vs. generalization in deep neural networks
00:28:03 Generalization vs. model performance trade-off
00:37:30 On-manifold adversarial examples for better generalization
00:48:20 Vision transformers
00:49:45 Confidence-calibrated adversarial training
00:59:25 Improving hardware architecture for deep neural networks
01:08:45 What's the trade-off in quantization?
01:19:07 Amazing aspects of working at DeepMind
01:27:38 Learning the skill of abstraction when collaborating

David's homepage: https://davidstutz.de/
Blog: https://davidstutz.de/category/blog/
Research work: https://scholar.google.com/citations?user=TxEy3cwAAAAJ&hl=en

About the host: Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ for any queries.

Stay tuned for upcoming webinars!

***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***


91 episodes


