Content provided by Jay Shah. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Jay Shah or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://it.player.fm/legal.

Why are Transformers so effective in Large Language Models like ChatGPT

Duration: 9:43
 
Episode 359298739, series 2859018

Understanding why and how transformers are so effective in today's large language models, such as #chatgpt and more.
Watch the full podcast with Dr. Surbhi Goel here: https://youtu.be/stB0cY_fffo
Find Dr. Goel on social media
Website: https://www.surbhigoel.com/
LinkedIn: https://www.linkedin.com/in/surbhi-goel-5455b25a
Twitter: https://twitter.com/surbhigoel_?lang=en
Learning Theory Alliance: https://let-all.com/index.html
About the Host:
Jay is a Ph.D. student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***

Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/


92 episodes
