Using LLMs to Evaluate Code

1:02:10
 
Content provided by Carnegie Mellon University Software Engineering Institute and SEI Members of Technical Staff. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Carnegie Mellon University Software Engineering Institute and SEI Members of Technical Staff or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://it.player.fm/legal.

Finding and fixing weaknesses and vulnerabilities in source code has been an ongoing challenge. There is a lot of excitement about the ability of large language models (LLMs), a form of generative AI, to produce and evaluate programs. One question related to this ability is: do these systems help in practice? We ran experiments with various LLMs to see whether they could correctly identify problems in source code or determine that there were no problems. This webcast provides background on our methods and a summary of our results.

What Will Attendees Learn?

• How well LLMs can evaluate source code

• Evolution of capability as new LLMs are released

• How to address potential gaps in capability
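
The page gives no details of the experimental setup, but as a rough illustration of the kind of experiment described above, the sketch below asks an LLM whether a small C snippet contains a weakness and compares its verdict against a known label. The model name, prompt wording, and the evaluate_snippet helper are illustrative assumptions using the OpenAI Python client, not the SEI's actual harness.

# Hypothetical sketch of an LLM-as-code-reviewer experiment: ask a model whether a
# snippet contains a weakness, then compare its verdict with a known ground-truth label.
# The model name, prompt, and scoring below are assumptions, not the SEI's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are reviewing C code for weaknesses (e.g., CWE entries). "
    "Reply with exactly one line: 'FLAW: <short description>' if you find a problem, "
    "or 'NO FLAW' if the code looks acceptable.\n\nCode:\n{code}"
)

def evaluate_snippet(code: str, model: str = "gpt-4o-mini") -> str:
    """Return the model's verdict ('FLAW: ...' or 'NO FLAW') for one code snippet."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(code=code)}],
        temperature=0,  # keep answers as repeatable as possible across runs
    )
    return response.choices[0].message.content.strip()

# Tiny labeled test set: one snippet with a classic buffer overflow, one clean snippet.
snippets = [
    ('char buf[4]; strcpy(buf, "overflow");', True),     # flawed: out-of-bounds write
    ("int add(int a, int b) { return a + b; }", False),  # no known flaw
]

correct = 0
for code, has_flaw in snippets:
    verdict = evaluate_snippet(code)
    predicted_flaw = verdict.upper().startswith("FLAW")
    correct += int(predicted_flaw == has_flaw)
    print(f"{verdict!r}  (expected flaw: {has_flaw})")

print(f"accuracy: {correct}/{len(snippets)}")

Scaling such a harness up to a benchmark of labeled snippets, and re-running it as new models are released, is one way to track the kind of capability evolution the learning objectives above refer to.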
