Content provided by Michael Berk. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Michael Berk or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://it.player.fm/legal.

Beyond Intelligence: GPT-5, Explainability and the Ethics of AI Reasoning (E.24)

42:11

What happens when AI stops generating answers and starts deciding what’s true?

In this episode of Free Form AI, Michael Berk and Ben Wilson dive into GPT-5’s growing role as an interpreter of information — not just generating text, but analyzing news, assessing credibility, and shaping how we understand truth itself.

They unpack how reasoning capabilities, source reliability, and human feedback intersect to build, or break, trust in AI systems. The conversation also examines the ethical stakes of explainability, the dangers of "sycophantic" AI behavior, and the future of intelligence in a market-driven ecosystem.

Tune in to Episode 24 for a wide-ranging conversation about:
• How GPT-5’s reasoning is redefining “understanding” in AI
• Why explainability is critical for trust and transparency
• The risks of AI echo chambers and feedback bias
• The role of human judgment in AI alignment and evaluation
• What it means for machines to become arbiters of truth

Whether you build, study, or rely on AI systems, this episode will leave you questioning how far we’re willing to let our models think for us.


26 episodes


