
Content provided by information labs. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by information labs or the podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the procedure described here: https://it.player.fm/legal.

AI lab TL;DR | Joan Barata - Transparency Obligations for All AI Systems

17:05
 

Manage episode 523574650 series 3480798

🔍 In this TL;DR episode, Joan explains how Article 50 of the EU AI Act sets out high-level transparency obligations for AI developers and deployers—requiring users to be informed when they interact with AI or access AI-generated content—while noting that excessive labeling can itself be misleading. He highlights why the forthcoming Code of Practice must focus on clear principles rather than fixed technical solutions, ensuring transparency helps prevent deception without creating confusion in a rapidly evolving technological environment.

📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:33] Q1-What’s the core purpose of Article 50, and why is this 10-month drafting window so critical for the industry?

⏲️[02:31] Q2-What’s the difference between disclosing a chatbot and technically marking AI-generated media?

⏲️[06:27] Q3-What is the inherent danger of "too much transparency" or over-labeling content? How do we prevent the "liar's dividend" and "label fatigue" while still fighting deception?

⏲️[10:00] Q4-If drafters should avoid one rigid technical fix, what’s your top advice for building flexibility into the Code of Practice?

⏲️[13:11] Q5-What is the one core idea you want policymakers to take away from your research?

⏲️[16:45] Wrap-up & Outro

💭 Q1 - What’s the core purpose of Article 50, and why is this 10-month drafting window so critical for the industry?

🗣️ “Article 50 sets only broad transparency rules—so a strong Code of Practice is essential.”

💭 Q2 - What’s the difference between disclosing a chatbot and technically marking AI-generated media?

🗣️ “If there’s a risk of confusion, users must be clearly told they’re interacting with AI.”

💭 Q3 - What is the inherent danger of "too much transparency" or over-labeling content? How do we prevent the "liar's dividend" and "label fatigue" while still fighting deception?

🗣️ “Too much transparency can mislead just as much as too little.”

💭 Q4 - If drafters should avoid one rigid technical fix, what’s your top advice for building flexibility into the Code of Practice?

🗣️ “We should focus on principles, not chase technical solutions that will be outdated in months.”

💭 Q5 - What is the one core idea you want policymakers to take away from your research?

🗣️ “Transparency raises legal, technical, psychological, and even philosophical questions—information alone doesn’t guarantee real agency.”

📌 About Our Guest

🎙️ Joan Barata | Faculdade de Direito - Católica no Porto

🌐 linkedin.com/in/joan-barata-a649876

Joan Barata works on freedom of expression, media regulation, and intermediary liability issues. He is a Visiting Professor at Faculdade de Direito - Católica no Porto and a Senior Legal Fellow at The Future of Free Speech project at Vanderbilt University. He is also a Fellow of the Program on Platform Regulation at the Stanford Cyber Policy Center.

#AI #artificialintelligence #generativeAI


37 episodes
