Content provided by DataRobot. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by DataRobot or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://it.player.fm/legal.
Bringing History and Foresight to Ethical AI - Meg Mitchell

1:03:18
 

In this episode of More Intelligent Tomorrow, Meg talks to Michael Gilday about the opportunities in machine learning to create a more diverse, equitable, and inclusive future.
Dr. Margaret Mitchell (Meg) is a researcher in ethics-informed AI with a focus on natural language generation, computer vision, and other augmentative and assistive technologies.

Foresight is an indispensable tool for shaping and evaluating AI project outcomes. Instead of focusing on creating technology to improve something that already exists, a longer-term focus, one that looks two, five, or ten years into the future, can help us understand what we should be working on today. It’s a fairly straightforward way of thinking, yet foresight is often brushed aside as incalculable.

Foresight can also present a liability issue. If you’re working on a technology that will be less discriminatory, for example, that implies your technology right now is discriminatory. Fear of impending regulation, and of misinterpretations that could hamper development, can cause a troubling lack of imagination within development teams.

Bringing in people who have a creative mindset or a different perspective can help technical teams see things in a more imaginative way. Science fiction writers, for example, are adept at foresight; bringing them into a project creates an opportunity to think through how things might evolve over time. That, in turn, could help us be smarter about the kinds of development we do.

Similarly, historians can shed light on patterns of development over time. Instead of focusing on how rapidly technology is changing, they can offer a reflection on corresponding power dynamics and sociological changes that can also inform how we develop a technology.

A collaboration of humanities-oriented thinkers and science-oriented thinkers can help us think through the storyline of what a technology should be. There’s a need to focus not only on how well the model or system works in isolation but also on how well it works in context.

“Understanding how people use a technology, and therefore understanding people, is not something computer scientists are always good at. It requires different skill sets, which makes collaboration with subject matter experts critical.”

To really understand what it means to have AI in our social contexts, we need social scientists, anthropologists, and historians. So, how do we bring a diversity of voices and experiences into these technological challenges and conversations?

“Now is a great time to focus our attention on the science of diversity and inclusion. We’re operating on a global scale we haven’t been able to see before. We have infinitely better access to different cultures and perspectives on differences and similarities than we’ve ever had before.”

Listen to this episode of More Intelligent Tomorrow to learn about:

  • The culture of ethical behavior and the combined bottom-up and top-down approach with regulators and corporations
  • How no-code solutions are removing barriers in AI and machine learning work
  • Malicious actors vs. irresponsible ones and why ignorance is the biggest problem we face
  • Gender bias and the progress in bringing more women into tech and STEM
  • How transparency can be prioritized over the obfuscation that is prevalent right now
