
Content provided by EA Forum Team. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by EA Forum Team or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://it.player.fm/legal.

“Where I Am Donating in 2024” by MichaelDickens

1:51:48

Summary

It's been a while since I last put serious thought into where to donate. Well, I'm putting thought into it this year, and I'm changing my mind on some things.

I now put more priority on existential risk (especially AI risk), and less on animal welfare and global priorities research. I believe I previously gave too little consideration to x-risk for emotional reasons, and I've managed to reason myself out of those emotions.

Within x-risk:

  • AI is the most important source of risk.
  • There is a disturbingly high probability that alignment research won't solve alignment by the time superintelligent AI arrives. Policy work seems more promising.
  • Specifically, I am most optimistic about policy advocacy for government regulation to pause/slow down AI development.

In the rest of this post, I will explain:

  1. Why I prioritize x-risk over animal-focused [...]

---

Outline:

(00:04) Summary

(01:30) I don't like donating to x-risk

(03:56) Cause prioritization

(04:00) S-risk research and animal-focused longtermism

(05:52) X-risk vs. global priorities research

(07:01) Prioritization within x-risk

(08:08) AI safety technical research vs. policy

(11:36) Quantitative model on research vs. policy

(14:20) Man versus man conflicts within AI policy

(15:13) Parallel safety/capabilities vs. slowing AI

(22:56) Freedom vs. regulation

(24:24) Slow nuanced regulation vs. fast coarse regulation

(27:02) Working with vs. against AI companies

(32:49) Political diplomacy vs. advocacy

(33:38) Conflicts that aren't man vs. man but nonetheless require an answer

(33:55) Pause vs. Responsible Scaling Policy (RSP)

(35:28) Policy research vs. policy advocacy

(36:42) Advocacy directed at policy-makers vs. the general public

(37:32) Organizations

(39:36) Important disclaimers

(40:56) AI Policy Institute

(42:03) AI Safety and Governance Fund

(43:29) AI Standards Lab

(43:59) Campaign for AI Safety

(44:30) Centre for Enabling EA Learning and Research (CEEALAR)

(45:13) Center for AI Policy

(47:27) Center for AI Safety

(49:06) Center for Human-Compatible AI

(49:32) Center for Long-Term Resilience

(55:52) Center for Security and Emerging Technology (CSET)

(57:33) Centre for Long-Term Policy

(58:12) Centre for the Governance of AI

(59:07) CivAI

(01:00:05) Control AI

(01:02:08) Existential Risk Observatory

(01:03:33) Future of Life Institute (FLI)

(01:03:50) Future Society

(01:06:27) Horizon Institute for Public Service

(01:09:36) Institute for AI Policy and Strategy

(01:11:00) Lightcone Infrastructure

(01:12:30) Machine Intelligence Research Institute (MIRI)

(01:15:22) Manifund

(01:16:28) Model Evaluation and Threat Research (METR)

(01:17:45) Palisade Research

(01:19:10) PauseAI Global

(01:21:59) PauseAI US

(01:23:09) Sentinel rapid emergency response team

(01:24:52) Simon Institute for Longterm Governance

(01:25:44) Stop AI

(01:27:42) Where I'm donating

(01:28:57) Prioritization within my top five

(01:32:17) Where I'm donating (this is the section in which I actually say where I'm donating)

The original text contained 58 footnotes which were omitted from this narration.

The original text contained 1 image which was described by AI.

---

First published:
November 19th, 2024

Source:
https://forum.effectivealtruism.org/posts/jAfhxWSzsw4pLypRt/where-i-am-donating-in-2024

---

Narrated by TYPE III AUDIO.

