Content provided by The Nonlinear Fund. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described here: https://it.player.fm/legal.
LW - An AI Race With China Can Be Better Than Not Racing by niplav

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An AI Race With China Can Be Better Than Not Racing, published by niplav on July 2, 2024 on LessWrong.

Frustrated by all your bad takes, I write a Monte-Carlo analysis of whether a transformative-AI race between the PRC and the USA would be good. To my surprise, I find that it is better than not racing. Advocating for an international project to build TAI instead of racing turns out to be good if the probability of such advocacy succeeding is 20%.

A common scheme for a conversation about pausing the development of transformative AI goes like this:

Abdullah: "I think we should pause the development of TAI, because if we don't, it seems plausible that humanity will be disempowered by advanced AI systems."

Benjamin: "Ah, if by 'we' you refer to the United States (and its allies, which probably don't stand a chance on their own to develop TAI), then the current geopolitical rival of the US, namely the PRC, will achieve TAI first. That would be bad."

Abdullah: "I don't see how the US getting TAI first changes anything about the fact that we don't know how to align superintelligent AI systems - I'd rather not race to be the first person to kill everyone."

Benjamin: "Ah, so now you're retreating back into your cozy little motte: earlier you said that 'it seems plausible that humanity will be disempowered', now you're acting like doom and gloom is certain. You don't seem to be able to make up your mind about how risky you think the whole enterprise is, and I have very concrete geopolitical enemies at my (semiconductor manufacturer's) doorstep that I have to worry about. Come back with better arguments."

This dynamic is a bit frustrating. Here's how I'd like Abdullah to respond:

Abdullah: "You're right, you're right. I was insufficiently precise in my statements, and I apologize for that.
Instead, let us manifest the dream of the great philosopher: Calculemus!

At a basic level, we want to estimate how much worse (or, perhaps, better) it would be for the United States to completely cede the race for TAI to the PRC. I will exclude other countries as contenders in the scramble for TAI, since I want to keep this analysis simple, but that doesn't mean that I don't think they matter. (Although, honestly, the list of serious contenders is pretty short.)

For this, we have to estimate multiple quantities:

1. In worlds in which the US and PRC race for TAI:
   1. The time until the US/PRC builds TAI.
   2. The probability of extinction due to TAI, if the US is in the lead.
   3. The probability of extinction due to TAI, if the PRC is in the lead.
   4. The value of the worlds in which the US builds aligned TAI first.
   5. The value of the worlds in which the PRC builds aligned TAI first.
2. In worlds where the US tries to convince other countries (including the PRC) not to build TAI, potentially using force, and still tries to prevent TAI-induced disempowerment by doing alignment research and sharing alignment-favoring research results:
   1. The time until the PRC builds TAI.
   2. The probability of extinction caused by TAI.
   3. The value of worlds in which the PRC builds aligned TAI.
3. The value of worlds where extinction occurs (which I'll fix at 0).
4. As a reference point, the value of hypothetical worlds in which there is a multinational exclusive AGI consortium that builds TAI first, without any time pressure, for which I'll fix the mean value at 1.

To properly quantify uncertainty, I'll use the Monte-Carlo estimation library squigglepy (no relation to any office supplies or internals of neural networks). We start, as usual, with housekeeping: as already said, we fix the value of extinction at 0, and the value of a multinational AGI consortium-led TAI at 1 (I'll just call the consortium "MAGIC" from here on).
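The setup above can be sketched in code. The post itself uses squigglepy, but since its actual model isn't reproduced in this transcript, here is a minimal stand-in using only Python's standard library; the specific distributions (a Beta for extinction probability, a Gaussian for world value, and the 0.9 discount factor) are my illustrative assumptions, not the post's numbers:

```python
import random
import statistics

random.seed(0)  # fix the seed so this sketch is reproducible

N = 100_000  # number of Monte-Carlo samples

# Fixed reference points from the text: extinction is worth 0, and a
# MAGIC-led (multinational consortium) TAI future has mean value 1.
VALUE_EXTINCTION = 0.0
VALUE_MAGIC_MEAN = 1.0

def sample_race_world_value():
    """One Monte-Carlo sample of the value of a US/PRC race world.

    The distributions here are illustrative placeholders, not the
    post's actual estimates.
    """
    p_doom = random.betavariate(2, 8)   # extinction probability, mean 0.2
    if random.random() < p_doom:
        return VALUE_EXTINCTION         # extinct world: value fixed at 0
    # Aligned-TAI race world: discounted relative to the MAGIC benchmark,
    # reflecting corners cut under time pressure.
    return random.gauss(0.9 * VALUE_MAGIC_MEAN, 0.1)

samples = [sample_race_world_value() for _ in range(N)]
mean_value = statistics.mean(samples)
print(f"Estimated mean value of race worlds: {mean_value:.3f}")
```

The same pattern (sample each uncertain quantity, combine them per scenario, compare the resulting means) extends to the full list of quantities enumerated above.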
That is not to say that the MAGIC-led TAI future is the best possible TAI future...