LW - Executable philosophy as a failed totalizing meta-worldview by jessicata
(this is an expanded, edited version of an x.com post)
It is easy to interpret Eliezer Yudkowsky's main goal as creating a friendly AGI. Clearly, he has failed at this goal and has little hope of achieving it. That's not a particularly interesting analysis, however. A priori, creating a machine that makes things ok forever is not a particularly plausible objective. Failure to do so is not particularly informative.
So I'll focus on a different but related project of his: executable philosophy. Quoting Arbital:
Two motivations of "executable philosophy" are as follows:
1. We need a philosophical analysis to be "effective" in Turing's sense: that is, the terms of the analysis must be useful in writing programs. We need ideas that we can compile and run; they must be "executable" like code is executable.
2. We need to produce adequate answers on a time scale of years or decades, not centuries. In the entrepreneurial sense of "good execution", we need a methodology we can execute on in a reasonable timeframe.
There is such a thing as common sense rationality, which says the world is round, you shouldn't play the lottery, etc. Formal notions like Bayesianism, VNM utility theory, and Solomonoff induction formalize something strongly related to this common sense rationality. Yudkowsky believes further study in this tradition can supersede ordinary academic philosophy, which he regards as conceptually weak and motivated to prolong its disputes for the sake of more publications.
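To give a concrete flavor of the formalisms being gestured at, here is a minimal sketch of two of them; the notation is generic textbook notation, not anything specific to the Sequences:

```latex
% Bayes' rule: update the credence in hypothesis H on evidence E.
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}

% VNM expected utility: rank a lottery L = \{(p_i, o_i)\} of outcomes o_i
% with probabilities p_i by the expectation of a utility function u.
U(L) = \sum_i p_i \, u(o_i)
```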
In the Sequences, Yudkowsky presents these formal ideas as the basis for a totalizing meta-worldview, of epistemic and instrumental rationality, and uses the meta-worldview to argue for his object-level worldview (which includes many-worlds, AGI foom, importance of AI alignment, etc.).
While one can get totalizing (meta-)worldviews from elsewhere (such as interdisciplinary academic studies), Yudkowsky's (meta-)worldview is relatively easy to pick up for analytically strong people (who tend towards STEM), and is effective ("correct" and "winning") relative to its simplicity.
Yudkowsky's source material and his own writing do not form a closed meta-worldview, however. There remain open problems about how to formalize and solve real-world problems. Many of the more technical ones are described in MIRI's technical agent foundations agenda.
These include questions about how to parse a physically realistic problem as a set of VNM lotteries ("decision theory"), how to use something like Bayesianism to handle uncertainty about mathematics ("logical uncertainty"), how to formalize realistic human values ("value loading"), and so on.
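As a minimal sketch of what "parsing a problem as a set of VNM lotteries" means in the easy, already-formalized case (the outcomes, probabilities, and utility function below are made up for illustration; the hard open problem is getting from a messy physical situation to a representation like this at all):

```python
# Minimal sketch: a decision already parsed into VNM lotteries.
# Probabilities, payoffs, and the utility function are illustrative assumptions,
# not anything from MIRI's agenda; the open problem is producing such a parse
# from a physically realistic situation in the first place.
from dataclasses import dataclass

@dataclass
class Lottery:
    name: str
    outcomes: list[tuple[float, float]]  # (probability, payoff in dollars)

def utility(dollars: float) -> float:
    # Toy risk-averse utility function (an assumption for the example).
    return (dollars + 1_000.0) ** 0.5

def expected_utility(lottery: Lottery) -> float:
    return sum(p * utility(x) for p, x in lottery.outcomes)

ticket = Lottery("buy a $2 lottery ticket", [(1e-7, 10_000_000.0), (1 - 1e-7, -2.0)])
keep = Lottery("keep the $2", [(1.0, 0.0)])

best = max([ticket, keep], key=expected_utility)
print(best.name)  # with this utility function, "keep the $2" wins
```

Once a problem has been forced into this shape, the common-sense verdict ("you shouldn't play the lottery") falls out of the formalism; the agent foundations questions are about what justifies and generalizes that forcing.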
Whether or not the closure of this meta-worldview leads to the creation of friendly AGI, it would certainly have practical value. It would allow real-world decisions to be made by first formalizing them within a computational framework (related to Yudkowsky's notion of "executable philosophy"), whether or not the computation itself is tractable (with its tractable version being friendly AGI).
The practical strategy of MIRI as a technical research institute is to go meta on these open problems by recruiting analytically strong STEM people (especially mathematicians and computer scientists) to work on them, as part of the agent foundations agenda. I was one of these people. While we made some progress on these problems (such as with the Logical Induction paper), we didn't come close to completing the meta-worldview, let alone building friendly AGI.
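For a sense of what that partial progress looked like: the logical induction criterion is, roughly, a no-exploitation condition on a sequence of belief states over logical sentences (this is a loose paraphrase, not the paper's exact formulation):

```latex
% Loose paraphrase: a market \overline{\mathbb{P}} = \mathbb{P}_1, \mathbb{P}_2, \ldots
% over logical sentences is a logical inductor if no efficiently computable (e.c.)
% trader T exploits it, i.e. no T trading at the prices \mathbb{P}_n has holdings
% whose plausible value V_n(T) is bounded below but unbounded above over time.
\neg\,\exists\, T \ \text{e.c.}:\quad \inf_n V_n(T) > -\infty \;\wedge\; \sup_n V_n(T) = +\infty
```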
With the Agent Foundations team at MIRI eliminated, MIRI's agent foundations agenda is now unambiguously a failed project. I had called MIRI's technical research likely to fail around 2017, with the increase in internal secrecy, but at thi...