more imgur updates
commit e51c179167 (parent 1f80f35820)
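The commit itself is mechanical: each `i.imgur.com` link is replaced by a self-hosted copy under `images.nunosempere.com`. A minimal sketch of how such a migration could be scripted — the `OLD`/`NEW` pair is taken from the first hunk below, while the function name and the recursive glob over `*.md` files are my own assumptions about the repository layout:

```python
from pathlib import Path

# One old -> new URL pair, taken from the first hunk of this commit.
OLD = "https://i.imgur.com/ziJqSn9.png"
NEW = ("https://images.nunosempere.com/blog/2023/01/23/"
       "my-highly-personal-skepticism-braindump-on-existential-risk/"
       "6ba8263af80738f19258cfd36e07fe103ee9920d.png")

def migrate(root: Path) -> int:
    """Rewrite OLD to NEW in every markdown file under root; return files changed."""
    changed = 0
    for path in root.rglob("*.md"):
        text = path.read_text()
        if OLD in text:
            path.write_text(text.replace(OLD, NEW))
            changed += 1
    return changed
```

In practice each hunk in this commit corresponds to one such URL pair.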
@@ -130,11 +130,11 @@ One particular dynamic that I’ve seen some gung-ho AI risk people mention is t
So, in illustration, the overall balance could look something like:
-<img src="https://i.imgur.com/ziJqSn9.png" class='.img-medium-center'>
+<img src="https://images.nunosempere.com/blog/2023/01/23/my-highly-personal-skepticism-braindump-on-existential-risk/6ba8263af80738f19258cfd36e07fe103ee9920d.png" class='.img-medium-center'>
Whereas the individual matchup could look something like:
-<img src="https://i.imgur.com/thdBH3n.png" class='.img-medium-center'>
+<img src="https://images.nunosempere.com/blog/2023/01/23/my-highly-personal-skepticism-braindump-on-existential-risk/6504f6dc78d9518c7e266f44d3c5b10797313af3.png" class='.img-medium-center'>
And so you would expect the natural belief dynamics stemming from that type of matchup.
@@ -272,7 +272,7 @@ This is all.
# Acknowledgements
-<img src="https://i.imgur.com/hv8GEDS.jpg" class='img-frontpage-center'>
+<img src="https://images.nunosempere.com/samotsvety/samotsvety-quri-logo.jpg" class='img-frontpage-center'>
I am grateful to the [Samotsvety](https://samotsvety.org/) forecasters that have discussed this topic with me, and to Ozzie Gooen for comments and review. The above post doesn't necessarily represent the views of other people at the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/), which nonetheless supports my research.
@@ -15,7 +15,7 @@ The dataset I’m using can be seen [here](https://docs.google.com/spreadsheets
I estimate the probability that a randomly chosen conjecture will be solved as follows:
-<img src="https://i.imgur.com/M9jfZva.jpg" class='.img-medium-center'>
+<img src="https://images.nunosempere.com/blog/2023/01/30/an-in-progress-experiment-to-test-how-laplace-s-rule-of/f12eb79ade3f66662ed56d37788d427548155944.png" class='.img-medium-center'>
That is, the probability that the conjecture will first be solved in the year _n_ is the probability given by Laplace conditional on it not having been solved any year before.
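The conditional probability described here can be written out directly. A minimal sketch, assuming the standard Laplace's-rule setup with zero successes so far (a conjecture open for `years_open` years gets probability 1/(`years_open` + 2) of being solved in the coming year; the function names are mine, not the post's):

```python
def p_next_year(years_open: int) -> float:
    # Laplace's rule of succession with zero successes so far:
    # P(solved in the coming year) = (0 + 1) / (years_open + 2)
    return 1.0 / (years_open + 2)

def p_first_solved_in_year(years_open: int, n: int) -> float:
    # The conjecture survives years 1..n-1 unsolved, then is solved in year n.
    p_survive = 1.0
    for k in range(n - 1):
        p_survive *= 1.0 - p_next_year(years_open + k)
    return p_survive * p_next_year(years_open + n - 1)
```

For example, a conjecture open for 8 years gets `p_next_year(8) = 1/10` of being solved next year, and `(1 - 1/10) * 1/11` of being first solved the year after.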
@@ -29,19 +29,19 @@ Using the above probabilities, we can, through sampling, estimate the number of
### For three years
-**<img src="https://i.imgur.com/0kK1I9Y.png" class='.img-medium-center'>**
+**<img src="https://images.nunosempere.com/blog/2023/01/30/an-in-progress-experiment-to-test-how-laplace-s-rule-of/4dd8ee1e01b25ea2c31b46a0be1253eaa5915991.png" class='.img-medium-center'>**
If we calculate the 90% and the 98% confidence intervals, these are respectively (6 to 16) and (4 to 18) problems solved in the next three years.
### For five years
-**<img src="https://i.imgur.com/K4ES5A9.png" class='.img-medium-center'>**
+**<img src="https://images.nunosempere.com/blog/2023/01/30/an-in-progress-experiment-to-test-how-laplace-s-rule-of/536ebf8c1400e1d3035042a8409b15f34b13095b.png" class='.img-medium-center'>**
If we calculate the 90% and the 98% confidence intervals, these are respectively (11 to 24) and (9 to 27) problems solved in the next five years.
### For ten years
-**<img src="https://i.imgur.com/ZiFtIP5.png" class='.img-medium-center'>**
+**<img src="https://images.nunosempere.com/blog/2023/01/30/an-in-progress-experiment-to-test-how-laplace-s-rule-of/0684cd87d78a4d4e1344051966ab41a9e47a683b.png" class='.img-medium-center'>**
If we calculate the 90% and the 98% confidence intervals, these are respectively (23 to 40) and (20 to 43) problems solved in the next ten years.
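The sampling step described at the top of this hunk can be sketched as a small Monte Carlo simulation. Everything below is illustrative: the list of conjecture ages is made up (the real ones come from the linked dataset), and the interval helper is a plain empirical quantile:

```python
import random

def p_next_year(years_open: int) -> float:
    # Laplace's rule of succession with zero successes so far.
    return 1.0 / (years_open + 2)

def sample_n_solved(ages: list, horizon: int, rng: random.Random) -> int:
    # One draw: for each conjecture, step through the horizon year by
    # year and check whether it gets solved.
    solved = 0
    for age in ages:
        for year in range(horizon):
            if rng.random() < p_next_year(age + year):
                solved += 1
                break
    return solved

def confidence_interval(samples: list, mass: float) -> tuple:
    # Central empirical interval containing `mass` of the sampled distribution.
    s = sorted(samples)
    lo = s[int(len(s) * (1 - mass) / 2)]
    hi = s[int(len(s) * (1 + mass) / 2) - 1]
    return lo, hi

rng = random.Random(0)
ages = [50] * 100  # hypothetical: 100 conjectures, each open for 50 years
samples = [sample_n_solved(ages, 3, rng) for _ in range(10_000)]
print("90% CI:", confidence_interval(samples, 0.90))
print("98% CI:", confidence_interval(samples, 0.98))
```

The intervals widen with the horizon for the same reason the post's do: each extra year gives every unsolved conjecture another Laplace-sized chance of resolution.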
@@ -75,7 +75,7 @@ The reason why I didn’t do this myself is that step 2. would be fairly time in
## Acknowledgements
-**<img src="https://i.imgur.com/WUqgilk.png" class='img-frontpage-center'>**
+**<img src="https://images.nunosempere.com/quri/logo.png" class='img-frontpage-center'>**
This is a project of the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/) (QURI). Thanks to Ozzie Gooen and Nics Olayres for giving comments and suggestions.
@@ -5,7 +5,7 @@ In early 2022, the Effective Altruism movement was triumphant. Sam Bankman-Fried
Now the situation looks different. Samo Burja has this interesting book on [Great Founder Theory][0], from which I’ve gotten the notion of an “expanding empire”. In an expanding empire, like a startup, there are new opportunities and land to conquer, and members can be rewarded with parts of the newly conquered land. The optimal strategy here is _unity_. EA in 2022 was just that, a united social movement playing together against the cruelty of nature and history.
-![](https://i.imgur.com/3SfMuU1.jpg)
+![](https://images.nunosempere.com/blog/2023/01/30/ea-no-longer-expanding-empire/ship.jpg)
*<br>Imagine the Spanish empire, without the empire.*
My sense is that the tendency for EA in 2023 and going forward will be less like that. Funding is now more limited, not only because the FTX empire collapsed, but also because the stock market collapsed, which means that Open Philanthropy—now the main funder once again—also has less money. With funding drying up, EA will now have to economize and prioritize between different causes. And with economizing in the background, internecine fights become more worth it, because the EA movement isn’t trying to grow the pie together, but rather each part will be trying to defend its share of the pie. Fewer shared offices will exist all over the place, fewer regrantors to fund moonshots. More frugality. So EA will become more like a bureaucracy and less like a startup. You get the idea.