tweak: add general changes.

This commit is contained in:
Nuno Sempere 2023-02-11 09:52:45 +00:00
parent c3a6f21d4d
commit 87232aef3f
4 changed files with 84 additions and 1 deletions


@@ -1,4 +1,4 @@
<form method="post" action="https://listmonk.nunosempere.com/subscription/form" class="listmonk-form">
<form method="post" action="https://list.nunosempere.com/subscription/form" class="listmonk-form">
<div>
<h3>Subscribe</h3>
<input type="hidden" name="nonce" />


@@ -0,0 +1,79 @@
Update to Samotsvety AGI timelines
==============
Previously: [Samotsvety's AI risk forecasts](https://samotsvety.org/blog/2022/09/09/samotsvety-s-ai-risk-forecasts/).
Our colleagues at [Epoch](https://epochai.org/) recently asked us to update our AI timelines estimate for their upcoming literature review on TAI timelines. We met on 2023-01-21 to discuss our predictions about when advanced AI systems will arrive.
# Forecasts:
## Definition of AGI
We used the following definition to determine the “moment at which AGI is considered to have arrived,” building on [this Metaculus question](https://www.metaculus.com/questions/11861/when-will-ai-pass-a-difficult-turing-test/):
> The moment that a system capable of passing the adversarial Turing test against a top-5%[^1] human who has access to experts on various topics is developed.
More concretely:
> A Turing test is said to be “adversarial” if the human judges make a good-faith attempt to unmask the AI as an impostor, and the human confederates make a good-faith attempt to demonstrate that they are humans.
> An AI is said to “pass” a Turing test if at least half of the judges rate the AI as more human than at least a third of the human confederates.
This definition of AGI is not unproblematic, e.g., [it's possible that AGI could be unmasked long after its economic value and capabilities are very high](https://nunosempere.com/blog/2023/01/21/there-will-always-be-a-voigt-kampff-test/). We chose to use an imperfect definition and indicated to forecasters that they should interpret the definition not “as is” but “in spirit,” to avoid annoying edge cases.
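As a purely illustrative sketch of the pass criterion above (the function name and data layout are assumptions, not part of the Metaculus resolution criteria), the check could look roughly like this:

```python
def passes_adversarial_turing_test(ai_ratings, confederate_ratings):
    """Illustrative sketch of the pass criterion (assumed data layout).

    ai_ratings[j]: judge j's human-likeness rating of the AI.
    confederate_ratings[j][c]: judge j's rating of human confederate c.
    Passes if at least half of the judges rate the AI as more human than
    at least a third of the human confederates.
    """
    judges_convinced = 0
    for j, ai_score in enumerate(ai_ratings):
        humans = confederate_ratings[j]
        # Confederates that this judge rated as less human than the AI.
        outscored = sum(1 for score in humans if ai_score > score)
        if outscored >= len(humans) / 3:
            judges_convinced += 1
    return judges_convinced >= len(ai_ratings) / 2
```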
## Individual forecasts
| | P(AGI by 2030) | P(AGI by 2050) | P(AGI by 2100) | P(AGI by this year) = 10% | P(AGI by this year) = 50% | P(AGI by this year) = 90% |
|----|----------------|----------------|----------------|---------------------------|---------------------------|---------------------------|
| F1 | 0.39 | 0.75 | 0.78 | 2028 | 2034 | N/A [^2] |
| F3 | 0.28 | 0.7 | 0.87 | 2027 | 2039 | 2120 |
| F4 | 0.26 | 0.58 | 0.93 | 2025 | 2039 | 2088 |
| F5 | 0.35 | 0.73 | 0.91 | 2025 | 2037 | 2075 |
| F6 | 0.4 | 0.65 | 0.8 | 2025 | 2035 | N/A[^3] |
| F7 | 0.33 | 0.65 | 0.8 | 2026 | 2037 | 2250 |
| F8 | 0.2 | 0.5 | 0.7 | 2026 | 2050 | 2200 |
| F9 | 0.23 | 0.44 | 0.67 | 2026 | 2060 | 2250 |
## Aggregate
| | P(AGI by 2030) [^4] | P(AGI by 2050) | P(AGI by 2100) | P(AGI by this year) = 10% | P(AGI by this year) = 50% | P(AGI by this year) = 90% |
|--------------|-------------------|----------------|----------------|---------------------------|---------------------------|---------------------------|
| mean: | 0.31 | 0.63 | 0.81 | 2026 | 2041 | 2164 |
| stdev: | 0.07 | 0.11 | 0.09 | 1.07 | 8.99 | 79.65 |
| | | | | | | |
| 50% CI: | [0.26, 0.35] | [0.55, 0.70] | [0.74, 0.87] | [2025.3, 2026.7] | [2035, 2047] | [2110, 2218] |
| 80% CI: | [0.21, 0.40] | [0.48, 0.77] | [0.69, 0.93] | [2024.6, 2027.4] | [2030, 2053] | [2062, 2266] |
| 95% CI[^5]: | [0.16, 0.45] | [0.41, 0.84] | [0.62, 0.99] | [2023.9, 2028.1] | [2024, 2059] | [2008, 2320] |
| | | | | | | |
| geomean: | 0.30 | 0.62 | 0.80 | 2026.00 | 2041 | 2163 |
| geo odds [^6]: | 0.30 | 0.63 | 0.82 | | | |
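As a minimal sketch, not the aggregation script we actually used, the aggregate rows can be reproduced from the individual forecasts; here for the P(AGI by 2030) column, with the 95% CI from footnote 5 (mean ± 1.96 stdev) and the pooling via geometric mean of odds from footnote 6:

```python
import statistics

# Individual P(AGI by 2030) forecasts from the table above.
p_2030 = [0.39, 0.28, 0.26, 0.35, 0.4, 0.33, 0.2, 0.23]

mean = statistics.mean(p_2030)                      # ≈ 0.31
stdev = statistics.stdev(p_2030)                    # sample stdev, ≈ 0.07
ci_95 = (mean - 1.96 * stdev, mean + 1.96 * stdev)  # footnote 5: mean ± 1.96 stdev

geomean = statistics.geometric_mean(p_2030)         # ≈ 0.30

# Footnote 6: pool the odds p/(1-p) with a geometric mean, then
# convert the pooled odds back into a probability.
odds = [p / (1 - p) for p in p_2030]
pooled_odds = statistics.geometric_mean(odds)
geo_odds_probability = pooled_odds / (1 + pooled_odds)  # ≈ 0.30
```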
# Epistemic status:
* For Samotsvety track-record see: [https://samotsvety.org/track-record/](https://samotsvety.org/track-record/)
* Note that this track record comes mostly from questions about geopolitics and technology that resolve within 12 months.
* Most forecasters have at least read Joe Carlsmith's report on AI x-risk, “[Is Power-Seeking AI an Existential Risk?](https://arxiv.org/abs/2206.13353)”. Those who were short on time may have just skimmed the report and/or watched the presentation. We discussed the report section by section over the course of a few weekly meetings.
* Note also that there might be selection effects at the level of which forecasters chose to participate in this exercise; for example, Samotsvety forecasters who view AI as an important/interesting/etc. topic could have self-selected into the discussion.
* (Though the set of forecasters who participated this time is very similar to the set who [participated last time](https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts).)
# Update from our previous estimate
The last time we [publicly elicited a similar probability](https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts#FTX_Foundation_questions) from our forecasters, we were at 32% that AGI would be developed in the next 20 years (so by late 2042), and at 73% that it would be developed by 2100. Those figures are a bit lower than our current forecasts. The changes since then can be attributed to the following:
* We have had more time to think about the topic and work through considerations and counter-considerations, e.g., the extent to which we should fear selection effects in the types of arguments to which we are exposed.
* Some of our forecasters still give substantial weight to more skeptical probabilities coming from semi-informative priors, from Laplace's law, etc. But overall we have also moved towards being guided by more direct considerations: thinking about how AI progress will look year by year, the probability of a slowdown, the probability of a China/Taiwan war, etc.
* We have also spent significant time interacting with state-of-the-art systems, and we have seen other people build new and exciting applications on top of these models. Just recently, Microsoft was [in talks](https://www.reuters.com/article/microsoft-openai-funding-idTRNIKBN2TP05G) with OpenAI to invest billions more. All of this gives us another round of information.
# Acknowledgments
Thanks to Aaron Ho, Nuño Sempere, Greg Justice, Pablo Stafforini, Vidur Kapur, Misha Yagudin, Jared Leibowich, Tolga Bilge, Jonathan Mann, and Eli Lifland for contributing to the discussion and/or submitting forecasts.
[^1]: A person who answers a range of general and specific knowledge questions better than 95% of the general population.
[^2]: P(AGI by ∞) is between 80% and 90% due to non-recoverable catastrophic risks, though this is very uncertain.
[^3]: P(AGI by ∞) is between 80% and 90% due to x-risks other than AI misalignment. The forecaster notes that x-risk goes up as more advanced AI systems become available and are used (e.g., in nuclear command & control).
[^4]: This depends significantly on whether China will blockade or invade Taiwan in the relevant timeframe. We think AGI by 2030 is roughly 20% more likely conditional on no Cross-Strait troubles, but this is highly uncertain. See also [http://metaforecast.org/?query=taiwan](http://metaforecast.org/?query=taiwan).
[^5]: Just mean ± 1.96 stdev.
[^6]: “[When pooling forecasts, use the geometric mean of odds](https://forum.effectivealtruism.org/posts/sMjcjnnpoAQCcedL2/when-pooling-forecasts-use-the-geometric-mean-of-odds).”

Binary file not shown.


@@ -13,6 +13,10 @@ https://samotsvety.org/blog/2022/09/09/samotsvety-s-ai-risk-forecasts/
https://samotsvety.org/blog/2022/10/
https://samotsvety.org/blog/2022/10/03/
https://samotsvety.org/blog/2022/10/03/samotsvety-nuclear-risk-update-october-2022/
https://samotsvety.org/blog/2023/
https://samotsvety.org/blog/2023/01/
https://samotsvety.org/blog/2023/01/24/
https://samotsvety.org/blog/2023/01/24/update-to-samotsvety-agi-timelines/
https://samotsvety.org/media-mentions/
https://samotsvety.org/projects/
https://samotsvety.org/track-record/