savepoint
parent aa23baa293
commit 8d9b82f1a3
|
@ -8,7 +8,7 @@
|
|||
<link rel="shortcut icon" href="/favicon.ico" type="image/vnd.microsoft.icon">
|
||||
% if(test -f $sitedir/_werc/pub/style.css)
|
||||
% echo ' <link rel="stylesheet" href="/_werc/pub/style.css" type="text/css" media="screen" title="default">'
|
||||
|
||||
<link rel="alternate" type="application/rss+xml" title="RSS for Measure is Unceasing" href="/blog/index.rss" />
|
||||
<meta charset="UTF-8">
|
||||
% # Legacy charset declaration for backwards compatibility with non-html5 browsers.
|
||||
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
|
||||
|
|
|
@ -87,7 +87,9 @@ I'm not sure that gratuitous incompatibility is so bad if it leads to utilities
|
|||
- [ ] Or, generally find a minimalistic kernel that could use some simple coreutils.
|
||||
- [ ] Add man pages?
|
||||
- [ ] Pitch to lwn.net as an article?
|
||||
- [ ] Come back to writing these in zig.
|
||||
- [ ] Come back to writing these in zig
|
||||
- [ ] ...
|
||||
|
||||
|
||||
## Done or discarded
|
||||
|
||||
|
@ -113,3 +115,10 @@ I'm not sure that gratuitous incompatibility is so bad if it leads to utilities
|
|||
- [ ] ~~Could use zig? => Not for now~~
|
||||
- [ ] ~~Maybe make some pull requests, if I'm doing something better? => doesn't seem like it~~
|
||||
- [ ] ~~Write man files?~~
|
||||
|
||||
<p>
|
||||
<section id='isso-thread'>
|
||||
<noscript>javascript needs to be activated to view comments.</noscript>
|
||||
</section>
|
||||
</p>
|
||||
|
||||
|
|
|
@ -5,9 +5,7 @@ Brief thoughts on CEA's stewardship of the EA Forum
|
|||
|
||||
<p><em>tl;dr</em>: Once, the EA forum was a lean, mean machine. But it has become more bloated over time, and I don’t like it. Separately, I don’t think it’s worth the roughly $2M/year<sup id="fnref:1"><a href="#fn:1" rel="footnote">1</a></sup> it costs, although I haven’t modelled this in depth.</p>
|
||||
|
||||
<h3>The EA forum frontpage through time.</h3>
|
||||
|
||||
<p>In <a href="https://web.archive.org/web/20181115134712/https://forum.effectivealtruism.org/">2018-2019</a>, the EA forum was a lean and mean machine:</p>
|
||||
<h3>The EA forum frontpage through time.</h3> <p>In <a href="https://web.archive.org/web/20181115134712/https://forum.effectivealtruism.org/">2018-2019</a>, the EA forum was a lean and mean machine:</p>
|
||||
|
||||
<p><img src="https://images.nunosempere.com/blog/2023/10/02/ea-forum-2018-2019.png" alt="" /></p>
|
||||
|
||||
|
@ -76,7 +74,7 @@ Brief thoughts on CEA's stewardship of the EA Forum
|
|||
<p>If you are a CEA director or middle manager, you might have thought about this more than I have. Still, you might want to:</p>
|
||||
|
||||
<ul>
|
||||
<li>Consider going back to ~1 developer and ~1 content person; save &gt$1M/year of your and your donors' money. My sense is that you are probably going to have to do this anyways, since you will probably not get enough money from donors<sup id="fnref:3"><a href="#fn:3" rel="footnote">3</a></sup>, to continue your current course.<sup id="fnref:4"><a href="#fn:4" rel="footnote">4</a></sup></li>
|
||||
<li>Consider going back to ~1 developer and ~1 content person; save >$1M/year of your and your donors' money. My sense is that you are probably going to have to do this anyways, since you will probably not get enough money from donors<sup id="fnref:3"><a href="#fn:3" rel="footnote">3</a></sup>, to continue your current course.<sup id="fnref:4"><a href="#fn:4" rel="footnote">4</a></sup></li>
|
||||
<li>Consider characterizing the EA forum team’s role as one of lightly shepherding discussion, not leading it or defining it.</li>
|
||||
<li>Consider reflecting on which incentives led to the creation of a larger EA Forum team. For example, Google has well-known incentives around managers being rewarded for leading larger teams to develop new products, and doesn’t value maintenance, leading to a continuous churn and sunsetting of Google products. Might something similar, though at a lower scale, have happened here?</li>
|
||||
<li>As a distant fourth point, consider opening up authentication mechanisms so that users can make comments and posts using open-source frontends. This was previously doable through the greaterwrong frontend, but is no longer possible. This might not be feasible with your current software stack, or might be too difficult, though.</li>
|
||||
|
|
183
blog/2023/11/07/hurdles-forecasting-ai/.src/index.md
Normal file
|
@ -0,0 +1,183 @@
|
|||
### Introduction
|
||||
|
||||
In recent years there have been various attempts at using forecasting to discern the shape of the future development of artificial intelligence, like the [AI progress Metaculus tournament](https://www.metaculus.com/tournament/ai-progress/), the Forecasting Research Institute's [existential risk forecasting tournament/experiment](https://forum.effectivealtruism.org/posts/un42vaZgyX7ch2kaj/announcing-forecasting-existential-risks-evidence-from-a), [Samotsvety forecasts](https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts) on the topic of AI progress and dangers, or various questions on [INFER](https://www.infer-pub.com) on short-term technological progress.
|
||||
|
||||
Here is a list of reasons, written with early input from Misha Yagudin, on why using forecasting to make sense of AI developments can be tricky, as well as some casual suggestions of ways forward.
|
||||
|
||||
### Excellent forecasters and Superforecasters™ have an imperfect fit for long-term questions
|
||||
|
||||
Here are some reasons why we might expect longer-term predictions to be more difficult:
|
||||
|
||||
1. No fast feedback loops for long-term questions. You can't get that many predict/check/improve cycles, because questions many years into the future, tautologically, take many years to resolve. There are shortcuts, like this [past-casting](https://www.quantifiedintuitions.org/pastcasting) app, but they are imperfect.
|
||||
2. It's possible that short-term forecasters might acquire habits and intuitions that are good for forecasting short-term events, but bad for forecasting longer-term outcomes. For example, "things will change more slowly than you think" is a good heuristic to acquire for short-term predictions, but might be a bad heuristic for longer-term predictions, in the same sense that "people overestimate what they can do in a week, but underestimate what they can do in ten years". This might be particularly insidious to the extent that forecasters acquire intuitions which they can see are useful, but can't tell where they come from. In general, it seems unclear to what extent short-term forecasting skills would generalize to skill at longer-term predictions.
|
||||
3. "Predict no change" in particular might do well, until it doesn't. Consider a world which has a 2% probability of seeing a worldwide pandemic, or some other large catastrophe. Then on average it will take 50 years for one to occur. But at that point, those predicting a 2% will have a poorer track record compared to those who are predicting a ~0%.
|
||||
4. In general, we have been in a period of comparative technological stagnation, and forecasters might be adapted to that, in the same way that e.g., startups adapted to low interest rates.
|
||||
5. Sub-sampling artifacts within good short-term forecasters are tricky. For example, my forecasting group Samotsvety is relatively bullish on transformative technological change from AI, whereas the Forecasting Research Institute's pick of forecasters for their existential risk survey was more bearish.
|
||||
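To make point 3 above concrete, here is a short simulation in Python, with made-up parameters: it compares the Brier scores of a forecaster who predicts the true 2% annual probability of catastrophe against one who always predicts ~0%, up to and including the year the catastrophe finally happens.

```python
import random

def brier(p, outcome):
    """Squared error between a probability and the 0/1 outcome; lower is better."""
    return (p - outcome) ** 2

def simulate(annual_risk=0.02, honest_p=0.02, lazy_p=0.001, seed=0):
    rng = random.Random(seed)
    honest_total, lazy_total, year = 0.0, 0.0, 0
    while True:
        year += 1
        event = rng.random() < annual_risk  # does the catastrophe happen this year?
        honest_total += brier(honest_p, event)
        lazy_total += brier(lazy_p, event)
        if event:
            break
    return year, honest_total / year, lazy_total / year

year, honest_avg, lazy_avg = simulate()
print(f"event in year {year}: honest (2%) Brier {honest_avg:.4f}, "
      f"'no change' (~0%) Brier {lazy_avg:.4f}")
# The ~0% forecaster looks better on every question that resolves before the
# event; one bad year at the end only roughly evens out decades of "winning".
```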
|
||||
### Forecasting loses value when decontextualized, and current forecasting seems pretty decontextualized
|
||||
|
||||
Forecasting seems more valuable when it is commissioned to inform a specific decision. For instance, suppose that you were thinking of starting a new startup. Then it would be interesting to look at:
|
||||
|
||||
- The base rate of success for startups
|
||||
- The base rate of success for all new businesses
|
||||
- The base rate of success for startups that your friends and wider social circle have started
|
||||
- Your personal rate of success at things in life
|
||||
- The inside view: decomposing the space between now and potential success into steps and giving explicit probabilities to each step
|
||||
- etc.
|
||||
|
||||
With this in mind, you could estimate the distribution of monetary returns to starting a startup, vs e.g., remaining an employee somewhere, and make the decision about what to do next with that estimate as an important factor.
|
||||
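As a minimal sketch of that kind of estimate, in Python and with entirely made-up base rates and payoffs (not a recommendation), one could combine an outside-view probability of success with a rough payoff distribution and compare against a salary:

```python
import random

rng = random.Random(42)

# Made-up inputs for illustration only.
P_SUCCESS = 0.1            # blended base rate from the reference classes above
SALARY_PER_YEAR = 150_000  # counterfactual: staying an employee
YEARS = 5

def startup_outcome():
    """One simulated 5-year monetary outcome of founding a startup."""
    if rng.random() < P_SUCCESS:
        # Successful exits vary over orders of magnitude; rough log-uniform draw.
        return 10 ** rng.uniform(5.5, 7.5)   # ~$300k to ~$30M
    return rng.uniform(0, 100_000)           # modest or zero payoff otherwise

samples = [startup_outcome() for _ in range(100_000)]
mean_startup = sum(samples) / len(samples)
median_startup = sorted(samples)[len(samples) // 2]
baseline = SALARY_PER_YEAR * YEARS

print(f"startup mean ≈ ${mean_startup:,.0f}, median ≈ ${median_startup:,.0f}, "
      f"salary baseline ≈ ${baseline:,.0f}")
# The mean is dragged up by rare large exits while the median stays low, which
# is exactly the kind of consideration the base rates above are meant to feed.
```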
|
||||
But our impression is that AI forecasting hasn't been tied to specific decisions like that. Instead, it has tended to ask questions that might contribute to a "holistic understanding" of the field. For example, look at [Metaculus' AI progress tournament](https://www.metaculus.com/tournament/ai-progress/). The first few questions are:
|
||||
|
||||
- [How many Natural Language Processing e-prints will be published on arXiv over the 2021-01-14 to 2030-01-14 period?](https://www.metaculus.com/questions/6299/nlo-e-prints-2021-01-14-to-2030-01-14/)
|
||||
- [What percent will software and information services contribute to US GDP in Q4 of 2030?](https://www.metaculus.com/questions/5958/it-as--of-gdp-in-q4-2030/)
|
||||
- [What will be the average top price performance (in G3D Mark /$) of the best available GPU on the following dates?](https://www.metaculus.com/questions/11241/top-price-performance-of-gpus/)
|
||||
|
||||
My impression is that these questions don't have the immediacy of the previous example about startups failing; they aren't incredibly connected to impending decisions. You could draft questions which are more connected to impending decisions, like asking about whether specific AI safety research agendas would succeed, whether AI safety organizations that were previously funded would be funded again, or about how Open Philanthropy would evaluate its own AI safety grant-making in the future. However, these might be worse qua forecasting questions, or at least less Metaculus-like.
|
||||
|
||||
Overall, my impression is that forecasting questions about AI haven't been tied to specific decisions in a way that would make them incredibly valuable. This is curious, because if we look at the recent intellectual history of forecasting, its original raison d'être was to make US intelligence reports more useful, and those reports were directly tied to decisions. But now forecasts are presented separately. In our experience, it has often been more meaningful for forecasters to look in depth at a topic, and then produce a report which contains predictions, rather than producing predictions alone. But this doesn't happen often.
|
||||
|
||||
### The phenomena of interest are really imprecise
|
||||
|
||||
Misha Yagudin recalls that he knows of at least five different operationalizations of "human-level AGI". "Existential risk" is also ambiguous: does it refer to human extinction? or to losing a large fraction of possible human potential? if so, how is "human potential" specified?
|
||||
|
||||
To deal with this problem, one can:
|
||||
|
||||
- Not spend much time on operationalization, and accept that different forecasters will be talking about slightly different concepts.
|
||||
- Try to specify concepts as precisely as possible, which involves a large amount of effort.
|
||||
|
||||
Neither of those options is great. Although some platforms like Manifold Markets and Polymarket are experimenting with under-specified questions, forecasting seems to work best when working with clear definitions. And the fact that this is expensive to do makes the topic of AI a bit of a bad fit for forecasting.
|
||||
|
||||
CSET had a great report trying to address this difficulty: [Future Indices](https://search.nunosempere.com/search?q=Future%20Indices). By having a few somewhat overlapping questions on a topic, e.g., a few distinct operationalizations of AGI, or a few proxies that capture different aspects of a domain of interest, we can have a summary index that better captures the fuzzy concept that we are trying to reason about than any one imperfect question.
|
||||
|
||||
That approach does make dealing with imprecise phenomena easier. But it increases costs, and a bundle of very similar questions can sometimes be dull to forecast on. It also doesn't solve this problem completely—some concepts, like "disempowering humanity", still remain very ambiguous.
|
||||
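Here is a minimal sketch of the index idea in Python, with hypothetical operationalizations and weights; it is not CSET's actual methodology, just a weighted average to show the shape of the thing:

```python
# Hypothetical overlapping questions probing one fuzzy concept ("AGI by 2040"),
# each with a current forecast and a weight reflecting how central it seems.
questions = [
    ("Passes an adversarial 2-hour Turing test by 2040", 0.55, 1.0),
    ("Can do 95% of remote-work tasks at median-human level by 2040", 0.40, 1.5),
    ("A single system wins gold at the IMO and tops a major ML benchmark by 2040", 0.60, 0.5),
]

def composite_index(items):
    """Weighted average of probabilities: one rough summary of the fuzzy concept."""
    total_weight = sum(w for _, _, w in items)
    return sum(p * w for _, p, w in items) / total_weight

print(f"composite 'AGI by 2040' index: {composite_index(questions):.2f}")
# Any single operationalization is imperfect; the index trades precision in the
# definition for some robustness to any one question being off-target.
```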
|
||||
Here are some high-level examples for which operationalization might still be a concern:
|
||||
|
||||
- You might want to ask about whether "AI will go well". The answer depends whether you compare this against "humanity's maximum potential" or with human extinction.
|
||||
- You might want to ask whether any AI startup will "have powers akin to that of a world government".
|
||||
- You might want to ask about whether measures taken by AI labs are "competent".
|
||||
- You might want to ask about whether some AI system is "human-level", and find that there are wildly different operationalizations available for this.
|
||||
|
||||
Here are some lower-level but more specific examples:
|
||||
|
||||
- Asking about FLOPs/$ seems like a tempting abstraction at first, because then you can estimate the FLOPs if the largest experiment is willing to spend $100M, $1B, $10B, etc. However, the abstraction ends up breaking down a bit when you look at specifics.
|
||||
- Dollars are unspecified: For example, consider a group like [Inflection](https://www.reuters.com/technology/inflection-ai-raises-13-bln-funding-microsoft-others-2023-06-29/), which raises $1B from NVIDIA and Microsoft, and pays NVIDIA and Microsoft $1B to buy the chips and build the datacenters. Then the FLOPs/$ is very under-defined. OpenAI's deal with Microsoft also makes their FLOPS/$ ambiguous. If China becomes involved, their ability to restrict emigration and the pre-eminent role of their government in the economy also makes FLOPs/$ ambiguous.
|
||||
- FLOPS are under-specified. Do you mean 64-bit precision? 16-bit precision? 8-bit precision? Do you count a [multiply-accumulate](https://wikiless.nunosempere.com/wiki/Multiply%E2%80%93accumulate_operation?lang=en) operation as one FLOP or two FLOPs?
|
||||
- Asking about what percentage of labor is automated gets tricky when, instead of automating exactly past labor, you automate a complement. For example, instead of automating a restaurant as is, you design the menu and experience that is most amenable to being automated. Portable music devices don't automate concert halls, they provide a different experience. These differences matter when asking short-term resolvable questions about automation.
|
||||
- You might have some notion of a "leading lab". But operationalizing this is tricky, and simply enumerating current "leading labs" risks them being sidelined by an upstart, or that list not including important Chinese labs, etc. In our case, we have operationalized "leading lab" as "a lab that has performed a training run within 2 OOM of the largest ever at the time of the training run, within the last 2 years", which leans on the inclusive side, but requires keeping good data on what the largest training run is at each point in time, like [here](https://epochai.org/research/ml-trends), which might not be available in the future (see the sketch after this list).
|
||||
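Here is a minimal sketch, in Python with made-up training runs, of how that "leading lab" operationalization could be checked mechanically; real numbers would have to come from a tracker like Epoch's:

```python
from dataclasses import dataclass

@dataclass
class TrainingRun:
    lab: str
    year: float
    flop: float  # training compute, in FLOP

# Illustrative data only.
RUNS = [
    TrainingRun("LabA", 2022.5, 2e25),
    TrainingRun("LabB", 2023.0, 5e24),
    TrainingRun("LabC", 2023.5, 8e22),   # more than 2 OOM below the frontier
    TrainingRun("LabA", 2024.0, 1e26),
]

def leading_labs(runs, now, window_years=2.0, oom_slack=2.0):
    """Labs with a run within `oom_slack` OOM of the largest run known
    at the time of that run, performed within the last `window_years`."""
    leaders = set()
    for r in runs:
        if now - r.year > window_years:
            continue
        frontier = max(x.flop for x in runs if x.year <= r.year)
        if r.flop >= frontier / (10 ** oom_slack):
            leaders.add(r.lab)
    return leaders

print(leading_labs(RUNS, now=2024.5))  # {'LabA', 'LabB'}; LabC falls outside 2 OOM
```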
|
||||
### Many questions don't resolve until it's already too late
|
||||
|
||||
Some of the questions we are most interested in, like "will AI permanently disempower humanity", "will there be a catastrophe caused by an AI system that kills >5%, or >95% of the human population", or "over the long-term, will humanity manage to harness AI to bring forth a flourishing future & achieve humanity's potential?" don't resolve until it's already too late.
|
||||
|
||||
This adds complications, because:
|
||||
|
||||
- Using short-term proxies rather than long-term outcomes brings its own problems
|
||||
- Question resolution after transformative AI poses incentive problems. E.g., the answer incentivized by "will we get unimaginable wealth?" is "no", because if we do get unimaginable wealth, the reward is worth less.
|
||||
- You may have ["prevention paradox"](https://en.wikipedia.org/wiki/Prevention_paradox) and fixed-point problems, where asking a probability reveals that some risk is high, after which you take measures to reduce that risk. You could have asked about the probability conditional on taking no measures, but then you can't resolve the forecasting question.
|
||||
- You can chain forecasts, e.g., ask "what will [another group] predict that the probability of [some future outcome] is, in one year". But this adds layers of indirection and increases operational burdens.
|
||||
|
||||
Another way to frame this is that some stances about how the future of AI will go are unfalsifiable until a hypothesized treacherous turn in which humanity dies, while their holders otherwise don't have strong enough views on short-term developments to be willing to bet on short-term events. That seems to be the takeaway from the [late 2021 MIRI conversations](https://www.lesswrong.com/s/n945eovrA3oDueqtq), which didn't result in a string of $100k bets. While this is a disappointing position to be in, I'm not sure that forecasting can do much here beyond pointing it out.
|
||||
|
||||
### More dataset gathering is needed
|
||||
|
||||
A pillar of Tetlock-style forecasting is looking at historical frequencies and extrapolating trends. For the topic of AI, it might be interesting to do some systematic data gathering, in the style of Our World In Data-type work, on measures like:
|
||||
|
||||
- Algorithmic improvement for [chess/image classification/weather prediction/...]: how much compute do you need for equivalent performance? what performance can you get for equivalent compute?
|
||||
- Price of FLOPs
|
||||
- Size of models
|
||||
- Valuation of AI companies, number of AI companies through time
|
||||
- Number of organizations which have trained a model within 1, 2 OOM of the largest model
|
||||
- Performance on various capability benchmarks
|
||||
- Very noisy proxies: Machine learning papers uploaded to arXiv, mentions in political speeches, mentions in American legislation, Google n-gram frequency, mentions in major newspaper headlines, patents, number of PhD students, number of Sino-American collaborations, etc.
|
||||
- Answers to AI Impacts' survey of ML researchers through time
|
||||
- Funding directed to AI safety through time
|
||||
|
||||
Note that datasets for some of these exist, but systematic data collection and presentation in the style of [Our World In Data](https://ourworldindata.org/) would greatly simplify creating forecasting pipelines about these questions, and also produce an additional tool for figuring out "what is going on" at a high level with AI. As an example, there is a difference between "Katja Grace polls ML researchers every few years", and "there are pipelines in place to make sure that that survey happens regularly, and forecasting questions are automatically created five years in advance and included in forecasting tournaments with well-known rewards". [Epoch](https://epochai.org/) is doing some good work in this domain.
|
||||
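As a sketch of the "extrapolating trends" part, here is what fitting an exponential to a FLOP-price series might look like in Python; the price points below are invented stand-ins, not real data:

```python
import math

# Invented (year, price in $ per GFLOP) pairs, standing in for a real dataset.
series = [(2016, 0.20), (2018, 0.08), (2020, 0.03), (2022, 0.012)]

def fit_exponential(points):
    """Least-squares fit of log(price) = a + b * year; returns (a, b)."""
    n = len(points)
    xs = [x for x, _ in points]
    ys = [math.log(y) for _, y in points]
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    b_num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    b_den = sum((x - x_mean) ** 2 for x in xs)
    b = b_num / b_den
    return y_mean - b * x_mean, b

a, b = fit_exponential(series)
for year in (2025, 2030):
    print(f"{year}: projected ~${math.exp(a + b * year):.4f} per GFLOP")
print(f"implied halving time ≈ {math.log(0.5) / b:.1f} years")
# A maintained dataset would let questions like "price per GFLOP in 2030"
# resolve mechanically instead of requiring ad-hoc digging each time.
```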
|
||||
### Forecasting AI hits the limits of Bayesianism in general
|
||||
|
||||
One could answer worries about Tetlock-style forecasting by saying: sure, that particular brand of forecasting isn't known to work on long-term predictions. But we have good theoretical reasons to think that Bayesianism is a good model of a perfect reasoner: see for example the review of [Cox's theorem](https://en.wikipedia.org/wiki/Cox%27s_theorem) in the first few chapters of [Probability Theory: The Logic of Science](https://annas-archive.org/md5/ddec0cf1982afa288d61db3e1f7d9323). So the thing that we should be doing is some version of subjective Bayesianism: keeping track of evidence and expressing and sharpening our beliefs with further evidence. See [here](https://nunosempere.com/blog/2022/08/31/on-cox-s-theorem-and-probabilistic-induction/) for a blog post making this argument at greater length, though still informally.
|
||||
|
||||
But Bayesianism is a good model of a perfect reasoner with *infinite compute* and *infinite memory*, and in particular access to a bag of hypotheses which contains the true hypothesis. However, humans don't have infinite compute, and sometimes don't have the correct hypothesis in mind. [Knightian uncertainty](https://en.wikipedia.org/wiki/Knightian_uncertainty) and [Kuhnian revolutions](https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions)[^kuhn], [Black swans](https://en.wikipedia.org/wiki/Black_swan_theory) or [ambiguity aversion](https://en.wikipedia.org/wiki/Ambiguity_aversion) can be understood as consequences of normally being able to get around being approximately Bayesian, but sometimes getting bitten by that approximation being bounded and limited.
|
||||
|
||||
[^kuhn]: To spell this out more clearly, Kuhn was looking at the structure of scientific revolutions, and he noticed that you have these "paradigm changes" every now and then. To a naïve Bayesian, those paradigm changes are kinda confusing, and shouldn't have any special status: you should just have hypotheses, and they should just rise and fall in likelihood according to Bayes' rule. But as a Bayesian who knows he has finite compute/memory, you can think of Kuhnian revolutions as encountering a true hypothesis which was outside your previous hypothesis space, and having to recalculate. On this topic, see [Just-in-time Bayesianism](https://nunosempere.com/blog/2023/02/04/just-in-time-bayesianism/) or [A computable version of Solomonoff induction](https://nunosempere.com/blog/2023/03/01/computable-solomonoff/).
|
||||
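Here is a toy sketch, in Python with invented hypotheses and likelihoods, of the bookkeeping that footnote gestures at: ordinary Bayesian updating, plus a "monkey-patch" step for when a hypothesis you hadn't considered shows up.

```python
def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

# Start with whatever hypotheses you happen to have, and a prior over them.
beliefs = normalize({"slow takeoff": 0.5, "fast takeoff": 0.3, "no takeoff": 0.2})

def update(beliefs, likelihoods):
    """Standard Bayes update: multiply by P(evidence | hypothesis), renormalize."""
    return normalize({h: p * likelihoods[h] for h, p in beliefs.items()})

def add_hypothesis(beliefs, name, prior_mass):
    """Just-in-time step: a hypothesis you hadn't considered enters the pool,
    carved out of the existing probability mass, then everything renormalizes."""
    patched = {h: p * (1 - prior_mass) for h, p in beliefs.items()}
    patched[name] = prior_mass
    return normalize(patched)

# Some evidence arrives (invented likelihoods under each hypothesis).
beliefs = update(beliefs, {"slow takeoff": 0.6, "fast takeoff": 0.2, "no takeoff": 0.2})

# Then something like LLM scaling shows up, which none of the hypotheses anticipated.
beliefs = add_hypothesis(beliefs, "takeoff via scaled-up LLMs", prior_mass=0.25)
print(beliefs)
```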
|
||||
So there are some situations where we can get along by being approximately Bayesian, like coin flips and blackjack tables; some domains where we pull our hair and accept that we don't have infinite compute, like maybe some turbulent and chaotic physical systems or trying to predict dreams; and then some domains in which our ability to predict is meaningfully improving with time, like weather forecasts, where we can throw supercomputers and PhD students at the problem, because we care.
|
||||
|
||||
Now the question is where AI in particular falls within that spectrum. Personally, I suspect that it is a domain in which we are likely to not have the correct hypothesis in our prior set of hypotheses. For example, observers in general, but also the [Machine Intelligence Research Institute](https://intelligence.org/) in particular, failed to predict the rise of LLMs and to orient their efforts toward making such systems safer, or toward preventing such systems from coming into existence. I think this tweet, though maybe meant to be hurtful, is also informative about how tricky of a domain predicting AI progress is:
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">eliezer has IMO done more to accelerate AGI than anyone else.<br><br>certainly he got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc.</p>— Sam Altman (@sama) <a href="https://twitter.com/sama/status/1621621724507938816?ref_src=twsrc%5Etfw">February 3, 2023</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
However, consider the following caveat: imagine that instead of being interested in AI progress, we were interested in social science, and concerned that social scientists couldn't arrive at the correct conclusion in cases where it was Republican-flavored. Then, one could notice that moving from p-values to likelihood ratios and Bayesian calculations wouldn't particularly help, since Bayesianism doesn't work unless your prior assigns sufficiently high probability to the correct hypothesis. In this case, I think one easy mistake to make might be to just shrug and keep using p-values.
|
||||
|
||||
Similarly, for AI progress, one could notice that there is this subtle critique of forecasting and Bayesianism, and move to using, I don't know, scenario planning, which, arguendo, could be even worse: it could assume even more strongly that you know the shape of events to come, or fail to provide mechanisms for noticing that none of your hypotheses are worth much. I think that would be a mistake.
|
||||
|
||||
### Forecasting also has a bunch of other limitations as a genre
|
||||
|
||||
You can see forecasting as a type of genre. In it, someone writes a forecasting question, that question is deemed sufficiently robust, and then forecasters produce probabilities on it. As a genre, it has some limitations. For instance, when curious about a topic, not all roads lead to forecasting questions, and working in a project such that you *have* to produce forecasting questions could be oddly limiting.
|
||||
|
||||
The conventions of the forecasting genre also dictate that forecasters will spend a fairly short amount of time researching before making a prediction. Partly this is a result of, for example, the scoring rule in Metaculus, which incentivizes forecasting on many questions. Partly this is because forecasting platforms don't generally pay their forecasters, and even those that are [well funded](https://www.openphilanthropy.org/grants/applied-research-laboratory-for-intelligence-and-security-forecasting-platforms/) pay their forecasters badly, which leads to forecasting being a hobby, rather than a full-time occupation. If one thinks that some questions require one to dig deep, and that one will otherwise easily produce shitty forecasts, this might be a particularly worrying feature of the genre.
|
||||
|
||||
Perhaps also as a result of its unprofitability, the forecasting community has also tended to see a large amount of churn, as hobbyist forecasters rise up in their regular careers and it becomes more expensive for them in terms of income lost to forecast on online platforms. You also see this churn in terms of employees of these forecasting platforms, where maybe someone creates some new project—e.g., Replication Markets, Metaculus' AI Progress Tournament, Ought's Elicit, etc.—but then that project dies as its principal person moves on to other topics.
|
||||
|
||||
Forecasting also makes use of scoring rules, which aim to reward forecasters such that they will be incentivized to input their true probabilities. Sadly, these often have the effect of incentivizing people to not collaborate and share information. This can be fixed by using more capital-intensive scoring rules that incentivize collaboration, like [these ones](https://github.com/SamotsvetyForecasting/optimal-scoring) or by grouping forecasters into teams such that they will be incentivized to share information within a team.
|
||||
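For concreteness, here is a minimal sketch of two standard proper scoring rules in Python; "proper" means that, in expectation, you do best by reporting your true probability, which the small grid check below illustrates.

```python
import math

def brier_score(p, outcome):        # lower is better
    return (p - outcome) ** 2

def log_score(p, outcome):          # higher is better
    return math.log(p if outcome else 1 - p)

def expected_brier(report, true_p):
    """Expected Brier score if the world really has probability true_p."""
    return true_p * brier_score(report, 1) + (1 - true_p) * brier_score(report, 0)

true_p = 0.7
best = min((expected_brier(r / 100, true_p), r / 100) for r in range(1, 100))
print(f"expected Brier is minimized by reporting {best[1]:.2f} when the truth is {true_p}")
# Proper scoring elicits honest individual probabilities, but by itself gives no
# credit for sharing information with others, hence the teams/collaboration fixes.
```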
|
||||
### As an aside, here is a casual review of the track record of long-term predictions
|
||||
|
||||
If we review the track record of superforecasters on longer term questions, we find that... there isn't that much evidence here—remember that the [ACE program](https://wikiless.nunosempere.com/wiki/Aggregative_Contingent_Estimation_Program?lang=en) started in 2010. In *Superforecasting* (2015), Tetlock wrote:
|
||||
|
||||
> Taleb, Kahneman, and I agree there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious—“there will be conflicts”—and the odd lucky hits that are inevitable whenever lots of forecasters make lots of forecasts. These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. In my EPJ research, the accuracy of expert predictions declined toward chance five years out. And yet, this sort of forecasting is common, even within institutions that should know better.
|
||||
|
||||
However, on p. 33 of [Long-Range Subjective-Probability Forecasts of Slow-Motion Variables in World Politics: Exploring Limits on Expert Judgment](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4377599) (2023), we see that the experts predicting "slow-motion variables" 25 years into the future attain a Brier score of 0.07, which isn't terrible.
|
||||
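To put that 0.07 into context, here is a quick back-of-the-envelope computation in Python, using the binary (p − o)² convention and made-up forecaster profiles rather than the paper's data:

```python
def avg_brier(forecasts):
    """forecasts: list of (probability given, 1/0 outcome) pairs."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A perfectly calibrated forecaster who always says 80% on events that
# occur 80% of the time (10 questions: 8 happen, 2 don't).
calibrated_80 = [(0.8, 1)] * 8 + [(0.8, 0)] * 2
# Someone who just says 50% on everything.
coin_flipper = [(0.5, 1)] * 8 + [(0.5, 0)] * 2

print(f"calibrated 80/20 forecaster: {avg_brier(calibrated_80):.3f}")  # 0.160
print(f"always-50% forecaster:       {avg_brier(coin_flipper):.3f}")   # 0.250
# A Brier of ~0.07 is what you get by, e.g., assigning ~74% to outcomes
# that then (almost) always happen.
```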
|
||||
Karnofsky, the erstwhile head-honcho of Open Philanthropy, [spins](https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/) some research by Arb and others as saying that the track record of futurists is "fine". [Here](https://danluu.com/futurist-predictions/) is a more thorough post by Dan Luu which concludes that:
|
||||
|
||||
> ...people who were into "big ideas" who use a few big hammers on every prediction combined with a cocktail party idea level of understanding of the particular subject to explain why a prediction about the subject would fall to the big hammer generally fared poorly, whether or not their favored big ideas were correct. Some examples of "big ideas" would be "environmental doomsday is coming and hyperconservation will pervade everything", "economic growth will create near-infinite wealth (soon)", "Moore's law is supremely important", "quantum mechanics is supremely important", etc. Another common trait of poor predictors is lack of anything resembling serious evaluation of past predictive errors, making improving their intuition or methods impossible (unless they do so in secret). Instead, poor predictors often pick a few predictions that were accurate or at least vaguely sounded similar to an accurate prediction and use those to sell their next generation of predictions to others.
|
||||
>
|
||||
> By contrast, people who had (relatively) accurate predictions had a deep understanding of the problem and also tended to have a record of learning lessons from past predictive errors. Due to the differences in the data sets between this post and Tetlock's work, the details are quite different here. The predictors that I found to be relatively accurate had deep domain knowledge and, implicitly, had access to a huge amount of information that they filtered effectively in order to make good predictions. Tetlock was studying people who made predictions about a wide variety of areas that were, in general, outside of their areas of expertise, so what Tetlock found was that people really dug into the data and deeply understood the limitations of the data, which allowed them to make relatively accurate predictions. But, although the details of how people operated are different, at a high-level, the approach of really digging into specific knowledge was the same.
|
||||
|
||||
### In comparison with other mechanisms for making sense of future AI developments, forecasting does OK.
|
||||
|
||||
Here are some mechanisms that the Effective Altruism community has historically used to try to make sense of possible dangers stemming from future AI developments:
|
||||
|
||||
- Books, like Bostrom's *Superintelligence*, which focused on the abstract properties of highly intelligent and capable agents in the limit.
|
||||
- [Reports](https://www.openphilanthropy.org/research/?q=&focus-area%5B%5D=potential-risks-advanced-ai&content-type%5B%5D=research-reports) by Open Philanthropy. They either try to model AI progress in some detail, like [example 1](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines), or look at priors on technological development, like [example 2](https://www.openphilanthropy.org/research/semi-informative-priors-over-ai-timelines/).
|
||||
- Mini think tanks, like Rethink Priorities, Epoch or AI Impacts, which produce their own research and reports.
|
||||
- Larger think tanks, like CSET, which produce reports like [this one](https://cset.georgetown.edu/publication/future-indices/) on Future Indices.
|
||||
- Online discussion on lesswrong.com, which typically assumes things like: that intelligence gains would be fast and explosive, that we should aim to design a mathematical construction that guarantees safety, that iteration would not be advisable in the face of fast intelligence gains, etc.
|
||||
- Occasionally, theoretical or mathematical arguments or models of risk.
|
||||
- One-off projects, like Drexler's [Comprehensive AI Services](https://www.fhi.ox.ac.uk/reframing/)
|
||||
- Questions on forecasting platforms, like Metaculus, that try to solidly operationalize possible AI developments and dangers, and ask their forecasters to anticipate when and whether they will happen.
|
||||
- Writeups from forecasting groups, like [Samotsvety](https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts)
|
||||
- More recently, the Forecasting Research Institute's [existential risk tournament/experiment writeup](https://forecastingresearch.org/xpt), which has tried to translate geopolitical forecasting mechanisms to predicting AI progress, with mixed success.
|
||||
- Deferring to intellectuals, ideologues, and cheerleaders, like Toby Ord, Yudkowsky or MacAskill.
|
||||
|
||||
None of these options, as they currently exist, seem great. Forecasting has the hurdles discussed above, but maybe other mechanisms have even worse downsides, particularly the more pundit-like ones. Conversely, forecasting will be worse than deferring to a brilliant theoretical mind that is able to grasp the dynamics and subtleties of future AI development, like perhaps Drexler's on a good day.
|
||||
|
||||
Anyways, you might think that this forecasting thing shows potential. Were you a billionaire, money would not be a limitation for you, so...
|
||||
|
||||
### In this situation, here are some strategies of which you might avail yourself
|
||||
|
||||
#### A. Accept the Faustian bargain
|
||||
|
||||
1. Make a bunch of short-term and long-term forecasting questions on AI progress
|
||||
2. Wait for the short-term forecasting questions to resolve
|
||||
3. Weight the forecasts for the long-term questions according to accuracy in the short-term questions
|
||||
|
||||
This is a Faustian bargain because of the reasons reviewed above, chiefly that short-term forecasting performance is not a guarantee of longer-term forecasting performance. A cheap version of this would be to look at the best short-term forecasters on the AI categories on Metaculus, and report their probabilities on a few AI and existential risk questions, which would be more interpretable than the current opaque "Metaculus prediction".
|
||||
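Here is a minimal sketch of step 3 in Python, with invented forecasters and numbers; inverse-Brier weighting is just one arbitrary choice among many:

```python
# Invented data: each forecaster's average Brier on resolved short-term AI
# questions, and their probability on some long-term question.
forecasters = {
    "alice":   {"short_term_brier": 0.10, "long_term_p": 0.30},
    "bob":     {"short_term_brier": 0.20, "long_term_p": 0.10},
    "charlie": {"short_term_brier": 0.05, "long_term_p": 0.55},
}

def weighted_long_term(fs):
    """Inverse-Brier weighting; other weighting schemes are just as defensible."""
    weights = {name: 1.0 / d["short_term_brier"] for name, d in fs.items()}
    total = sum(weights.values())
    return sum(weights[name] * fs[name]["long_term_p"] for name in fs) / total

print(f"weighted long-term probability: {weighted_long_term(forecasters):.2f}")
# The Faustian part: nothing here checks that short-term skill actually
# transfers to the long-term question being aggregated.
```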
|
||||
If you think that your other methods of making sense of what is going on are sufficiently bad, you could choose this and hope for the best? Or, conversely, you could anchor your beliefs on a weighted aggregate of the best short-term forecasters and the most convincing theoretical views. Maybe things will be fine?
|
||||
|
||||
#### B. Attempt to do a Bayesianism
|
||||
|
||||
Go to the effort of rigorously formulating hypotheses, then keep track of incoming evidence for each hypothesis. If a new hypothesis comes in, try to do some version of [just-in-time Bayesianism](https://nunosempere.com/blog/2023/02/04/just-in-time-bayesianism/), i.e., monkey-patch it after the fact. Once you are specifying your beliefs numerically, you can deploy some cute incentive mechanisms and [reward people who change your mind](https://github.com/SamotsvetyForecasting/optimal-scoring/blob/master/3-amplify-bayesian/amplify-bayesian.pdf).
|
||||
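One toy version of the "reward people who change your mind" part, as a Python sketch; the payment rule here is my own illustration, not the scheme in the linked write-up:

```python
import math

def log_odds(p):
    return math.log(p / (1 - p))

def mind_change_reward(p_before, p_after, dollars_per_bit=10.0):
    """Pay in proportion to the belief movement, measured in bits of log-odds."""
    bits_moved = abs(log_odds(p_after) - log_odds(p_before)) / math.log(2)
    return dollars_per_bit * bits_moved

# You thought some AI milestone had probability 0.2; after an argument, 0.45.
print(f"reward: ${mind_change_reward(0.20, 0.45):.2f}")
# Naive and exploitable as stated (someone could move you back and forth);
# a real scheme would need more care.
```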
|
||||
Hope that keeping track of hypotheses about the development of AI at least gives you some discipline, and enables you to shed untrue hypotheses or frames a bit earlier than you otherwise would have. Have the discipline to translate the worldviews of various pundits into specific probabilities[^tetlock], and listen to them less when their predictions fail to come true. And hope that going to the trouble of doing things that way allows you to anticipate stuff 6 months to 2 years sooner than you would have otherwise, and that it is worth the cost.
|
||||
|
||||
[^tetlock]: Back in the day, Tetlock received a [grant](https://www.openphilanthropy.org/grants/university-of-pennsylvania-philip-tetlock-on-forecasting/#2-about-the-grant) to "systematically convert vague predictions made by prominent pundits into explicit numerical forecasts", but I haven't been able to track what happened to it, and I suspect it never happened.
|
||||
|
||||
#### C. Invest in better prediction pipelines as a whole
|
||||
|
||||
Try to build up some more speculative and [formidable](https://nunosempere.com/blog/2023/07/19/better-harder-faster-stronger/) type of forecasting that can deal with the hurdles above. Be more explicit about the types of decisions that you want better foresight for, realize that you don't have the tools you need, and build someone up to be that for you.
|
|
@ -0,0 +1,5 @@
|
|||
<p>
|
||||
<section id='isso-thread'>
|
||||
<noscript>javascript needs to be activated to view comments.</noscript>
|
||||
</section>
|
||||
</p>
|
5
blog/2023/11/07/hurdles-forecasting-ai/.src/makefile
Normal file
|
@ -0,0 +1,5 @@
|
|||
MARKDOWN=/usr/bin/markdown -f fencedcode -f ext -f footnote -f latex
|
||||
build:
|
||||
$(MARKDOWN) index.md > temp
|
||||
cat title.md temp isso-snippet.txt > ../index.md
|
||||
rm temp
|
3
blog/2023/11/07/hurdles-forecasting-ai/.src/title.md
Normal file
|
@ -0,0 +1,3 @@
|
|||
Hurdles of using forecasting as a tool for making sense of AI progress
|
||||
======================================================================
|
||||
|
234
blog/2023/11/07/hurdles-forecasting-ai/index.md
Normal file
|
@ -0,0 +1,234 @@
|
|||
Hurdles of using forecasting as a tool for making sense of AI progress
|
||||
======================================================================
|
||||
|
||||
<h3>Introduction</h3>
|
||||
|
||||
<p>In recent years there have been various attempts at using forecasting to discern the shape of the future development of artificial intelligence, like the <a href="https://www.metaculus.com/tournament/ai-progress/">AI progress Metaculus tournament</a>, the Forecasting Research Institute’s <a href="https://forum.effectivealtruism.org/posts/un42vaZgyX7ch2kaj/announcing-forecasting-existential-risks-evidence-from-a">existential risk forecasting tournament/experiment</a>, <a href="https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts">Samotsvety forecasts</a> on the topic of AI progress and dangers, or various questions on <a href="https://www.infer-pub.com">INFER</a> on short-term technological progress.</p>
|
||||
|
||||
<p>Here is a list of reasons, written with early input from Misha Yagudin, on why using forecasting to make sense of AI developments can be tricky, as well as some casual suggestions of ways forward.</p>
|
||||
|
||||
<h3>Excellent forecasters and Superforecasters™ have an imperfect fit for long-term questions</h3>
|
||||
|
||||
<p>Here are some reasons why we might expect longer-term predictions to be more difficult:</p>
|
||||
|
||||
<ol>
|
||||
<li>No fast feedback loops for long-term questions. You can’t get that many predict/check/improve cycles, because questions many years into the future, tautologically, take many years to resolve. There are shortcuts, like this <a href="https://www.quantifiedintuitions.org/pastcasting">past-casting</a> app, but they are imperfect.</li>
|
||||
<li>It’s possible that short-term forecasters might acquire habits and intuitions that are good for forecasting short-term events, but bad for forecasting longer-term outcomes. For example, “things will change more slowly than you think” is a good heuristic to acquire for short-term predictions, but might be a bad heuristic for longer-term predictions, in the same sense that “people overestimate what they can do in a week, but underestimate what they can do in ten years”. This might be particularly insidious to the extent that forecasters acquire intuitions which they can see are useful, but can’t tell where they come from. In general, it seems unclear to what extent short-term forecasting skills would generalize to skill at longer-term predictions.</li>
|
||||
<li>“Predict no change” in particular might do well, until it doesn’t. Consider a world which has a 2% probability of seeing a worldwide pandemic, or some other large catastrophe. Then on average it will take 50 years for one to occur. But at that point, those predicting a 2% will have a poorer track record compared to those who are predicting a ~0%.</li>
|
||||
<li>In general, we have been in a period of comparative technological stagnation, and forecasters might be adapted to that, in the same way that e.g., startups adapted to low interest rates.</li>
|
||||
<li>Sub-sampling artifacts within good short-term forecasters are tricky. For example, my forecasting group Samotsvety is relatively bullish on transformative technological change from AI, whereas the Forecasting Research Institute’s pick of forecasters for their existential risk survey was more bearish.</li>
|
||||
</ol>
|
||||
|
||||
|
||||
<h3>Forecasting loses value when decontextualized, and current forecasting seems pretty decontextualized</h3>
|
||||
|
||||
<p>Forecasting seems more valuable when it is commissioned to inform a specific decision. For instance, suppose that you were thinking of starting a new startup. Then it would be interesting to look at:</p>
|
||||
|
||||
<ul>
|
||||
<li>The base rate of success for startups</li>
|
||||
<li>The base rate of success for all new businesses</li>
|
||||
<li>The base rate of success for startups that your friends and wider social circle have started</li>
|
||||
<li>Your personal rate of success at things in life</li>
|
||||
<li>The inside view: decomposing the space between now and potential success into steps and giving explicit probabilities to each step</li>
|
||||
<li>etc.</li>
|
||||
</ul>
|
||||
|
||||
|
||||
<p>With this in mind, you could estimate the distribution of monetary returns to starting a startup, vs e.g., remaining an employee somewhere, and make the decision about what to do next with that estimate as an important factor.</p>
|
||||
|
||||
<p>But our impression is that AI forecasting hasn’t been tied to specific decisions like that. Instead, it has tended to ask questions that might contribute to an “holistic understanding” of the field. For example, look at <a href="https://www.metaculus.com/tournament/ai-progress/">Metaculus' AI progress tournament</a>. The first few questions are:</p>
|
||||
|
||||
<ul>
|
||||
<li><a href="https://www.metaculus.com/questions/6299/nlo-e-prints-2021-01-14-to-2030-01-14/">How many Natural Language Processing e-prints will be published on arXiv over the 2021-01-14 to 2030-01-14 period?</a></li>
|
||||
<li><a href="https://www.metaculus.com/questions/5958/it-as--of-gdp-in-q4-2030/">What percent will software and information services contribute to US GDP in Q4 of 2030?</a></li>
|
||||
<li><a href="https://www.metaculus.com/questions/11241/top-price-performance-of-gpus/">What will be the average top price performance (in G3D Mark /$) of the best available GPU on the following dates?</a></li>
|
||||
</ul>
|
||||
|
||||
|
||||
<p>My impression is that these questions don’t have the immediacy of the previous example about startups failing; they aren’t incredibly connected to impending decisions. You could draft questions which are more connected to impending decisions, like asking about whether specific AI safety research agendas would succeed, whether AI safety organizations that were previously funded would be funded again, or about how Open Philanthropy would evaluate its own AI safety grant-making in the future. However, these might be worse qua forecasting questions, or at least less Metaculus-like.</p>
|
||||
|
||||
<p>Overall, my impression is that forecasting questions about AI haven’t been tied to specific decisions in a way that would make them incredibly valuable. This is curious, because if we look at the recent intellectual history of forecasting, its original raison d'être was to make US intelligence reports more useful, and those reports were directly tied to decisions. But now forecasts are presented separately. In our experience, it has often been more meaningful for forecasters to look in depth at a topic, and then produce a report which contains predictions, rather than producing predictions alone. But this doesn’t happen often.</p>
|
||||
|
||||
<h3>The phenomena of interest are really imprecise</h3>
|
||||
|
||||
<p>Misha Yagudin recalls that he knows of at least five different operationalizations of “human-level AGI”. “Existential risk” is also ambiguous: does it refer to human extinction? or to losing a large fraction of possible human potential? if so, how is “human potential” specified?</p>
|
||||
|
||||
<p>To deal with this problem, one can:</p>
|
||||
|
||||
<ul>
|
||||
<li>Not spend much time on operationalization, and accept that different forecasters will be talking about slightly different concepts.</li>
|
||||
<li>Try to specify concepts as precisely as possible, which involves a large amount of effort.</li>
|
||||
</ul>
|
||||
|
||||
|
||||
<p>Neither of those options is great. Although some platforms like Manifold Markets and Polymarket are experimenting with under-specified questions, forecasting seems to work best when working with clear definitions. And the fact that this is expensive to do makes the topic of AI a bit of a bad fit for forecasting.</p>
|
||||
|
||||
<p>CSET had a great report trying to address this difficulty: <a href="https://search.nunosempere.com/search?q=Future%20Indices">Future Indices</a>. By having a few somewhat overlapping questions on a topic, e.g., a few distinct operationalizations of AGI, or a few proxies that capture different aspects of a domain of interest, we can have a summary index that better captures the fuzzy concept that we are trying to reason about than any one imperfect question.</p>
|
||||
|
||||
<p>That approach does make dealing with imprecise phenomena easier. But it increases costs, and a bundle of very similar questions can sometimes be dull to forecast on. It also doesn’t solve this problem completely—some concepts, like “disempowering humanity”, still remain very ambiguous.</p>
|
||||
|
||||
<p>Here are some high-level examples for which operationalization might still be a concern:</p>
|
||||
|
||||
<ul>
|
||||
<li>You might want to ask about whether “AI will go well”. The answer depends whether you compare this against “humanity’s maximum potential” or with human extinction.</li>
|
||||
<li>You might want to ask whether any AI startup will “have powers akin to that of a world government”.</li>
|
||||
<li>You might want to ask about whether measures taken by AI labs are “competent”.</li>
|
||||
<li>You might want to ask about whether some AI system is “human-level”, and find that there are wildly different operationalizations available for this</li>
|
||||
</ul>
|
||||
|
||||
|
||||
<p>Here are some lower-level but more specific examples:</p>
|
||||
|
||||
<ul>
|
||||
<li>Asking about FLOPs/$ seems like a tempting abstraction at first, because then you can estimate the FLOPs if the largest experiment is willing to spend $100M, $1B, $10B, etc. However, the abstraction ends up breaking down a bit when you look at specifics.
|
||||
|
||||
<ul>
|
||||
<li>Dollars are unspecified: For example, consider a group like <a href="https://www.reuters.com/technology/inflection-ai-raises-13-bln-funding-microsoft-others-2023-06-29/">Inflection</a>, which raises $1B from NVIDIA and Microsoft, and pays NVIDIA and Microsoft $1B to buy the chips and build the datacenters. Then the FLOPs/$ is very under-defined. OpenAI’s deal with Microsoft also makes their FLOPS/$ ambiguous. If China becomes involved, their ability to restrict emigration and the pre-eminent role of their government in the economy also makes FLOPs/$ ambiguous.</li>
|
||||
<li>FLOPS are under-specified. Do you mean 64-bit precision? 16-bit precision? 8-bit precision? Do you count a <a href="https://wikiless.nunosempere.com/wiki/Multiply%E2%80%93accumulate_operation?lang=en">multiply-accumulate</a> operation as one FLOP or two FLOPs?</li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>Asking about what percentage of labor is automated gets tricky when, instead of automating exactly past labor, you automate a complement. For example, instead of automating a restaurant as is, you design the menu and experience that is most amenable to being automated. Portable music devices don’t automate concert halls, they provide a different experience. These differences matter when asking short-term resolvable questions about automation.</li>
|
||||
<li>You might have some notion of a “leading lab”. But operationalizing this is tricky, and simply enumerating current “leading labs” risks them being sidelined by an upstart, or that list not including important Chinese labs, etc. In our case, we have operationalized “leading lab” as “a lab that has performed a training run within 2 OOM of the largest ever at the time of the training run, within the last 2 years”, which leans on the inclusive side, but requires keeping good data on what the largest training run is at each point in time, like <a href="https://epochai.org/research/ml-trends">here</a>, which might not be available in the future.</li>
|
||||
</ul>
|
||||
|
||||
|
||||
<h3>Many questions don’t resolve until it’s already too late</h3>
|
||||
|
||||
<p>Some of the questions we are most interested in, like “will AI permanently disempower humanity”, “will there be a catastrophe caused by an AI system that kills >5%, or >95% of the human population”, or “over the long-term, will humanity manage to harness AI to bring forth a flourishing future & achieve humanity’s potential?” don’t resolve until it’s already too late.</p>
|
||||
|
||||
<p>This adds complications, because:</p>
|
||||
|
||||
<ul>
|
||||
<li>Using short-term proxies rather than long-term outcomes brings its own problems</li>
|
||||
<li>Question resolution after transformative AI poses incentive problems. E.g., the answer incentivized by “will we get unimaginable wealth?” is “no”, because if we do get unimaginable wealth, the reward is worth less.</li>
|
||||
<li>You may have <a href="https://en.wikipedia.org/wiki/Prevention_paradox">“prevention paradox”</a> and fixed-point problems, where asking a probability reveals that some risk is high, after which you take measures to reduce that risk. You could have asked about the probability conditional on taking no measures, but then you can’t resolve the forecasting question.</li>
|
||||
<li>You can chain forecasts, e.g., ask “what will [another group] predict that the probability of [some future outcome] is, in one year”. But this adds layers of indirection and increases operational burdens.</li>
|
||||
</ul>
|
||||
|
||||
|
||||
<p>Another way to frame this is that some stances about how the future of AI will go are unfalsifiable until a hypothesized treacherous turn in which humanity dies, while their holders otherwise don’t have strong enough views on short-term developments to be willing to bet on short-term events. That seems to be the takeaway from the <a href="https://www.lesswrong.com/s/n945eovrA3oDueqtq">late 2021 MIRI conversations</a>, which didn’t result in a string of $100k bets. While this is a disappointing position to be in, I’m not sure that forecasting can do much here beyond pointing it out.</p>
|
||||
|
||||
<h3>More dataset gathering is needed</h3>
|
||||
|
||||
<p>A pillar of Tetlock-style forecasting is looking at historical frequencies and extrapolating trends. For the topic of AI, it might be interesting to do some systematic data gathering, in the style of Our World In Data-type work, on measures like:</p>
|
||||
|
||||
<ul>
|
||||
<li>Algorithmic improvement for [chess/image classification/weather prediction/…]: how much compute do you need for equivalent performance? what performance can you get for equivalent compute?</li>
|
||||
<li>Price of FLOPs</li>
|
||||
<li>Size of models</li>
|
||||
<li>Valuation of AI companies, number of AI companies through time</li>
|
||||
<li>Number of organizations which have trained a model within 1, 2 OOM of the largest model</li>
|
||||
<li>Performance on various capability benchmarks</li>
|
||||
<li>Very noisy proxies: Machine learning papers uploaded to arXiv, mentions in political speeches, mentions in American legislation, Google n-gram frequency, mentions in major newspaper headlines, patents, number of PhD students, number of Sino-American collaborations, etc.</li>
|
||||
<li>Answers to AI Impacts' survey of ML researchers through time</li>
|
||||
<li>Funding directed to AI safety through time</li>
|
||||
</ul>
|
||||
|
||||
|
||||
<p>Note that datasets for some of these exist, but systematic data collection and presentation in the style of <a href="https://ourworldindata.org/">Our World In Data</a> would greatly simplify creating forecasting pipelines about these questions, and also produce an additional tool for figuring out “what is going on” at a high level with AI. As an example, there is a difference between “Katja Grace polls ML researchers every few years”, and “there are pipelines in place to make sure that that survey happens regularly, and forecasting questions are automatically created five years in advance and included in forecasting tournaments with well-known rewards”. <a href="https://epochai.org/">Epoch</a> is doing some good work in this domain.</p>
|
||||
|
||||
<h3>Forecasting AI hits the limits of Bayesianism in general</h3>
|
||||
|
||||
<p>One could answer worries about Tetlock-style forecasting by saying: sure, that particular brand of forecasting isn’t known to work on long-term predictions. But we have good theoretical reasons to think that Bayesianism is a good model of a perfect reasoner: see for example the review of <a href="https://en.wikipedia.org/wiki/Cox%27s_theorem">Cox’s theorem</a> in the first few chapters of <a href="https://annas-archive.org/md5/ddec0cf1982afa288d61db3e1f7d9323">Probability Theory: The Logic of Science</a>. So the thing that we should be doing is some version of subjective Bayesianism: keeping track of evidence and expressing and sharpening our beliefs with further evidence. See <a href="https://nunosempere.com/blog/2022/08/31/on-cox-s-theorem-and-probabilistic-induction/">here</a> for a blog post making this argument at greater length, though still informally.</p>
|
||||
|
||||
<p>But Bayesianism is a good model of a perfect reasoner with <em>infinite compute</em> and <em>infinite memory</em>, and in particular access to a bag of hypotheses which contains the true hypothesis. However, humans don’t have infinite compute, and sometimes don’t have the correct hypothesis in mind. <a href="https://en.wikipedia.org/wiki/Knightian_uncertainty">Knightian uncertainty</a> and <a href="https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions">Kuhnian revolutions</a><sup id="fnref:1"><a href="#fn:1" rel="footnote">1</a></sup>, <a href="https://en.wikipedia.org/wiki/Black_swan_theory">Black swans</a> or <a href="https://en.wikipedia.org/wiki/Ambiguity_aversion">ambiguity aversion</a> can be understood as consequences of normally being able to get around being approximately Bayesian, but sometimes getting bitten by that approximation being bounded and limited.</p>
|
||||
|
||||
<p>So there are some situations where we can get along by being approximately Bayesian, like coin flips and blackjack tables; some domains where we pull our hair and accept that we don’t have infinite compute, like maybe some turbulent and chaotic physical systems or trying to predict dreams; and then some domains in which our ability to predict is meaningfully improving with time, like weather forecasts, where we can throw supercomputers and PhD students at the problem, because we care.</p>
|
||||
|
||||
<p>Now the question is where AI in particular falls within that spectrum. Personally, I suspect that it is a domain in which we are likely to not have the correct hypothesis in our prior set of hypotheses. For example, observers in general, but also the <a href="https://intelligence.org/">Machine Intelligence Research Institute</a> in particular, failed to predict the rise of LLMs and to orient their efforts toward making such systems safer, or toward preventing such systems from coming into existence. I think this tweet, though maybe meant to be hurtful, is also informative about how tricky of a domain predicting AI progress is:</p>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">eliezer has IMO done more to accelerate AGI than anyone else.<br><br>certainly he got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc.</p>— Sam Altman (@sama) <a href="https://twitter.com/sama/status/1621621724507938816?ref_src=twsrc%5Etfw">February 3, 2023</a></blockquote>
<p> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>However, consider the following caveat: imagine that instead of being interested in AI progress, we were interested in social science, and concerned that social scientists couldn’t arrive at the correct conclusion in cases where that conclusion was Republican-flavored. Then, one could notice that moving from p-values to likelihood ratios and Bayesian calculations wouldn’t particularly help, since Bayesianism doesn’t work unless your prior assigns a sufficiently high probability to the correct hypothesis. In this case, I think one easy mistake to make might be to just shrug and keep using p-values.</p>

<p>Similarly, for AI progress, one could notice that there is this subtle critique of forecasting and Bayesianism, and move to using, I don’t know, scenario planning, which, arguendo, could be even worse: it might assume even more strongly that you know the shape of events to come, or fail to provide mechanisms for noticing that none of your hypotheses are worth much. I think that would be a mistake.</p>
<h3>Forecasting also has a bunch of other limitations as a genre</h3>
<p>You can see forecasting as a type of genre. In it, someone writes a forecasting question, that question is deemed sufficiently robust, and then forecasters produce probabilities on it. As a genre, it has some limitations. For instance, when curious about a topic, not all roads lead to forecasting questions, and working on a project such that you <em>have</em> to produce forecasting questions could be oddly limiting.</p>

<p>The conventions of the forecasting genre also dictate that forecasters will spend a fairly short amount of time researching before making a prediction. Partly this is a result of, for example, the scoring rule on Metaculus, which incentivizes forecasting on many questions. Partly this is because forecasting platforms don’t generally pay their forecasters, and even those that are <a href="https://www.openphilanthropy.org/grants/applied-research-laboratory-for-intelligence-and-security-forecasting-platforms/">well funded</a> pay their forecasters badly, which leads to forecasting being a hobby rather than a full-time occupation. If one thinks that some questions require one to dig deep, and that one will otherwise easily produce shitty forecasts, this might be a particularly worrying feature of the genre.</p>

<p>Perhaps also as a result of this unprofitability, the forecasting community has tended to see a large amount of churn, as hobbyist forecasters rise in their regular careers and forecasting on online platforms becomes more expensive for them in terms of income forgone. You also see this churn among employees of forecasting platforms, where maybe someone creates some new project—e.g., Replication Markets, Metaculus' AI Progress Tournament, Ought’s Elicit, etc.—but then that project dies as its principal person moves on to other topics.</p>
<p>Forecasting also makes use of scoring rules, which aim to reward forecasters such that they are incentivized to report their true probabilities. Sadly, these often have the side effect of incentivizing people not to collaborate or share information. This can be fixed by using more capital-intensive scoring rules that incentivize collaboration, like <a href="https://github.com/SamotsvetyForecasting/optimal-scoring">these ones</a>, or by grouping forecasters into teams such that they are incentivized to share information within a team.</p>
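<p>To make the “incentivized to report their true probabilities” part concrete, here is a small numerical check, in Python, that the Brier score is a proper scoring rule: if your true belief is 70%, reporting 70% minimizes your expected penalty. This is my own toy illustration, not the exact scoring rule of any particular platform.</p>

<pre><code># Sketch: the Brier score is a proper scoring rule. If your true belief that
# an event will happen is p, your expected penalty is minimized by reporting p.

def brier(report, outcome):
    """Quadratic penalty; outcome is 1 if the event happened, else 0."""
    return (report - outcome) ** 2

def expected_brier(report, true_belief):
    """Expected penalty if the event happens with probability true_belief."""
    return true_belief * brier(report, 1) + (1 - true_belief) * brier(report, 0)

true_belief = 0.7
candidate_reports = [i / 100 for i in range(101)]
best_report = min(candidate_reports, key=lambda r: expected_brier(r, true_belief))
print(best_report)  # 0.7: honest reporting is optimal
</code></pre>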
<h3>As an aside, here is a casual review of the track record of long-term predictions</h3>
<p>If we review the track record of superforecasters on longer term questions, we find that… there isn’t that much evidence here—remember that the <a href="https://wikiless.nunosempere.com/wiki/Aggregative_Contingent_Estimation_Program?lang=en">ACE program</a> started in 2010. In <em>Superforecasting</em> (2015), Tetlock wrote:</p>
<blockquote><p>Taleb, Kahneman, and I agree there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious—“there will be conflicts”—and the odd lucky hits that are inevitable whenever lots of forecasters make lots of forecasts. These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. In my EPJ research, the accuracy of expert predictions declined toward chance five years out. And yet, this sort of forecasting is common, even within institutions that should know better.</p></blockquote>
<p>However, on p. 33 of <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4377599">Long-Range Subjective-Probability Forecasts of Slow-Motion Variables in World Politics: Exploring Limits on Expert Judgment</a> (2023), we see that experts predicting “slow-motion variables” 25 years into the future attain a Brier score of 0.07, which isn’t terrible.</p>
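<p>For readers unfamiliar with the scale: a Brier score is the mean squared difference between the stated probabilities and the 0/1 outcomes, so lower is better, and always answering 50% scores 0.25. A quick Python sketch, with made-up forecasts rather than the ones from the paper:</p>

<pre><code># Sketch of how a Brier score is computed. The forecasts below are invented
# for illustration; they are not the ones from the paper cited above.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes. Lower is better."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.8, 0.2, 0.1, 0.95]  # hypothetical probabilities
outcomes = [1, 1, 0, 0, 1]              # what actually happened

print(round(brier_score(forecasts, outcomes), 4))  # 0.0205, i.e. quite good
print(round(brier_score([0.5] * 5, outcomes), 4))  # 0.25, the know-nothing baseline
</code></pre>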
<p>Karnofsky, the erstwhile head honcho of Open Philanthropy, <a href="https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/">spins</a> some research by Arb and others as saying that the track record of futurists is “fine”. <a href="https://danluu.com/futurist-predictions/">Here</a> is a more thorough post by Dan Luu, which concludes that:</p>
<blockquote><p>…people who were into “big ideas” who use a few big hammers on every prediction combined with a cocktail party idea level of understanding of the particular subject to explain why a prediction about the subject would fall to the big hammer generally fared poorly, whether or not their favored big ideas were correct. Some examples of “big ideas” would be “environmental doomsday is coming and hyperconservation will pervade everything”, “economic growth will create near-infinite wealth (soon)”, “Moore’s law is supremely important”, “quantum mechanics is supremely important”, etc. Another common trait of poor predictors is lack of anything resembling serious evaluation of past predictive errors, making improving their intuition or methods impossible (unless they do so in secret). Instead, poor predictors often pick a few predictions that were accurate or at least vaguely sounded similar to an accurate prediction and use those to sell their next generation of predictions to others.</p>
<p>By contrast, people who had (relatively) accurate predictions had a deep understanding of the problem and also tended to have a record of learning lessons from past predictive errors. Due to the differences in the data sets between this post and Tetlock’s work, the details are quite different here. The predictors that I found to be relatively accurate had deep domain knowledge and, implicitly, had access to a huge amount of information that they filtered effectively in order to make good predictions. Tetlock was studying people who made predictions about a wide variety of areas that were, in general, outside of their areas of expertise, so what Tetlock found was that people really dug into the data and deeply understood the limitations of the data, which allowed them to make relatively accurate predictions. But, although the details of how people operated are different, at a high-level, the approach of really digging into specific knowledge was the same.</p></blockquote>
<h3>In comparison with other mechanisms for making sense of future AI developments, forecasting does OK.</h3>
<p>Here are some mechanisms that the Effective Altruism community has historically used to try to make sense of possible dangers stemming from future AI developments:</p>
<ul>
<li>Books, like Bostrom’s <em>Superintelligence</em>, which focused on the abstract properties of highly intelligent and capable agents in the limit.</li>
<li><a href="https://www.openphilanthropy.org/research/?q=&focus-area%5B%5D=potential-risks-advanced-ai&content-type%5B%5D=research-reports">Reports</a> by Open Philanthropy. They either try to model AI progress in some detail, like <a href="https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines">example 1</a>, or look at priors on technological development, like <a href="https://www.openphilanthropy.org/research/semi-informative-priors-over-ai-timelines/">example 2</a>.</li>
<li>Mini think tanks, like Rethink Priorities, Epoch or AI Impacts, which produce their own research and reports.</li>
<li>Larger think tanks, like CSET, which produce reports like <a href="https://cset.georgetown.edu/publication/future-indices/">this one</a> on Future Indices.</li>
<li>Online discussion on lesswrong.com, which typically assumes things like: that intelligence gains will be fast and explosive, that we should aim to design a mathematical construction that guarantees safety, that iteration would not be advisable in the face of fast intelligence gains, etc.</li>
<li>Occasionally, theoretical or mathematical arguments or models of risk.</li>
<li>One-off projects, like Drexler’s <a href="https://www.fhi.ox.ac.uk/reframing/">Comprehensive AI Services</a>.</li>
<li>Questions on forecasting platforms, like Metaculus, that try to solidly operationalize possible AI developments and dangers, and ask their forecasters to anticipate when and whether they will happen.</li>
<li>Writeups from forecasting groups, like <a href="https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts">Samotsvety</a></li>
<li>More recently, the Forecasting Research Institute’s <a href="https://forecastingresearch.org/xpt">existential risk tournament/experiment writeup</a>, which has tried to translate geopolitical forecasting mechanisms to predicting AI progress, with mixed success.</li>
<li>Deferring to intellectuals, ideologues, and cheerleaders, like Toby Ord, Yudkowsky or MacAskill.</li>
</ul>
<p>None of these options, as they currently exist, seem great. Forecasting has the hurdles discussed above, but maybe other mechanisms have even worse downsides, particularly the more pundit-like ones. Conversely, forecasting will be worse than deferring to a brilliant theoretical mind that is able to grasp the dynamics and subtleties of future AI development, like perhaps Drexler’s on a good day.</p>
<p>Anyways, you might think that this forecasting thing shows potential. Were you a billionaire, money would not be a limitation for you, so…</p>
<h3>In this situation, here are some strategies of which you might avail yourself</h3>
<h4>A. Accept the Faustian bargain</h4>
<ol>
<li>Make a bunch of short-term and long-term forecasting questions on AI progress</li>
<li>Wait for the short-term forecasting questions to resolve</li>
<li>Weight the forecasts for the long-term questions according to accuracy on the short-term questions</li>
</ol>
<p>This is a Faustian bargain because of the reasons reviewed above, chiefly that short-term forecasting performance is not a guarantee of longer-term forecasting performance. A cheap version of this would be to look at the best short-term forecasters in the AI categories on Metaculus, and report their probabilities on a few AI and existential risk questions, which would be more interpretable than the current opaque “Metaculus prediction”.</p>
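<p>In code, the aggregation step of this bargain might look something like the following Python sketch. The forecasters, their scores, and the 1/(0.01 + Brier) weighting scheme are all invented for illustration; a real version would still have to grapple with the caveat above.</p>

<pre><code># Sketch of step 3 of the bargain: weight each forecaster's long-range
# forecast by their accuracy on already-resolved short-range questions.

def weights_from_brier(brier_scores):
    """Turn Brier scores (lower is better) into normalized weights."""
    raw = {name: 1.0 / (0.01 + score) for name, score in brier_scores.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

# Accuracy on already-resolved short-term AI questions (made up).
short_term_brier = {"alice": 0.08, "bob": 0.15, "carol": 0.30}

# Their probabilities on some long-term question, e.g. "X happens by 2040".
long_term_forecast = {"alice": 0.25, "bob": 0.40, "carol": 0.70}

weights = weights_from_brier(short_term_brier)
aggregate = sum(weights[name] * long_term_forecast[name] for name in weights)
print(round(aggregate, 3))  # about 0.37, pulled toward the more accurate forecasters
</code></pre>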
<p>If you think that your other methods of making sense of what is going on are sufficiently bad, you could choose this and hope for the best? Or, conversely, you could anchor your beliefs on a weighted aggregate of the best short-term forecasters and the most convincing theoretical views. Maybe things will be fine?</p>
<h4>B. Attempt to do a Bayesianism</h4>
<p>Go to the effort of rigorously formulating hypotheses, then keep track of incoming evidence for each hypothesis. If a new hypothesis comes in, try to do some version of <a href="https://nunosempere.com/blog/2023/02/04/just-in-time-bayesianism/">just-in-time Bayesianism</a>, i.e., monkey-patch it after the fact. Once you are specifying your beliefs numerically, you can deploy some cute incentive mechanisms and <a href="https://github.com/SamotsvetyForecasting/optimal-scoring/blob/master/3-amplify-bayesian/amplify-bayesian.pdf">reward people who change your mind</a>.</p>
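<p>As a minimal sketch of what that monkey-patching could look like, here is a posterior over hypotheses that gets updated as usual, and then has a late-arriving hypothesis spliced in by carving out some probability mass and renormalizing. The hypotheses, the numbers, and the 15% carve-out are all invented; this is my own toy rendering, not the formalism in the linked post.</p>

<pre><code># Sketch: track a posterior over hypotheses, then "monkey-patch" in a
# hypothesis that was not in the original set. All numbers are illustrative.

def update(beliefs, likelihoods):
    """One Bayesian update: multiply by likelihoods and renormalize."""
    posterior = {h: beliefs[h] * likelihoods[h] for h in beliefs}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

beliefs = {"fast_takeoff": 0.2, "slow_takeoff": 0.8}

# Some evidence arrives; each hypothesis assigns it a likelihood.
beliefs = update(beliefs, {"fast_takeoff": 0.3, "slow_takeoff": 0.6})

# A hypothesis we had not considered ("no takeoff, another AI winter")
# becomes salient. Monkey-patch: give it some mass and renormalize the rest.
new_mass = 0.15
beliefs = {h: p * (1 - new_mass) for h, p in beliefs.items()}
beliefs["ai_winter"] = new_mass

print(beliefs)  # probabilities still sum to 1
</code></pre>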
<p>Hope that keeping track of hypotheses about the development of AI at least gives you some discipline, and enables you to shed untrue hypotheses or frames a bit earlier than you otherwise would have. Have the discipline to translate the worldviews of various pundits into specific probabilities<sup id="fnref:2"><a href="#fn:2" rel="footnote">2</a></sup>, and listen to them less when their predictions fail to come true. And hope that going to the trouble of doing things this way allows you to anticipate stuff 6 months to 2 years sooner than you would have otherwise, and that it is worth the cost.</p>
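<p>One cheap way of imposing that discipline, sketched below in Python: write down each pundit’s implied probability when they make a claim, and keep a running log score per pundit as claims resolve. The pundits, claims, and numbers are of course invented.</p>

<pre><code>import math

# Sketch: turn pundit claims into explicit probabilities, then keep a running
# log score per pundit as claims resolve. Pundits and claims are invented.

predictions = [
    # (pundit, claim, implied probability, outcome: 1 = happened, 0 = did not)
    ("pundit_a", "AGI within 5 years", 0.9, 0),
    ("pundit_a", "lab X releases a frontier model this year", 0.8, 1),
    ("pundit_b", "compute prices halve", 0.6, 1),
    ("pundit_b", "major AI regulation passes", 0.3, 0),
]

def log_score(p, outcome):
    """Log of the probability assigned to what actually happened; higher is better."""
    return math.log(p if outcome == 1 else 1 - p)

totals = {}
for pundit, _claim, p, outcome in predictions:
    totals[pundit] = totals.get(pundit, 0.0) + log_score(p, outcome)

print(totals)  # listen less to whoever racks up the most negative total
</code></pre>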
<h4>C. Invest in better prediction pipelines as a whole</h4>
<p>Try to build up some more speculative and <a href="https://nunosempere.com/blog/2023/07/19/better-harder-faster-stronger/">formidable</a> type of forecasting that can deal with the hurdles above. Be more explicit about the types of decisions that you want better foresight for, realize that you don’t have the tools you need, and build someone up to be that for you.</p>
<div class="footnotes">
<hr/>
<ol>
<li id="fn:1">
To spell this out more clearly, Kuhn was looking at the structure of scientific revolutions, and he noticed that you have these “paradigm changes” every now and then. To a naïve Bayesian, those paradigm changes are kinda confusing, and shouldn’t have any special status: you should just have hypotheses, and they should rise and fall in likelihood according to Bayes’ rule. But as a Bayesian who knows he has finite compute/memory, you can think of Kuhnian revolutions as encountering a true hypothesis which was outside your previous hypothesis space, and having to recalculate. On this topic, see <a href="https://nunosempere.com/blog/2023/02/04/just-in-time-bayesianism/">Just-in-time Bayesianism</a> or <a href="https://nunosempere.com/blog/2023/03/01/computable-solomonoff/">A computable version of Solomonoff induction</a>.<a href="#fnref:1" rev="footnote">↩</a></li>
<li id="fn:2">
Back in the day, Tetlock received a <a href="https://www.openphilanthropy.org/grants/university-of-pennsylvania-philip-tetlock-on-forecasting/#2-about-the-grant">grant</a> to “systematically convert vague predictions made by prominent pundits into explicit numerical forecasts”, but I haven’t been able to track what happened to it, and I suspect it never happened.<a href="#fnref:2" rev="footnote">↩</a></li>
</ol>
</div>
<p>
<section id='isso-thread'>
<noscript>javascript needs to be activated to view comments.</noscript>
</section>
</p>

@ -2,7 +2,7 @@

Shapley Maximizers is a niche estimation, evaluation and impact auditing consultancy run by myself, Nuño Sempere, but increasingly also with support from collaborators. This page presents our core competencies, consulting rates, description of ideal clients, testimonials, and a few further thoughts.
In short, we are looking to support smallish people and projects who are already producing value, and who want to add clarity and improve their prioritization through estimation, measure and good judgment. You can reach out to to nuno.semperelh@protonmail.com.
In short, we are looking to support people and smallish organizations who are already producing value, and who want to add clarity and improve their prioritization through estimation, measure and good judgment. You can reach out to nuno.semperelh@protonmail.com.

### Core competencies

@ -52,12 +52,12 @@ On occasion, I've enjoyed talking with entrepreneurial individuals about how the
I value getting hired for more hours, because each engagement has some negotiation, preparation and administrative burden. Therefore, I offer a discount when you buy a larger number of hours.

| Size of project | Cost | ~hours | Example |
| --- | --- | --- | --- |
| One-off (discounted) | $100 | 1h | You, an early career person, talk with me for an hour about a career decision you are about to make, about a project you want my input on, etc. |
| One-off | $500 | 2h | You, a titan of industry, talk with me for one about a project you want my input on, before which I spend a few hours of preparation. Or, you have me organize a two-hour forecasting workshop and a one hour chat with your underlings etc. |
| Focused | $2k | 10h | Research that draws on my topics of expertise, where I have already thought about the topic, and just have to write it down. For example, [this Bayesian adjustment to Rethink Priorities](https://nunosempere.com/blog/2023/02/19/bayesian-adjustment-to-rethink-priorities-welfare-range-estimates/) |
| In-depth | $15k | 100h | An [evaluation of an organization](https://forum.effectivealtruism.org/posts/kTLR23dFRB5pJryvZ/external-evaluation-of-the-ea-wiki), many [shallow evaluations](https://forum.nunosempere.com/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations), or an early version of [metaforecast](https://metaforecast.org), two editions of the [forecasting newsletter](https://forecasting.substack.com/) |
| Custom | reach out | 1000h | Large research project, an ambitious report on a novel topic, the current iteration of [metaforecast](https://metaforecast.org) |
| ------- | ---------- | ----- | --- |
| One-off (discounted) | $100 | 1h | You, an early career person, talk with me for an hour about a career decision you are about to make, about a project you want my input on, etc. |
| Small | $500 | 2h | You, a titan of industry, pick my brain about a project you want my input on, before which I spend a few hours of preparation. Or, you have me organize a two-hour forecasting workshop and a one-hour chat with your underlings. |
| Focused | $2k | 10h | Research that draws on my topics of expertise, where I have already thought about the topic, and just have to write it down. For example, [this Bayesian adjustment to Rethink Priorities](https://nunosempere.com/blog/2023/02/19/bayesian-adjustment-to-rethink-priorities-welfare-range-estimates/) |
| Major | $15k | 100h | An [evaluation of an organization](https://forum.effectivealtruism.org/posts/kTLR23dFRB5pJryvZ/external-evaluation-of-the-ea-wiki), many [shallow evaluations](https://forum.nunosempere.com/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations), or an early version of [metaforecast](https://metaforecast.org), two editions of the [forecasting newsletter](https://forecasting.substack.com/) |
| Custom | reach out | 1000h | Large research project, an ambitious report on a novel topic, the current iteration of [metaforecast](https://metaforecast.org) |
### Description of client