Forecasting Newsletter: September 2022.
==============
## Highlights
* PredictIt vs Kalshi vs CFTC saga [continues](https://comments.cftc.gov/Handlers/PdfHandler.ashx?id=34691#?w=sapqmnxoxn)
* Future Fund announces [$1M+ prize](https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize#comments) for arguments which shift their probabilities about AI timelines and dangers
* Dan Luu [looks at the track record of futurists](https://danluu.com/futurist-predictions/)
## Index
* Prediction Markets, Forecasting Platforms &co
* PredictIt, Kalshi & the CFTC
* Metaculus
* Manifold Markets
* Squiggle
* Odds and Ends
* Research
* Shortform
* Longform
Browse past newsletters [here](https://forecasting.substack.com/), or view this newsletter on Substack [here](https://forecasting.substack.com/p/forecasting-newsletter-september-57b). If you have a content suggestion or want to reach out, you can leave a comment or find me on [Twitter](https://twitter.com/NunoSempere).
## Prediction Markets and Forecasting Platforms
### PredictIt, Kalshi & the CFTC
<img src="https://images.nunosempere.com/blog/2022/10/12/forecasting-newsletter-september-2022/simpsons.jpg" class='img-medium-center'>
America, Land of the Free
Previously:
* Kalshi hired a former [CFTC commissioner](https://kalshi.com/blog/former-cftc-commissioner-brian-quintenz-joins-our-board) ([a](https://web.archive.org/web/20220201175613/https://kalshi.com/blog/former-cftc-commissioner-brian-quintenz-joins-our-board)).
* The CFTC [withdrew its no-action letter](https://www.cftc.gov/PressRoom/PressReleases/8567-22) ([a](https://web.archive.org/web/20220805010244/https://www.cftc.gov/PressRoom/PressReleases/8567-22)) from PredictIt.
* Kalshi applied to the CFTC for permission to host a market on which party will control the US Congress after the 2022 mid-term elections. The CFTC [asked the public for comments](https://comments.cftc.gov/PublicComments/CommentList.aspx?id=7311) ([a](https://web.archive.org/web/20220828210656/https://comments.cftc.gov/PublicComments/CommentList.aspx?id=7311)) ([secondary source](https://www.politico.com/news/2022/09/05/voters-betting-elections-trading-00054723), ([a](https://web.archive.org/web/20220924141931/https://www.politico.com/news/2022/09/05/voters-betting-elections-trading-00054723))). 
Since then, on September 9th, [PredictIt sued the CFTC](https://www.jdsupra.com/legalnews/unpredictable-future-of-political-1333136/) ([a](http://web.archive.org/web/20220925015149/https://www.jdsupra.com/legalnews/unpredictable-future-of-political-1333136/)). Richard Hanania explains [why he is joining the lawsuit](https://richardhanania.substack.com/p/why-im-suing-the-federal-government) ([a](http://web.archive.org/web/20221001194707/https://richardhanania.substack.com/p/why-im-suing-the-federal-government)).
Solomon Sia and Pratik Chougule—in collaboration with others like myself—wrote [this extremely thorough letter to the CFTC](https://comments.cftc.gov/Handlers/PdfHandler.ashx?id=34691#?w=sapqmnxoxn) ([a](https://web.archive.org/web/20221012143802/https://comments.cftc.gov/Handlers/PdfHandler.ashx?id=34691#w=sapqmnxoxn)), examining many aspects of the decision. 
There has been [a range of newspaper articles](https://news.google.com/search?q=PredictIt%20CFTC&hl=en-GB&gl=GB&ceid=GB%3Aen) ([a](https://archive.ph/uQEvL)) commenting on the PredictIt spat (e.g., [1](https://www.wsj.com/articles/why-wont-the-cftc-let-you-take-a-position-on-the-election-11582933734), [2](https://slate.com/business/2022/08/predictit-cftc-shut-down-politics-forecasting-gambling.html), [3](https://www.coindesk.com/policy/2021/10/28/the-cftc-vs-the-truth/), [4](https://www.chicagotribune.com/opinion/commentary/ct-opinion-political-prediction-markets-public-discourse-20220906-lfuvziy3fnfkfgw33lzhsno4h4-story.html), etc.), and on Kalshi's. I particularly liked [this article](https://www.chicagotribune.com/opinion/commentary/ct-opinion-political-prediction-markets-public-discourse-20220906-lfuvziy3fnfkfgw33lzhsno4h4-story.html) ([a](http://web.archive.org/web/20220907164742/https://www.chicagotribune.com/opinion/commentary/ct-opinion-political-prediction-markets-public-discourse-20220906-lfuvziy3fnfkfgw33lzhsno4h4-story.html)) in the Chicago Tribune on how prediction markets are an antidote to degraded public discourse.
Kalshi has an [interesting newsletter issue](https://www.kalshikit.co/p/obamas-cabinet-used-prediction-markets) ([a](https://web.archive.org/web/20221012110423/https://www.kalshikit.co/p/obamas-cabinet-used-prediction-markets)) in which they briefly report on how the Obama administration used prediction markets for their decision-making. Note that these would have probably been PredictIt's markets.
### Metaculus
Per their newsletter, Metaculus reached 1 million predictions. They have also [reorganized](https://nitter.privacy.com.de/fianxu/status/1569537636917825536) as a [public benefit corporation](https://en.wikipedia.org/wiki/Benefit_corporation) ([a](http://web.archive.org/web/20221001234507/https://en.wikipedia.org/wiki/Benefit_corporation)), i.e., a for-profit entity that aims to pursue some positive impact, as distinct from shareholder value. I think this leaves Metaculus in a better position, and decreases the (already pretty small) chance that Metaculus starts doing some damaging gatekeeping, etc.
Metaculus is also building an AI Forecasting team and hiring for [a number of positions](https://apply.workable.com/metaculus/) ([a](http://web.archive.org/web/20220913093930/https://apply.workable.com/metaculus/)), growing its [12-person team](https://www.metaculus.com/about/) ([a](http://web.archive.org/web/20220925082358/https://www.metaculus.com/about/)), presumably using its [2022 Open Philanthropy grant](https://www.openphilanthropy.org/grants/metaculus-platform-development/) ([a](http://web.archive.org/web/20220929072721/https://www.openphilanthropy.org/grants/metaculus-platform-development/)).
### Manifold Markets
Manifold continued its rapid pace of development, e.g., they added a [Twitch bot](https://manifold.markets/twitch) ([a](http://web.archive.org/web/20221005181649/https://manifold.markets/twitch)) and ran their [first tournaments](https://manifold.markets/tournaments) ([a](https://web.archive.org/web/20221012144555/https://manifold.markets/tournaments)), which I was really glad to see. They have an experimental projects page at [manifold.markets/labs](https://manifold.markets/labs) ([a](http://web.archive.org/web/20221005182149/https://manifold.markets/labs)). They have also added a few reputational features:
> If a resolved market receives enough reports relative to the number of traders, it will be considered a “bad” market. Creators with enough bad markets will have a warning next to their name on any of their markets. This is just a first step towards reputational features which is a highly requested feature.
Manifold Markets removed and deprioritized their [numeric markets](https://news.manifold.markets/p/above-the-fold-updates-and-join-our) ([a](http://web.archive.org/web/20220908215157/https://news.manifold.markets/p/above-the-fold-updates-and-join-our)), citing difficulties users had with them. But from the post, the decision seems like it was evaluated on the wrong grounds: it's not that numeric markets would immediately prove popular and intuitive, it's that experimenting with them is a public good that could unlock value in the medium term.
More generally, as I've been seeing in these past few years, I think that there is a huge attractor of sports and Wall-Street-type bets, and new prediction-market startups tend to flirt with these a bit. I think this is a mistake, because it's hard to differentiate oneself from competitors on the basis of better sports betting: traditional sports betting houses like Betfair in Europe or DraftKings in the US already cater to a similar user base. Instead, my recommendation would be to target virgin communities which existing betting houses don't yet serve.
You can also see their job board [here](https://www.notion.so/Manifold-Markets-Job-Board-e1b932b3bb2c4ec2b5a95865ec8f0f61) ([a](https://web.archive.org/web/20221012093824/https://www.notion.so/Manifold-Markets-Job-Board-e1b932b3bb2c4ec2b5a95865ec8f0f61)).
### Squiggle
[Squiggle](https://www.squiggle-language.com/#code=eNqrVirOyC8PLs3NTSyqVLIqKSpN1QELuaZkluQXQURqARlkDng%3D) is a web-capable language for manipulating probabilities and probability distributions that we at the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/) have been working on. In August, we announced a $1k [Squiggle experimentation prize](https://forum.effectivealtruism.org/posts/ZrWuy2oAxa6Yh3eAw/usd1-000-squiggle-experimentation-challenge), which has now been resolved. Winners are:
* 1st prize of $600 to [Tanae Rao](https://twitter.com/tanaerao?lang=en-GB) for [Adding Quantified Uncertainty to GiveWell's Cost Effectiveness Analysis of the Against Malaria Foundation](https://forum.effectivealtruism.org/posts/4Qdjkf8PatGBsBExK/adding-quantified-uncertainty-to-givewell-s-cost)
* 2nd prize of $300 to [Dan Wahl](https://danwahl.net/) for [CEA LEEP Malawi](https://forum.effectivealtruism.org/posts/BK7ze3FWYu38YbHwo/squiggle-experimentation-challenge-cea-leep-malawi)
* 3rd prize of $100 to [Erich Grunewald](https://www.erichgrunewald.com/posts/how-many-effective-altruist-billionaires-five-years-from-now/) for [How many EA billionaires five years from now?](https://forum.effectivealtruism.org/posts/Ze2Je5GCLBDj3nDzK/how-many-ea-billionaires-five-years-from-now)
Congrats! 
We also announced a larger [$5k challenge to quantify the impact of 80,000 Hours' top career paths](https://forum.effectivealtruism.org/posts/noDYmqoDxYk5TXoNm/usd5k-challenge-to-quantify-the-impact-of-80-000-hours-top). I think that participating in this contest is valuable in itself, but it also has a fairly high expected monetary value: I invite readers to do a quick estimation, e.g., if the contest ends up having 3 to 15 participants, each would receive roughly $300 to $1.6k in expectation.
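As a minimal sketch, that back-of-the-envelope estimate can itself be written in Squiggle. The 3-to-15 participant range and the $5k prize pool come from the paragraph above; treating the range as a 90% confidence interval is my own modeling choice, not part of the contest announcement:

```
// Rough estimate of the payout per participant, assuming the $5k prize pool
// ends up being split among 3 to 15 participants.
participants = 3 to 15   // distribution with a 90% confidence interval of [3, 15]
prizePool = 5000         // total prize money, in USD
prizePool / participants // implied payout per participant, roughly $300 to $1.6k
```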
I also wrote two posts introducing Squiggle: [Simple estimation examples in Squiggle](https://forum.effectivealtruism.org/posts/vh3YvCKnCBp6jDDFd/simple-estimation-examples-in-squiggle) and a follow-up, [Five slightly more hardcore Squiggle models](https://forum.effectivealtruism.org/posts/BDXnNdBm6jwj6o5nc/five-slightly-more-hardcore-squiggle-models).
### Odds and Ends
The FTX Future Fund announces a [$1M+ prize](https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize) ([a](https://web.archive.org/web/20221002051012/https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize)) for arguments that shift their probabilities around AGI timelines and dangers.
Friend of the newsletter Walter Frick has started a [newsletter](https://nonrival.pub/) ([a](https://web.archive.org/web/20221005181423/https://nonrival.pub/)) that combines analysis of a newsworthy topic with an invitation and a prompt for readers to forecast on a related event. The newsletter then reports readers' forecasts and resolves them when the time comes. Readers might remember Walter from [his excellent coverage of the shutdown of Facebook's Forecast platform](https://qz.com/2069284/facebook-is-shutting-down-its-experimental-app-forecast/) ([a](https://web.archive.org/web/20220730061335/https://qz.com/2069284/facebook-is-shutting-down-its-experimental-app-forecast/)) at Quartz.
The [Autocast competition](https://forecasting.mlsafety.org/) ([a](http://web.archive.org/web/20221011173753/https://forecasting.mlsafety.org/)) offers $625k in prizes for improving the forecasting abilities of machine learning models. This builds on the [Autocast](https://arxiv.org/abs/2206.15474) ([a](http://web.archive.org/web/20220914001702/https://arxiv.org/abs/2206.15474)) paper. It might be that the contest has a connection to AI safety, but I'm not really seeing it. The deadline to submit results for the warmup round is February 10th.
Adam Sherman reports on his frustrations with the [UMA project](https://twitter.com/Squee451/status/1579647834957451264) ([a](https://archive.org/details/uma-unreliable-market-assumption-protocol)). These rhyme somewhat with previous complaints about [Kleros](https://deepfivalue.substack.com/p/the-kleros-experiment-has-failed) ([a](https://web.archive.org/web/20220701003955/https://deepfivalue.substack.com/p/the-kleros-experiment-has-failed)). Abstracting away from the specifics, the UMA oracle is a [Keynesian Beauty Contest](https://en.wikipedia.org/wiki/Keynesian_beauty_contest), meaning that consensus is valued over truth. In this case, a powerful but not dictatorial participant announced that he was going to vote one way, and because the protocol rewards people who vote with the consensus, he convinced others to vote with him. My sense is that a Keynesian Beauty Contest might still be a worthy tradeoff for some crypto protocols because of the added decentralization. But if too many of these events happen, the tradeoff might stop being worth it.
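To make that incentive dynamic concrete, here is a toy expected-value sketch in Squiggle; the reward, penalty, and probability figures are made up for illustration and are not UMA's actual parameters:

```
// Toy model: voters who match the final consensus earn a reward, others are slashed.
reward = 1   // made-up payoff for voting with the consensus
slash = -1   // made-up penalty for voting against it
p = 0.9      // made-up probability that the announced bloc's side becomes the consensus
evWithBloc = p * reward + (1 - p) * slash    // 0.8: expected value of voting with the bloc
evAgainstBloc = p * slash + (1 - p) * reward // -0.8: expected value of voting against it
evWithBloc - evAgainstBloc                   // the gap that makes the announcement self-fulfilling
```

Once that gap is positive, each voter's best response is to follow the announcement regardless of what they believe is true, which is the dynamic described above.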
[Quantified Intuitions](https://www.quantifiedintuitions.org/) is an [epistemics training website](https://forum.effectivealtruism.org/posts/W6gGKCm6yEXRW5nJu/quantified-intuitions-an-epistemics-training-website). Readers might be familiar with the [pastcasting](https://www.pastcasting.com/) app, by the same group.
The Social Science prediction platform has [added some large-for-graduate-students forecaster incentives](https://socialscienceprediction.org/forecaster_incentives) ([a](http://web.archive.org/web/20220916011552/https://socialscienceprediction.org/forecaster_incentives)). They are offering $100 per 10 surveys completed—a survey is usually just a set of predictions that will be used in a future paper. I welcome this development. I used to view it as annoying that participation was restricted to graduate students and faculty. But the thought came to mind that restriction to academics is just a socially acceptable—if coarse—way of selecting for intelligence without saying as much.
Reddit has [r/polls/predictions](https://www.reddit.com/r/polls/predictions/) ([a](http://web.archive.org/web/20220709055805/https://www.reddit.com/r/polls/predictions/)), an embryonic implementation of a prediction market tournament inside Reddit. This builds on Reddit's past prediction functionality, as reported [previously](https://forecasting.substack.com/p/forecasting-newsletter-july-2021) ([a](http://web.archive.org/web/20211229170227/https://forecasting.substack.com/p/forecasting-newsletter-july-2021)) in [this newsletter](https://forecasting.substack.com/p/forecasting-newsletter-october-2021) ([a](http://web.archive.org/web/20220217162710/https://forecasting.substack.com/p/forecasting-newsletter-october-2021)). It would be useful to talk to whoever is building this functionality at Reddit. They probably have some different goals, more geared towards being a social media site. But some cross-pollination might still be interesting.
The Swift Centre has an analysis of [Biden's chances in the 2024 election](https://www.swiftcentre.org/can-biden-win-in-2024/) ([a](http://web.archive.org/web/20220916112924/https://www.swiftcentre.org/can-biden-win-in-2024/)). See also some other forecasts on [Metaforecast](https://metaforecast.org/?query=US+president) ([a](https://archive.ph/4n30X#from=https://metaforecast.org/?query=US+president)), e.g., on [Polymarket](https://polymarket.com/market/will-joe-biden-win-the-us-2024-democratic-presidential-nomination) ([a](http://web.archive.org/web/20220128214008/https://polymarket.com/market/will-joe-biden-win-the-us-2024-democratic-presidential-nomination)) or on [Betfair](https://www.betfair.com/exchange/plus/politics/market/1.178176964) ([a](http://web.archive.org/web/20210831231714/https://www.betfair.com/exchange/plus/politics/market/1.178176964)).
[Craze](https://www.ycombinator.com/companies/craze) ([a](https://web.archive.org/web/20221012093558/https://www.ycombinator.com/companies/craze)) is a Y-Combinator-funded company which brings prediction markets to India.
I was surprised to see that famous rapper Nicki Minaj has [partnered](https://maximbet.com/nicki-minaj) ([a](http://web.archive.org/web/20220531210904/https://maximbet.com/nicki-minaj)) with a [sports](https://nitter.privacy.com.de/nickiminaj/status/1531670747399065600) ([a](https://web.archive.org/web/20221012110813/https://nitter.privacy.com.de/nickiminaj/status/1531670747399065600)) betting [site](https://nitter.privacy.com.de/nickiminaj/status/1531670747399065600) ([a](https://web.archive.org/web/20221012110813/https://nitter.privacy.com.de/nickiminaj/status/1531670747399065600)). Curious.
INFER continues to offer small-money incentives for forecasters, to send me [mildly cringy emails](https://i.imgur.com/j0Ar3BH.png) ([a](http://web.archive.org/web/20221012093838/https://i.imgur.com/j0Ar3BH.png)), and to talk about a ["Global AI Race"](https://mailchi.mp/cultivatelabs/cset-foretell-launch-9372521) ([a](http://web.archive.org/web/20221012112754/https://mailchi.mp/cultivatelabs/cset-foretell-launch-9372521)). Still, I'd continue to recommend it for university students, because it's one of the few sites with team functionality.
On Good Judgment Open, [Will Amazon.com begin to accept any cryptocurrency for purchases on the US site before 1 October 2022?](https://www.gjopen.com/questions/2090-will-amazon-com-begin-to-accept-any-cryptocurrency-for-purchases-on-the-us-site-before-1-october-2022) ([a](http://web.archive.org/web/20220529175114/https://www.gjopen.com/questions/2090-will-amazon-com-begin-to-accept-any-cryptocurrency-for-purchases-on-the-us-site-before-1-october-2022)) just resolved negatively. I remember it being at 30% a year ago. Crazy times.
## Research
### Shortform
Nostalgebraist looks at [AI forecasting one year in](https://nostalgebraist.tumblr.com/post/695521414035406848/on-ai-forecasting-one-year-in) ([a](http://web.archive.org/web/20220917144833/https://nostalgebraist.tumblr.com/post/695521414035406848/on-ai-forecasting-one-year-in)) and warns against taking it as a [stylized fact](https://en.wikipedia.org/wiki/Stylized_fact) ([a](http://web.archive.org/web/20220927235855/https://en.wikipedia.org/wiki/Stylized_fact)) that AI progress is going faster than forecasters expected.
[Samotsvety Forecasting](https://samotsvety.org/), my forecasting group, looks at the probability of [various AI catastrophes](https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts) in the future, and at the [risk of a nuclear bomb being used](https://forum.effectivealtruism.org/posts/2nDTrDPZJBEerZGrk/samotsvety-nuclear-risk-update-october-2022) ([a](https://web.archive.org/web/20221012124008/https://forum.effectivealtruism.org/posts/2nDTrDPZJBEerZGrk/samotsvety-nuclear-risk-update-october-2022)) in the coming months (see also a [follow-up](https://forum.effectivealtruism.org/posts/8k9iebTHjdRCmzR5i/overreacting-to-current-events-can-be-very-costly) by Kelsey Piper).
<img src="https://images.nunosempere.com/blog/2022/10/12/forecasting-newsletter-september-2022/russia-nuclear-weapon.png" class='img-medium-center'>
Taken from [Polymarket](https://polymarket.com/market/will-russia-use-a-nuclear-weapon-before-2023). Note that money is worth less in the event of a nuclear war.
Some researchers at the University of Pennsylvania are [looking for forecasters to predict replication outcomes](https://nitter.privacy.com.de/rajtmajer_sarah/status/1573465138300059649) ([a](https://web.archive.org/web/20221012114519/https://nitter.privacy.com.de/rajtmajer_sarah/status/1573465138300059649)). They are paying a $20 base incentive and $25 per market. This is low in absolute terms, but high if you enjoy doing this kind of thing anyway. h/t Ago Lajko.
Richard Hanania argues that [the problem with polling might be unfixable](https://richardhanania.substack.com/p/the-problem-with-polling-might-be/), i.e., that Republican nonresponse bias might be very hard to estimate. I left a comment with some suggestions, but I agree that the situation [looks grim](https://richardhanania.substack.com/p/the-problem-with-polling-might-be/comment/9327296) ([a](http://web.archive.org/web/20220927183755/https://richardhanania.substack.com/p/the-problem-with-polling-might-be/comment/9327296)).
[Two](https://www.lesswrong.com/posts/YQ8H4e7z3q8ngev7J/raising-the-forecasting-waterline-part-1) ([a](http://web.archive.org/web/20220710073545/https://www.lesswrong.com/posts/YQ8H4e7z3q8ngev7J/raising-the-forecasting-waterline-part-1)) [posts](https://www.lesswrong.com/posts/YEKHh5nyqhpE3E4Bm/raising-the-forecasting-waterline-part-2) ([a](http://web.archive.org/web/20220927155721/https://www.lesswrong.com/posts/YEKHh5nyqhpE3E4Bm/raising-the-forecasting-waterline-part-2)) from ten years ago look at the lessons learned by a participant in the IARPA forecasting tournament that led to the Superforecasting book.
### Longform
Dan Luu looks at the track record of futurists and finds it to be generally poor. Readers of this newsletter should [read the post](https://danluu.com/futurist-predictions/) ([a](https://archive.ph/WJEBd#from=https://danluu.com/futurist-predictions/)).
For some background points:
* The AI safety community has been arguing that future artificial intelligence (AI) systems might be so capable as to pose world-ending dangers.
* Open Philanthropy, a large foundation, is giving some weight to AI safety, and has been donating large amounts of money to that cause.
* As part of its decision-making, Open Philanthropy commissioned research from [Arb Research](https://arbresearch.com/) ([a](http://web.archive.org/web/20221011153414/https://arbresearch.com/)), friends of the newsletter, on the [track record of the three biggest science-fiction authors of the 20th century](https://arbresearch.com/files/big_three.pdf) ([a](http://web.archive.org/web/20220711161231/https://arbresearch.com/files/big_three.pdf)) (Asimov, Heinlein, and Clarke).
* Open Philanthropy's CEO later [used that analysis](https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/) ([a](http://web.archive.org/web/20220914130350/https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/)), as well as other in-house research, to justify and explain the organization's investments in AI safety.
In his own analysis of futurists' track record, Dan Luu suggests that this process has some characteristics of a shit show. Here is a long extract from Luu's post, minimally edited for readability:
> We've seen, when evaluating futurists with an eye towards evaluating longtermists, Karnofsky heavily rounds up in the same way Kurzweil and other futurists do, to paint the picture they want to create. 
>
> There's also the matter of his summary of a report on Kurzweil's predictions being incorrect because he didn't notice the author of that report used a methodology that produced nonsense numbers that were favorable to the conclusion that Karnofsky favors. 
>
> It's true that Karnofsky and the reports he cites do the superficial things that the forecasting literature notes is associated with more accurate predictions, like stating probabilities. But for this to work, the probabilities need to come from understanding the data. 
>
> If you take a pile of data, incorrectly interpret it and then round up the interpretation further to support a particular conclusion, throwing a probability on it at the end is not likely to make it accurate. 
>
> Although he doesn't use these words, a key thing Tetlock notes in his work is that people who round things up or down to conform to a particular agenda produce low accuracy predictions. Since Karnofsky's errors and rounding heavily lean in one direction, that seems to be happening here.
>
> We can see this in other analyses as well. Although digging into material other than futurist predictions is outside of the scope of this post, nostalgebraist has done this and he said (in a private communication that he gave me permission to mention) that Karnofsky's summary of [Could Advanced AI Drive Explosive Economic Growth?](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/) is substantially more optimistic about AI timelines than the underlying report in that there's at least one major concern raised in the report that's not brought up as a "con" in Karnofsky's summary.
>
> And nostalgebraist later wrote [this post](https://nostalgebraist.tumblr.com/post/693718279721730048/on-bio-anchors) ([a](http://web.archive.org/web/20221004024842/https://nostalgebraist.tumblr.com/post/693718279721730048/on-bio-anchors)), where he (implicitly) notes that the methodology used in a report he examined in detail is fundamentally not so different than what the futurists we discussed used. There are quite a few things that may make the report appear credible (it's hundreds of pages of research, there's a complex model, etc.), but when it comes down to it, the model boils down to a few simple variables. 
>
> In particular, a huge fraction of the variance of whether or not TAI is likely or not likely comes down to the amount of improvement will occur in terms of hardware cost, particularly FLOPS/$. The output of the model can range from 34% to 88% depending how much improvement we get in FLOPS/$ after 2025. Putting in arbitrarily large FLOPS/$ amounts into the model, i.e., the scenario where infinite computational power is free (since other dimensions, like storage and network aren't in the model, let's assume that FLOPS/$ is a proxy for those as well), only pushes the probability of TAI up to 88%, which I would rate as too pessimistic, although it's hard to have a good intuition about what would actually happen if infinite computational power were on tap for free. 
>
> Conversely, with no performance improvement in computers, the probability of TAI is 34%, which I would rate as overly optimistic without a strong case for it. But I'm just some random person who doesn't work in AI risk and hasn't thought about too much, so your guess on this is as good as mine (and likely better if you're the equivalent of Yegge or Gates and work in the area).
I'm sympathetic to both sides of this. 
On the one hand, I worry that the side concerned about AI safety acts like a machine that predictably surfaces and amplifies arguments in favor of its side, and predictably discounts arguments for the other side. 
On the other hand, I also see Luu's analysis as perhaps too harsh, e.g.:
* not giving partial credit for predictions that are missed by a few years or that only happen in rich countries rather than worldwide
* considering predictions that have a "may" as unfalsifiable, instead of, e.g., assigning them a probability of 50% and looking at the resulting Brier or log score (see the sketch after this list)
* evaluating two propositions connected by an "and" as one failed prediction, instead of one correct and one incorrect prediction
* evaluating predictions about the "twenty-first century" as having already failed
* generally being on the harsh side of things
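For the "may" case, here is a minimal sketch of that scoring-rule alternative in Squiggle; the convention of assigning 50% is the one suggested in the bullet above, not something Luu uses:

```
// Score a hedged "X may happen" prediction by treating it as a 50% forecast.
p = 0.5
brierIfHappened = (1 - p) * (1 - p) // Brier score if the event occurred: 0.25
brierIfNot = (0 - p) * (0 - p)      // Brier score if it did not: also 0.25
[brierIfHappened, brierIfNot]       // a defined, mediocre score either way, rather than "unfalsifiable"
```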
Overall, there seems to be a garden of forking paths, both with regard to the more specific question of how accurate past futurists were and with regard to the more general question of the degree to which it is possible to make predictions about future events, particularly about transformative technologies.
One way to navigate that garden of forking paths would be an [adversarial collaboration](https://en.wikipedia.org/wiki/Adversarial_collaboration) ([a](http://web.archive.org/web/20220725190412/https://en.wikipedia.org/wiki/Adversarial_collaboration)). Funding for this would probably be available, if not from Open Philanthropy itself then from [the FTX Future Fund](https://ftxfuturefund.org/) ([a](http://web.archive.org/web/20221011034322/https://ftxfuturefund.org/)), from [Nonlinear](https://www.super-linear.org/#list2) ([a](https://web.archive.org/web/20221012112602/https://www.super-linear.org/#list2)), or even from [myself](https://nitter.privacy.com.de/NunoSempere). I mention funding because I personally view cold hard cash as an honest signal that some work is truly perceived to be valuable. But one could also choose to carry out an adversarial collaboration pro bono, for the sake of curiosity, etc.
[Price Formation in Field Prediction Markets](https://arxiv.org/abs/2209.08778) is an arxiv preprint which discusses where the accuracy of prediction markets comes from. The two hypotheses it considers are:
1. from averaging the different pieces of information that each participant has
2. from traders who are able to individually do more research than everyone else, and who profit from this.
They use a method I'm not completely convinced by to identify "price-sensitive" traders, whom they equate with informed traders, and they use their dataset to conclude that hypothesis 2 is mostly what's going on. They use data from [Almanis](https://www.almanisprivate.com/) ([a](http://web.archive.org/web/20220202051215/https://www.almanisprivate.com/)), one of the smaller prediction market sites that still have some liquidity.
The paper has some interesting elements. And for all I know, it's better than 99% of the papers in its field. But I'm left with the impression that the topic of research is a bit of a bad fit for academic investigation, because one could get a better idea of the dynamics of prediction markets by listening to the [Star Spangled Gamblers](https://starspangledgamblers.com/) ([a](http://web.archive.org/web/20221001143818/https://starspangledgamblers.com/)) guys.
---
Note to the future: All links are added automatically to the Internet Archive, using this [tool](https://github.com/NunoSempere/longNowForMd) ([a](http://web.archive.org/web/20220711161908/https://github.com/NunoSempere/longNowForMd)). "(a)" for archived links was inspired by [Milan Griffes](https://www.flightfromperfection.com/) ([a](http://web.archive.org/web/20220814131834/https://www.flightfromperfection.com/)), [Andrew Zuckerman](https://www.andzuck.com/) ([a](http://web.archive.org/web/20220316214638/https://www.andzuck.com/)), and [Alexey Guzey](https://guzey.com/) ([a](http://web.archive.org/web/20220901135024/https://guzey.com/)).
---
> — What are you waiting for?
> — I don't know... Something amazing, I guess.
> — Me too, kid
[The Incredibles](https://en.wikipedia.org/wiki/The_Incredibles), 30'50''