Announcing the Forecasting Innovation Prize
==============
## Motivation
There is already a fair amount of interest in judgmental forecasting within the Effective Altruism community. We think there's a whole lot of good research left to be done.
The valuable research seems to be all over the place. We could use people to speculate on research directions, outline incentive mechanisms, try novel forecasting questions with friends, and outline new questions that deserve forecasts. Some of this requires a fair amount of background knowledge, but a lot doesn't.
The EA and LW communities have a history of using [prizes](https://forum.effectivealtruism.org/posts/GseREh8MEEuLCZayf/nunosempere-s-shortform?commentId=WPStS4qhJS7Mz6KCA) to encourage work in exciting areas. We're going to try one in forecasting research. If this goes well, we'd like to continue and expand this going forward.
## Prize
This prize will total $1000 between multiple recipients, with a minimum first place prize of $500. We will aim for 2-5 recipients in total. The prize will be paid for by the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/) (QURI).
## Rules
To enter, first make a public post online between now and Jan 1, 2021. We encourage you to either post directly or make a link post to either LessWrong or the EA Forum. Second, complete [this form](https://docs.google.com/forms/d/e/1FAIpQLSdDQg31F3v-QYEvSXp7Oahg-qagigN4PPXediYGWYKaPDD3Lg/viewform?usp=sf_link), also before Jan 1, 2021. 
## Research Feedback
If you'd like feedback or would care to discuss possible research projects, please do reach out! To do so, fill out [this form](https://docs.google.com/forms/d/e/1FAIpQLSf13ZDuz1uERMl_Se2VOiKQzPn2AhQTJOkJpbCy7uSFh3cUOg/viewform?usp=sf_link). We're happy to advise at any stage of the process.
## Judges
The judges will be [AlexRJL](https://forum.effectivealtruism.org/users/alexrjl), [Nuño Sempere](https://forum.effectivealtruism.org/users/nunosempere), [Eric Neyman](https://sites.google.com/view/ericneyman/), [Tamay Besiroglu](https://forum.effectivealtruism.org/users/tamay), [Linch Zhang](https://forum.effectivealtruism.org/users/linch) and [Ozzie Gooen](https://forum.effectivealtruism.org/users/oagr). The details of the judging process will vary depending on how many submissions we get. We'll try to select winners for their importance, novelty, and presentation.
## Some Possible Research Areas
Areas of work we would be excited to see explored:
* Operationalizing questions in important domains so that they can be predicted on, e.g., Metaculus. This is currently a significant bottleneck; it's surprisingly difficult to write good questions. Past examples include the [Ragnarök](https://www.metaculus.com/questions/?search=cat:series--ragnarok) and the [Animal Welfare series](https://www.metaculus.com/questions/3068/animal-welfare-series/). A possible suggestion might be to try to come up with forecastable [fire alarms](https://www.lesswrong.com/posts/kvSLHYY5igtixEqMB/fire-alarm-for-agi) for AGI. Tamay Besiroglu has suggested an "S&P 500 but for AI forecasts," i.e., a group of forecasting questions which track something useful for AI (or for other domains).
* Small experiments where you and/or a group of people use forecasting for your own decision making, and write up what you've learned. For example, set up a [Foretold](https://www.foretold.io/) community to decide on which research document you want to write up next. [Predictions as a Substitute for Reviews](https://acesounderglass.com/2020/08/06/predictions-as-a-substitute-for-reviews/) is an example here.
* New forecasting approaches, or forecasting tools being used in new and interesting ways, or applied to new domains. For example, [Amplifying generalist research via forecasting](https://www.lesswrong.com/posts/FeE9nR7RPZrLtsYzD/amplifying-generalist-research-via-forecasting-results-from), or Ought's [AI timelines forecasting thread](https://www.lesswrong.com/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines).
* Estimable or [gears-level](https://www.lesswrong.com/posts/B7P97C27rvHPz3s9B/gears-in-understanding) models of the world that are well positioned to be used in forecasting. For example, a decomposition, informed by one's own expertise, of a difficult question into smaller questions, each of which can then be forecast. Recent work by [CSET-foretell](https://cset.georgetown.edu/wp-content/uploads/CSET-Future-Indices.pdf) would be an example of this.
* Suggestions for or basic implementation of better tooling for forecasters, like a Bayes rule calculator for considering many pieces of evidence, a Laplace law calculator, etc.
* New theoretical schemes which propose solutions to current problems around forecasting. For a recent example, see [Time Travel Markets for Intellectual Accounting](https://www.lesswrong.com/posts/DonsyZjFMgsXnZAFX/time-travel-markets-for-intellectual-accounting).
* Elicitation of forecasts on useful questions from expert forecasters. For example, the [probabilities](https://forum.effectivealtruism.org/posts/Z5KZ2cui8WDjyF6gJ/some-thoughts-on-toby-ord-s-existential-risk-estimates) of the x-risks outlined in [The Precipice](https://theprecipice.com/).
* Overviews of existing research, or thoughts or reflections on existing prediction tournaments and similar. For example, Zvi's posts on prediction markets, [here](https://www.lesswrong.com/posts/a4jRN9nbD79PAhWTB/prediction-markets-when-do-they-work) and [here](https://www.lesswrong.com/posts/k286sEwyuY7SiQjcs/prediction-markets-are-about-being-right).
* Figuring out why some puzzling behavior happens in current prediction markets or forecasting tournaments, like in [Limits of Current US Prediction Markets (PredictIt Case Study)](https://www.lesswrong.com/posts/c3iQryHA4tnAvPZEv/limits-of-current-us-prediction-markets-predictit-case-study). For a new puzzle suggested by Eric Neyman, consider that PredictIt is thought to be limited because it caps trades at $850, charges various fees, etc., which makes it not the sort of market that big, informed players can enter and make efficient. But that fails to explain why markets without such caps, such as FTX, have prices similar to PredictIt's. So, is PredictIt reasonable, or is FTX unreasonable? If the former, why is there such a strong expert consensus against what PredictIt says so often? If the latter, what keeps informed traders from correcting FTX's prices?
* Comments on existing posts can themselves be very valuable. Feel free to submit a list of good comments instead of one single post.
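To give a concrete sense of the tooling bullet above, here is a minimal sketch, in Python, of the two calculators mentioned there: Laplace's rule of succession, and a Bayes rule calculator that combines several pieces of evidence expressed as likelihood ratios. The function names and interface are illustrative assumptions, not a spec for any existing tool.

```python
def laplace_rule(successes: int, trials: int) -> float:
    """Laplace's rule of succession: estimate the probability that the
    next trial succeeds, given `successes` out of `trials` so far."""
    return (successes + 1) / (trials + 2)


def bayes_update(prior: float, likelihood_ratios: list[float]) -> float:
    """Update a prior probability with several (assumed independent)
    pieces of evidence, each given as a likelihood ratio
    P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    odds = prior / (1 - prior)  # convert probability to odds
    for lr in likelihood_ratios:
        odds *= lr  # each piece of evidence multiplies the odds
    return odds / (1 + odds)  # convert odds back to a probability


# An event that has never happened in 10 observed periods:
print(laplace_rule(0, 10))        # 1/12 ≈ 0.083

# A 50% prior, updated on two pieces of evidence twice and three
# times as likely under the hypothesis:
print(bayes_update(0.5, [2, 3]))  # 6/7 ≈ 0.857
```

A real tool would want to handle edge cases (a prior of exactly 0 or 1, correlated evidence), but even this much captures the arithmetic a forecaster would otherwise do by hand.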