Whatever happened to forecasting? May 2020 [Draft]

A forecasting digest with a focus on experimental forecasting. You can sign up here. The newsletter itself is experimental, but there will be at least five more iterations.

Index

Prediction Markets & Forecasting Platforms.

Augur: augur.net

Augur is a decentralized prediction market. Here is a fine piece of reporting outlining how it operates and the road ahead.

PredictIt & Election Betting Odds: predictIt.org & electionBettingOdds.com

PredictIt is a prediction platform restricted to US citizens or those who bother using a VPN. This month, they have a badass map of the expected electoral college result in the USA. States are colored according to the market prices:

Some of the predictions I found most interesting follow. The market probabilities can be found below; the engaged reader might want to write down their own probabilities first and then compare.

Some of the ones I find interesting but mispriced are:

Market odds are: 80%, 15%, 69%, 79%, 8%, 2%, 7%, 11%.
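If you do write down your own probabilities, one way to compare once these markets resolve is the Brier score (the mean squared difference between forecasts and outcomes; lower is better). Here is a minimal Python sketch; the "mine" and "outcomes" values are made-up placeholders, not real data.

# Market odds from above, expressed as probabilities.
market = [0.80, 0.15, 0.69, 0.79, 0.08, 0.02, 0.07, 0.11]
mine = [0.70, 0.10, 0.60, 0.90, 0.05, 0.02, 0.10, 0.20]  # placeholder: your own probabilities
outcomes = [1, 0, 1, 1, 0, 0, 0, 0]                      # placeholder: eventual resolutions

def brier(forecasts, outcomes):
    # Mean squared error between probabilistic forecasts and 0/1 outcomes.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

print(brier(market, outcomes), brier(mine, outcomes))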

Further, the following two markets are plain inconsistent:

Election Betting Odds aggregates PredictIt and other such services for the US presidential election. The creators of the webpage have used its visibility to promote ftx.com, another platform in the area. They also have an election map.

Replication Markets: replicationmarkets.com

Replication Markets is a project in which volunteer forecasters try to predict whether a given study's results would hold up in a high-powered replication. Rewards are monetary, but only given out to the top N forecasters, and the markets suffer from sometimes being dull.

The first week of each round is a survey round, which has some aspects of a Keynesian beauty contest, because what is being forecast is the result of the second round, not the ground truth. The second round then tries to predict what would happen if the studies were in fact replicated, which a select number of them then are.

Part of me dislikes this setup: there I was, during the first round, forecasting to the best of my ability, when I realized that in some cases I would improve the aggregate and be punished for it, particularly when I had information which I expected other market participants not to have.

At first I thought that, cunningly, the results of the first round were used as priors for the second round, but a programming mistake by the organizers revealed that they use a simple algorithm: claims with p < .001 start with a prior of 80%, claims with p < .01 start at 40%, and claims with p < .05 start at 30%.
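In code, that prior assignment comes down to something like the following sketch; this is my reconstruction from the description above, not their actual implementation.

def replication_prior(p_value):
    # Starting credence that a claim replicates, based only on its reported p-value.
    if p_value < 0.001:
        return 0.80
    elif p_value < 0.01:
        return 0.40
    elif p_value < 0.05:
        return 0.30
    else:
        return None  # claims outside these brackets presumably aren't in the tournament

print(replication_prior(0.0004))  # 0.8
print(replication_prior(0.03))    # 0.3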

Coronavirus Information Markets: coronainformationmarkets.com

For those who want to put their money where their mouth is, a prediction market for coronavirus-related information has popped up.

Making forecasts is tricky, so would-be bettors might be better off pooling their forecasts. As of the middle of this month, the total trading volume sits at around $20k (up from $8k last month), and some questions have already been resolved.
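One simple way of pooling, for those so inclined, is to average forecasts in log-odds space (equivalently, take the geometric mean of the odds), which tends to behave better than a straight average of probabilities. A minimal sketch, with made-up numbers:

from math import log, exp

def pool(probabilities):
    # Average the individual probabilities in log-odds space, then convert back.
    log_odds = [log(p / (1 - p)) for p in probabilities]
    mean = sum(log_odds) / len(log_odds)
    return 1 / (1 + exp(-mean))

print(pool([0.6, 0.75, 0.9]))  # ~0.77, vs 0.75 for a straight average of probabilities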

Foretold: foretold.io & EpidemicForecasting (c.o.i)

Foretold has continued its partnership with Epidemic Forecasting, gathering a team of superforecasters to advise governments around the world which wouldn't otherwise have access to that kind of forecasting capacity. It also shipped a report to a vaccine company analyzing the suitability of different locations for human trials, aggregating more than 1,000 individual forecasts.

Metaculus: metaculus.com

Metaculus is a forecasting platform with an active community and lots of interesting questions. In their May pandemic newsletter, they emphasized having "all the benefits of a betting market but without the actual betting", which I found pretty funny.

Yet consider that if monetary prediction markets were more convenient to use, and less dragged down by regulatory hurdles in the US, they could have been scaled up much more quickly during the pandemic.

Instead, the job fell to Metaculus; this month they've organized a flurry of activities, most notably:

On the negative side, they haven't fixed the way users input their distributions: forecasts are restricted to stacking up to five gaussians on top of each other, which limits expressiveness.
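To make that concrete, a forecast in this format is a weighted mixture of a handful of normal components. The sketch below (not Metaculus's actual code, and with made-up weights, means and standard deviations) shows the kind of density that can be expressed; anything not well approximated by a few bumps, such as very heavy tails, is hard to fit into it.

import numpy as np
from scipy.stats import norm

# Illustrative three-component mixture; Metaculus-style forecasts allow at most five.
weights = [0.5, 0.3, 0.2]
means = [10.0, 25.0, 40.0]
stds = [2.0, 5.0, 3.0]

def mixture_pdf(x):
    # Weighted sum of normal densities.
    return sum(w * norm.pdf(x, loc=m, scale=s) for w, m, s in zip(weights, means, stds))

xs = np.linspace(0, 60, 300)
density = mixture_pdf(xs)  # evaluate the forecast density over a grid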

/(Good Judgement?[^]*)|(Superforecast(ing|er))/gi

The title of this section is a regular expression, so as to be maximally unambiguous.
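For those who don't read regexes fluently, here is a rough Python translation of the title; the original is JavaScript-flavored, where [^] matches any character including newlines, which becomes . under the DOTALL flag, and the /i flag becomes IGNORECASE.

import re

# Rough Python equivalent of the section title's pattern.
pattern = re.compile(r"(Good Judgement?.*)|(Superforecast(ing|er))",
                     re.IGNORECASE | re.DOTALL)

for text in ["Good Judgement Inc.", "Good Judgement Open",
             "superforecasters", "Superforecasting"]:
    print(text, bool(pattern.search(text)))  # all True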

Good Judgement Inc. is the organization which grew out of Tetlock's research on forecasting and out of the Good Judgement Project, which won the IARPA ACE forecasting competition and resulted in the research covered in the Superforecasting book.

Good Judgement Inc. also organizes the Good Judgement Open (gjopen.com), a forecasting platform open to all, with a focus on serious geopolitical questions. They structure their questions into challenges. Of the currently active questions, here is a selection of those I found interesting (odds below):

Odds: 20%, 75%, 44%, 86%, 19%

On the Good Judgement Inc. side, here is a dashboard presenting forecasts related to covid. The ones I found most noteworthy are:

Otherwise, for a recent interview with Tetlock, see this podcast by Tyler Cowen.

CSET: Foretell

The Center for Security and Emerging Technology is looking for forecasters to predict the future in order to better inform policy decisions. For a more elaborate explanation, and to sign up when applications open, see their webpage. CSET was previously funded by the Open Philanthropy Project, and seems to have a legibly impressive leadership lineup.

In the News

Grab bag

Long content

Vale.

Conflicts of interest: Marked as (c.o.i) throughout the text.
Note to the future: All links are added automatically to the Internet Archive. In case of link rot, go here.