# Forecasting Newsletter: August 2020

## Highlights
- 538 releases its [model](https://projects.fivethirtyeight.com/2020-election-forecast/) of the US elections; Trump is predicted to win ~30% of the time.
- A [study](https://link.springer.com/article/10.1007%2Fs10654-020-00669-6) offers an instructive comparison of New York COVID-19 models, finding that for the IHME model, reported death counts fell inside the 95% prediction intervals only 53% of the time.
- The biggest decentralized trial [to date](https://blog.kleros.io/kleros-community-update-july-2020/#case-302-the-largest-decentralized-trial-of-all-time) takes place, with 511 jurors asked to adjudicate a case coming from the Omen prediction market: "Will there be a day with at least 1000 reported corona deaths in the US in the first 14 days of July?"
## Index
- Highlights
- Prediction Markets & Forecasting Platforms
- In The News
- Hard To Categorize
- Long Content
## Prediction Markets & Forecasting Platforms
On [PredictIt](https://predictit.org/), presidential election prices are close to [even odds](https://www.predictit.org/markets/detail/3698), with Biden at 55 cents and Trump at 48.
Good Judgment Inc. continues to provide their [dashboard](https://goodjudgment.io/covid-recovery/), and the difference between the probability superforecasters assign to a Biden win (~75%) and the one implied by [Betfair](https://www.betfair.com/sport/politics) prices (~55%) was enough to make it worthwhile for me to place a small bet. At some point, Good Judgment Inc. and Cultivate Labs started a new platform on the domain [covidimpacts.com](https://www.covidimpacts.com), but forecasts there seem weaker than on Good Judgment Open.
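As a minimal sketch of why that gap was actionable, take the superforecasters' ~75% as one's true probability and Betfair's ~55% as the market's implied probability (both figures approximate, and fees and the time value of money ignored): the bet has positive expected value, and the Kelly criterion bounds a sensible stake size.

```python
# Rough expected value and Kelly stake for backing Biden at an implied ~55%
# when one's own probability is ~75%. Both numbers are approximations from
# the paragraph above; fees and the time value of money are ignored.
p_true = 0.75                 # superforecasters' probability of a Biden win
p_market = 0.55               # probability implied by the Betfair price
decimal_odds = 1 / p_market   # payout per unit staked if the bet wins
b = decimal_odds - 1          # net winnings per unit staked

ev_per_unit = p_true * decimal_odds - 1
kelly = (p_true * b - (1 - p_true)) / b

print(f"Expected value per unit staked: {ev_per_unit:+.2f}")  # ~+0.36
print(f"Full Kelly stake: {kelly:.1%} of bankroll")           # ~44%
```

Full Kelly is notoriously aggressive; a fractional Kelly stake, or simply a small bet as above, is the usual practice.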
[Replication Markets](https://www.replicationmarkets.com/) started their COVID-19 round, and created a page with COVID-19 [resources for forecasters](https://www.replicationmarkets.com/index.php/frequently-asked-questions/resources-for-forecasters/).
Nothing much to say about [Metaculus](https://www.metaculus.com/questions/) this month, but I appreciated their previously existing list of [prediction resources](https://www.metaculus.com/help/prediction-resources/).
[Foretell](https://www.cset-foretell.com) has a [blog](https://www.cset-foretell.com/blog), and hosted a forecasting forum which discussed:
- Metricizing the grand: decomposing and operationalizing big-picture questions into smaller ones, which can then be forecast (a toy example follows this list).
- How operationalizing big-picture questions might also help identify disagreements, which might then be either about the indicators, proxies or subquestions chosen, or about the probabilities assigned to those subquestions.
- How sometimes we can't measure what we care about, or don't care about what we can measure.
- How one might be interested in questions about the future 3 to 7 years from now, but questions about events 3 to 15 months out (which forecasting tournaments can predict better) can still provide useful signposts.
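As a toy illustration of the decomposition point above, a big-picture question can be operationalized as a chain of subquestions whose conditional probabilities multiply into an overall forecast; disagreement can then be localized to a specific link in the chain. All questions and numbers below are invented, not from the forum.

```python
# Toy decomposition of a big-picture question into a chain of subquestions.
# Every question and probability here is invented for illustration.
subquestions = [
    ("Technology X reaches milestone A by 2025", 0.60),
    ("X is deployed at scale, given the milestone", 0.40),
    ("Deployment has the feared effect, given scale", 0.25),
]

p_overall = 1.0
for question, p_conditional in subquestions:
    print(f"{question}: {p_conditional:.0%}")
    p_overall *= p_conditional

print(f"Implied probability of the big-picture outcome: {p_overall:.1%}")  # 6.0%
```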
Meanwhile, Ethereum-based prediction markets such as Omen and Augur are experiencing difficulties because of the rise of, and speculation around, decentralized finance (DeFi). That speculation has driven up gas prices (transaction fees) to the point where making a casual prediction is, for now, too costly.
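For a sense of scale, here is the standard fee arithmetic with illustrative numbers (all three inputs are guesses, not figures from Omen or Augur):

```python
# Back-of-the-envelope Ethereum transaction fee. All numbers are illustrative
# guesses, not measurements from Omen or Augur.
gas_used = 300_000       # plausible gas consumption of a prediction-market trade
gas_price_gwei = 200     # congestion-level gas price (hypothetical)
eth_price_usd = 400      # rough ETH price around August 2020 (hypothetical)

fee_eth = gas_used * gas_price_gwei * 1e-9   # 1 gwei = 1e-9 ETH
fee_usd = fee_eth * eth_price_usd
print(f"Fee: {fee_eth:.3f} ETH ≈ ${fee_usd:.0f}")  # 0.060 ETH ≈ $24
```

At fees like these, a $10 casual prediction costs more in gas than it stakes.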
## In The News
[Forecasting the future of philanthropy](https://www.fastcompany.com/90532945/forecasting-the-future-of-philanthropy). The [American Lebanese Syrian Associated Charities](https://en.wikipedia.org/wiki/American_Lebanese_Syrian_Associated_Charities), the largest healthcare-related charity in the United States, exists to fund the [St. Jude Children's Research Hospital](https://en.wikipedia.org/wiki/St._Jude_Children%27s_Research_Hospital). To do this, it employs aggressive fundraising tactics, which have been adapted throughout the current pandemic.
[Case 302: the Largest Decentralized Trial of All Time](https://blog.kleros.io/kleros-community-update-july-2020/#case-302-the-largest-decentralized-trial-of-all-time). Kleros is a decentralized dispute resolution platform. "In July, Kleros had its largest trial ever where 511 jurors were drawn in the General Court to adjudicate a case coming from the Omen prediction market: Will there be a day with at least 1000 reported Corona deaths in the US in the first 14 days of July?" [Link to the case](https://court.kleros.io/cases/302)
[ExxonMobil Slashing Permian Rig Count, Forecasting Global Oil Glut Extending Well into 2021](https://www.naturalgasintel.com/exxonmobil-slashing-permian-rig-count-forecasting-global-oil-glut-extending-well-into-2021/). My own interpretation is that the gargantuan multinational's decision is an honest signal of an expected extended economic downturn.
> Supply is expected to exceed demand for months, “and we anticipate it will be well into 2021 before the overhang is cleared and we returned to pre-pandemic levels,” Senior Vice President Neil Chapman said Friday during a conference call.
> “Simply put, the demand destruction in the second quarter was unprecedented in the history of modern oil markets. To put it in context, absolute demand fell to levels we haven’t seen in nearly 20 years. We’ve never seen a decline with this magnitude and pace before, even relative to the historic periods of demand volatility following the global financial crisis and as far back as the 1970s oil and energy crisis.”
> Even so, ExxonMobil’s Permian rig count is to be sharply lower than it was a year ago. The company had more than 50 rigs running across its Texas-New Mexico stronghold as of last fall. At the end of June it was down to 30, “and we expect to cut that number by at least half again by the end of this year,” Chapman said.
[Google Cloud AI and Harvard Global Health Institute Collaborate on new COVID-19 forecasting model](https://cloud.google.com/blog/products/ai-machine-learning/google-cloud-is-releasing-the-covid-19-public-forecasts).
[Betting markets](https://smarkets.com/event/40554343/politics/uk/brexit/trade-deals) put [UK-EU trade deal in 2020 at 66%](https://sports.yahoo.com/betting-odds-put-ukeu-trade-deal-in-2020-at-66-095009521.html) (now 44%).
[Experimental flood forecasting system didn't help](https://www.hindustantimes.com/mumbai-news/flood-forecasting-system-didn-t-help/story-mJanM39kxJPOvFma6TeqUM.html) in Mumbai. The system was supposed to provide a three-day advance warning, but didn't.
FiveThirtyEight covers various facets of the US elections: [Biden Is Polling Better Than Clinton At Her Peak](https://fivethirtyeight.com/features/biden-is-polling-better-than-clinton-at-her-peak/), and releases [their model](https://fivethirtyeight.com/features/how-fivethirtyeights-2020-presidential-forecast-works-and-whats-different-because-of-covid-19/), along with some [comments about it](https://fivethirtyeight.com/features/our-election-forecast-didnt-say-what-i-thought-it-would/).
In other news, this newsletter reached 200 subscribers last week.
## Hard To Categorize
[Groundhog Day](https://en.wikipedia.org/wiki/Groundhog_Day) is a tradition in which American crowds pretend to believe that a small rodent has oracular powers.
[Tips](https://politicalpredictionmarkets.com/blog/) for forecasting on PredictIt. These include betting against Trump voters who arrive at PredictIt from Breitbart.
Linch Zhang asks [What are some low-information priors that you find practically useful for thinking about the world?](https://forum.effectivealtruism.org/posts/SBbwzovWbghLJixPn/what-are-some-low-information-priors-that-you-find)
[AstraZeneca looking for a Forecasting Director](https://careers.astrazeneca.com/job/wilmington/forecasting-director-us-renal/7684/16951921) (US-based).
[Genetic Engineering Attribution Challenge](https://www.drivendata.org/competitions/63/genetic-engineering-attribution/).
An NSF-funded tournament looking to compare human forecasters with a random forest ML model from Johns Hopkins at forecasting the probability of success of cancer drug trials. More info [here](https://www.fandm.edu/magazine/magazine-issues/spring-summer-2020/spring-summer-2020-articles/2020/06/10/is-there-a-better-way-to-predict-the-future), and one can sign up [here](https://www.pytho.io/human-forest). I've heard rewards are generous, but they don't seem to be specified on the webpage. Kudos to Joshua Monrad.
Results of an [expert forecasting session](https://twitter.com/juan_cambeiro/status/1291153289879392257) on covid, presented by expert forecaster Juan Cambeiro.
A playlist of [podcasts related to forecasting](https://open.spotify.com/playlist/4LKES4QcFNozmwImjHWrBX?si=twuBPF-fSxejbpMwUToatg). Kudos to Michał Dubrawski.
## Long Content
[A case study in model failure? COVID-19 daily deaths and ICU bed utilization predictions in New York state](https://link.springer.com/article/10.1007%2Fs10654-020-00669-6) and commentary: [Individual model forecasts can be misleading, but together they are useful](https://link.springer.com/article/10.1007/s10654-020-00667-8).
> In this issue, Chin et al. compare the accuracy of four high profile models that, early during the outbreak in the US, aimed to make quantitative predictions about deaths and Intensive Care Unit (ICU) bed utilization in New York. They find that all four models, though different in approach, failed not only to accurately predict the number of deaths and ICU utilization but also to describe uncertainty appropriately, particularly during the critical early phase of the epidemic. While overcoming these methodological challenges is key, Chin et al. also call for systemic advances including improving data quality, evaluating forecasts in real-time before policy use, and developing multi-model approaches.
> But what the model comparison by Chin et al. highlights is an important principle that many in the research community have understood for some time: that no single model should be used by policy makers to respond to a rapidly changing, highly uncertain epidemic, regardless of the institution or modeling group from which it comes. Due to the multiple uncertainties described above, even models using the same underlying data often have results that diverge because they have made different but reasonable assumptions about highly uncertain epidemiological parameters, and/or they use different methods
> ...the rapid deployment of this approach requires pre-existing infrastructure and evaluation systems now and for improved response to future epidemics. Many models that are built to forecast on a scale useful for local decision making are complex, and can take considerable time to build and calibrate
> a group with a history of successful influenza forecasting in the US (Los Alamos National Lab (4)) was able to produce early COVID-19 forecasts and had the best coverage of uncertainty in the Chin et al. analysis (80-100% of observations fell within the 95% prediction interval for most forecasts). In contrast, the new Institute for Health Metrics and Evaluation statistical approach had low reliability; after the latest analyzed revision only 53% of reported death counts fell within the 95% prediction intervals.
> The original IHME model underestimates uncertainty and 45.7% of the predictions (over 1- to 14-step-ahead predictions) made over the period March 24 to March 31 are outside the 95% PIs. In the revised model, for forecasts from April 3 to May 3 the uncertainty bounds are enlarged, and most predictions (74.0%) are within the 95% PIs, which is not surprising given the PIs are in the order of 300 to 2000 daily deaths. Yet, even with this major revision, the claimed nominal coverage of 95% well exceeds the actual coverage. On May 4, the IHME model undergoes another major revision, and the uncertainty is again dramatically reduced with the result that 47.4% of the actual daily deaths fall outside the 95% PIs—well beyond the claimed 5% nominal value.
> the LANL model was the only model that was found to approach the 95% nominal coverage, but unfortunately this model was unavailable at the time Governor Cuomo needed to make major policy decisions in late March 2020.
> Models that are consistently poorly performing should carry less weight in shaping policy considerations. Models may be revised in the process, trying to improve performance. However, improvement of performance against retrospective data offers no guarantee for continued improvement in future predictions. Failed and recast models should not be given much weight in decision making until they have achieved a prospective track record that can instill some trust for their accuracy. Even then, real time evaluation should continue, since a model that performed well for a given period of time may fail to keep up under new circumstances.
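The coverage figures quoted above reduce to a simple check: the share of observed values falling inside a model's 95% prediction intervals. A minimal sketch with synthetic data (not the Chin et al. data), mimicking an overconfident model by forcing some intervals to miss:

```python
import numpy as np

# Empirical coverage of nominal 95% prediction intervals. The data is
# synthetic; every fourth interval is forced to miss its observation,
# mimicking an overconfident model.
rng = np.random.default_rng(0)
observed = rng.poisson(lam=600, size=60)           # fake daily death counts
lower = observed - rng.integers(10, 60, size=60)   # fake 95% PI lower bounds
upper = observed + rng.integers(10, 60, size=60)   # fake 95% PI upper bounds
lower[::4] = observed[::4] + 5                     # force every 4th interval to miss

inside = (observed >= lower) & (observed <= upper)
print(f"Empirical coverage of the nominal 95% PIs: {inside.mean():.0%}")  # 75%, not 95%
```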
[Do Prediction Markets Produce Well-Calibrated Probability Forecasts?](https://academic.oup.com/ej/article-abstract/123/568/491/5079498).
> Abstract: This article presents new theoretical and empirical evidence on the forecasting ability of prediction markets. We develop a model that predicts that the time until expiration of a prediction market should negatively affect the accuracy of prices as a forecasting tool in the direction of a favourite/longshot bias. That is, high-likelihood events are underpriced, and low-likelihood events are overpriced. We confirm this result using a large data set of prediction market transaction prices. Prediction markets are reasonably well calibrated when time to expiration is relatively short, but prices are significantly biased for events farther in the future. When time value of money is considered, the miscalibration can be exploited to earn excess returns only when the trader has a relatively low discount rate.
> We confirm this prediction using a data set of actual prediction market prices from 1,787 markets representing a total of more than 500,000 transactions.
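A minimal sketch of the calibration check behind the abstract: bin transactions by price and compare the average price in each bin with the empirical frequency of "yes" resolutions. The data below is synthetic, with a favourite/longshot bias built in by shrinking prices toward 0.5 relative to the true probabilities:

```python
import numpy as np

# Synthetic calibration table for a prediction market with a built-in
# favourite/longshot bias: prices are shrunk toward 0.5, so favourites
# (high-probability events) look underpriced and longshots overpriced.
rng = np.random.default_rng(1)
true_p = rng.uniform(0.02, 0.98, size=20_000)   # true event probabilities
price = 0.5 + 0.8 * (true_p - 0.5)              # biased market prices
outcome = rng.random(20_000) < true_p           # simulated resolutions

bins = np.linspace(0, 1, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (price >= lo) & (price < hi)
    if mask.any():
        print(f"price {lo:.1f}-{hi:.1f}: mean price {price[mask].mean():.2f}, "
              f"resolved yes {outcome[mask].mean():.2f}")
```

A well-calibrated market would show the two columns agreeing; here the resolution frequencies are more extreme than the prices at both ends.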
Paul Christiano on [learning the Prior](https://ai-alignment.com/learning-the-prior-48f61b445c04) and on [better priors as a safety problem](https://ai-alignment.com/better-priors-as-a-safety-problem-24aa1c300710).
A presentation of [radical probabilism](https://www.lesswrong.com/posts/xJyY5QkQvNJpZLJRo/radical-probabilism-1), a theory of probability which relaxes some assumptions of classical Bayesian reasoning.
[Forecasting Thread: AI timelines](https://www.lesswrong.com/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines), which asks for (quantitative) forecasts of when human-machine parity will be reached. Some of the answers seem insane or suspicious, in that they have very narrow tails and sharp spikes, and don't really update on the fact that other people disagree with them; one simple remedy is sketched below.
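One simple way to "update on the fact that other people disagree" is a linear opinion pool: averaging the individual distributions, which fattens the tails relative to any single spiky forecast. A sketch with three invented component forecasts over the year of human-machine parity:

```python
import numpy as np

# Linear opinion pool over AI-timeline forecasts. The three component
# distributions are invented for illustration.
years = np.arange(2020, 2101)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

components = [
    normal_pdf(years, mu=2032, sigma=2),    # a narrow, spiky forecast
    normal_pdf(years, mu=2045, sigma=10),   # a moderate forecast
    normal_pdf(years, mu=2070, sigma=20),   # a wide, late forecast
]
pooled = np.mean(components, axis=0)
pooled /= pooled.sum()                      # renormalize over the year grid

print(f"Pooled P(parity before 2050): {pooled[years < 2050].sum():.0%}")
```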
***
Note to the future: All links are added automatically to the Internet Archive. In case of link rot, go [there](https://archive.org/) and input the dead link.
***
> *We hope that people will pressure each other into operationalizing their [big picture outlooks]. If we have no way of proving you wrong, we have no way of proving you right. We need falsifiable forecasts.*
> Source: Foretell Forecasting Forum. Inexact quote.
***