Updated forecasting newsletter + srf model

This commit is contained in:
Nuno Sempere 2020-10-01 12:27:24 +02:00
parent 5412dd2904
commit 2526c118c8
4 changed files with 170 additions and 0 deletions

View File

@ -4,6 +4,7 @@ A monthly forecasting newsletter with a focus on experimental forecasting. You c
## Past history
- [September 2020](https://nunosempere.github.io/ea/ForecastingNewsletter/September2020)
- [August 2020](https://nunosempere.github.io/ea/ForecastingNewsletter/August2020)
- [July 2020](https://nunosempere.github.io/ea/ForecastingNewsletter/July2020)
- [June 2020](https://nunosempere.github.io/ea/ForecastingNewsletter/June2020)

View File

@ -0,0 +1,168 @@
## Highlights
- Red Cross and Red Crescent societies have been trying out [forecast based financing](https://www.forecast-based-financing.org/our-projects/what-can-go-wrong/), where funds are released before a potential disaster happens based on forecasts thereof.
- Andrew Gelman releases [Information, incentives, and goals in election forecasts](http://www.stat.columbia.edu/~gelman/research/unpublished/forecast_incentives3.pdf); 538's 80% political predictions turn out to have happened [88% of the time](https://projects.fivethirtyeight.com/checking-our-work/).
- Nonprofit Ought organizes a [forecasting thread on existential risk](https://www.lesswrong.com/posts/6x9rJbx9bmGsxXWEj/forecasting-thread-existential-risk-1), where participants display and discuss their probability distributions for existential risk.
## Index
- Highlights
- Prediction Markets & Forecasting Platforms
- In the News
- Negative Examples
- Long Content
- Hard To Categorize
Sign up [here](https://mailchi.mp/18fccca46f83/forecastingnewsletter) or browse past newsletters [here](https://forum.effectivealtruism.org/s/HXtZvHqsKwtAYP6Y7).
## Prediction Markets & Forecasting Platforms
Metaculus updated their [track record page](https://www.metaculus.com/questions/track-record/). You can now look at accuracy over time, at the distribution of Brier scores, and at a calibration graph. They also have a new black swan question: [When will US Metaculus users face an emigration crisis?](https://www.metaculus.com/questions/5287/when-will-america-have-an-emigration-crisis/).
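For readers unfamiliar with the machinery behind that page: the Brier score of a binary forecast is just the squared difference between the stated probability and the 0/1 outcome (lower is better), and a calibration graph groups forecasts into probability buckets and compares the stated probability with the observed frequency of resolution. A minimal sketch with made-up forecasts, not Metaculus's actual pipeline:

```python
# Toy Brier score and calibration table for binary forecasts.
# The (stated probability, resolved outcome) pairs are invented for illustration.
forecasts = [(0.9, 1), (0.8, 1), (0.7, 0), (0.3, 0), (0.2, 0), (0.6, 1), (0.1, 0), (0.85, 1)]

# Brier score: mean squared error between stated probability and outcome.
brier = sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration: bucket forecasts by stated probability, compare with observed frequency.
buckets = {}
for p, o in forecasts:
    buckets.setdefault(round(p, 1), []).append(o)
for stated in sorted(buckets):
    outcomes = buckets[stated]
    print(f"stated ~{stated:.0%}: resolved yes {sum(outcomes) / len(outcomes):.0%} of the time (n={len(outcomes)})")
```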
Good Judgment Open has a [thread](https://www.gjopen.com/questions/1779-are-there-any-forecasting-tips-tricks-and-experiences-you-would-like-to-share-and-or-discuss-with-your-fellow-forecasters) in which forecasters share and discuss tips, tricks and experiences. An account is needed to browse it.
[Augur](https://www.augur.net/blog/amm-para-augur/) announces modifications in response to higher ETH prices. Some unfiltered comments can be found [on reddit](https://www.reddit.com/r/ethfinance/comments/ixhy3j/daily_general_discussion_september_22_2020/g68yra6/?context=3).
An overview of [PlotX](https://blockonomi.com/plotx-guide/), a new decentralized prediction protocol/marketplace. PlotX focuses on non-subjective markets that can be programmatically determined, like the exchange rate between currencies or tokens.
A Replication Markets participant wrote [What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers](https://fantasticanachronism.com/2020/09/11/whats-wrong-with-social-science-and-how-to-fix-it/). See also: [An old long-form introduction to Replication Markets](https://www.adamlgreen.com/replication-markets/).
Georgetown's CSET is attempting to use forecasting to influence policy. A seminar discussing their approach, [Using Crowd Forecasting to Inform Policy with Jason Matheny](https://georgetown.zoom.us/webinar/register/WN_nlXO7sQdSYyYBqhnzkh3hg), is scheduled for the 19th of October. But their current forecasting tournament, foretell, isn't yet very well populated, and the aggregate isn't that good, because participants don't update all that often, leading to sometimes clearly outdated aggregates. Perhaps because of this relative lack of competition, my team is in 2nd place at the time of this writing (with myself at #6, Eli Lifland at #12 and Misha Yagudin at #21). You can join foretell [here](https://www.cset-foretell.com/).
There is a new contest on Hypermind, [The Long Fork Project](https://prod.hypermind.com/longfork/en/welcome.html), which aims to predict the impact of a Trump or a Biden victory in November, with $20k in prize money. H/t to user [ChickCounterfly](https://www.lesswrong.com/posts/hRsRgRcRk3zHLPpqm/forecasting-newsletter-august-2020?commentId=8gAKasi8w5v64QpbM).
The University of Chicago's Effective Altruism group is hosting a forecasting tournament between all interested EA college groups, starting October 12th, 2020. More details [here](https://forum.effectivealtruism.org/posts/rePMmgXLwdSuk5Edg/ea-uni-group-forecasting-tournament).
## In the News
News media sensationalizes essentially random fluctuations in US election odds caused by big bettors entering prediction markets such as Betfair, where bets on the order of $50k can visibly alter the market price. Simultaneously, polls/models and prediction market odds have diverged, because a substantial fraction of bettors lend credence to the thesis that polls will be biased as they were in previous elections, even though polling firms seem to have improved their methods.
- [Trump overtakes Biden as favorite to win in November: Betfair Exchange](https://www.reuters.com/article/us-usa-elections-bets-idUSKBN25T1L6)
- [US Election: Polls defy Trump's comeback narrative but will the market react?](https://betting.betfair.com/politics/us-politics/us-election-tips-and-odds-polls-defy-trumps-comeback-narrative-but-will-the-market-react-030920-171.html)
- [Betting Markets Swing Toward Trump, Forecasting Tightening Race](https://www.forbes.com/sites/jackbrewster/2020/09/02/betting-markets-swing-toward-trump-forecasting-tightening-race/#22fafa8b6bfe)
- [Biden leads in the polls, but betters are taking a gamble on Trump](https://www.foxnews.com/politics/biden-leads-polls-betters-gamble-trump)
- [UK Bookmaker Betfair Shortens Joe Biden 2020 Odds After Bettor Wagers $67K](https://www.casino.org/news/uk-bookmaker-betfair-shortens-joe-biden-2020-odds/)
- [Avoid The Monster Trump Gamble - The Fundamental Numbers Haven't Changed](http://politicalgambler.com/avoid-the-monster-trump-gamble-the-fundamental-numbers-havent-changed/)
Red Cross and Red Crescent societies have been trying out forecast based financing. The idea is to create forecasts and early warning indicators for some negative outcome, such as a flood, using weather forecasts, satellite imagery, climate models, etc., and then to release funds automatically if the forecast reaches a given threshold, so that the funds can be put to work before the disaster happens, in a faster, more automatic and more efficient manner. Their goals and modus operandi might resonate with the Effective Altruism community:
> "In the precious window of time between a forecast and a potential disaster, FbF releases resources to take early action. Ultimately, we hope this early action will be more **effective at reducing suffering**, compared to waiting until the disaster happens and then doing only disaster response. For example, in Bangladesh, people who received a forecast-based cash transfer were less malnourished during a flood in 2017." (bold not mine)
- Here is the "what can go wrong" section of their [slick yet difficult to navigate webpage](https://www.forecast-based-financing.org/our-projects/what-can-go-wrong/), and an introductory [video](https://www.youtube.com/watch?v=FcuKUBihHVI).
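The trigger mechanism itself is simple to state: agree on a forecast threshold and a set of early actions in advance, and release the pre-allocated funds as soon as the threshold is crossed. A minimal sketch of that logic, with numbers invented for illustration (real FbF early action protocols are considerably more involved):

```python
# Toy sketch of a forecast based financing trigger: pre-allocated funds are
# released *before* the disaster, once the forecast crosses a pre-agreed threshold.
# The threshold and budget below are invented for illustration.
TRIGGER_PROBABILITY = 0.5      # e.g., P(severe flood within 10 days)
EARLY_ACTION_BUDGET = 250_000  # pre-allocated funds, in USD

def funds_released(flood_probability: float) -> float:
    """Return the amount released given the current flood forecast."""
    if flood_probability >= TRIGGER_PROBABILITY:
        return EARLY_ACTION_BUDGET  # cash transfers, evacuation support, etc.
    return 0.0

for p in (0.2, 0.45, 0.62):
    print(f"forecast {p:.0%} -> released ${funds_released(p):,.0f}")
```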
[Prediction Markets' Time Has Come, but They Aren't Ready for It](https://www.coindesk.com/prediction-markets-election). Prediction markets could have been useful for predicting the spread of the pandemic (see: coronainformationmarkets.com), or for informing presidential election consequences (see: Hypermind above), but their relatively small size makes them less informative. Blockchain-based prediction technologies, like Augur, Gnosis or Omen, could have helped bypass US regulatory hurdles (which ban many kinds of gambling), but the recent increase in transaction fees means that "everything below a $1,000 bet is basically economically unfeasible".
Floods in India and Bangladesh:
- [Time to develop a reliable flood forecasting model (for Bangladesh)](https://www.thedailystar.net/opinion/news/time-develop-reliable-flood-forecasting-model-1952061)
> This year, flood started somewhat earlier than usual. The Brahmaputra water crossed the danger level (DL) on June 28, subsided after a week, and then crossed the DL again on July 13 and continued for 26 days. It inundated over 30 percent of the country
- [Google's AI Flood Forecasting Initiative now expanded to all parts of India](https://www.timesnownews.com/technology-science/article/googles-ai-flood-forecasting-initiative-now-expanded-to-all-parts-of-india-heres-how-it-helps/646340); [Google bolsters its A.I.-enabled flood alerts for India and Bangladesh](https://fortune.com/2020/09/01/google-ai-flood-alerts-india-bangladesh/)
> “One assumption that was presumed to be true in hydrology is that you cannot generalize across water basins,” Nevo said. “Well, it's not true, as it turns out.” He said Google's A.I.-based forecasting model has performed better on watersheds it has never encountered before in training than classical hydrologic models that were designed specifically for that river basin.
[The many tribes of 2020 election worriers: An ethnographic report](https://www.washingtonpost.com/outlook/2020/09/01/many-tribes-2020-election-worriers-an-ethnographic-report/) by the Washington Post.
Electricity time series demand and supply forecasting startup [raises $8 million](https://news.crunchbase.com/news/myst-ai-closes-6m-series-a-to-forecast-energy-demand-supply/). I keep seeing this kind of announcement; doing forecasting well in an underforecasted domain seems to be somewhat profitable right now, and it's not like there is an absence of domains to which forecasting can be applied. This might be a good idea for an earning-to-give startup.
[NSF and NASA partner to address space weather research and forecasting](https://www.nsf.gov/news/special_reports/announcements/090120.01.jsp). Together, NSF and NASA are investing over $17 million in six three-year awards, each of which contributes to key research that can expand the nation's space weather prediction capabilities.
In its monthly report, OPEC said it expects the pandemic to reduce demand by 9.5 million barrels a day, forecasting a fall in demand of 9.5% from last year, [reports the Wall Street Journal](https://www.wsj.com/articles/opec-deepens-forecast-for-decline-in-global-oil-demand-11600083622).
Some [criticism](https://www.theblockcrypto.com/post/76453/arca-gnosis-defi-project-call) of Gnosis, a decentralized prediction markets startup, by early investors who want to cash out. [Here](https://www.ar.ca/blog/understanding-arcas-request-for-change-at-gnosis) is a blog post by said early investors; they claim that "Gnosis took out what was in effect a 3+ year interest-free loan from token holders and failed to deliver the products laid out in its fundraising whitepaper, quintupled the size of its balance sheet due simply to positive price fluctuations in ETH, and then launched products that accrue value only to Gnosis management."
[What a study of video games can tell us about being better decision makers](https://qz.com/1899461/how-individuals-and-companies-can-get-better-at-making-decisions/) ($), a frustratingly well-paywalled, yet exhaustive and informative overview of IARPA's FOCUS tournament:
> To study what makes someone good at thinking about counterfactuals, the intelligence community decided to study the ability to forecast the outcomes of simulations. A simulation is a computer program that can be run again and again, under different conditions: essentially, rerunning history. In a simulated world, the researchers could know the effect a particular decision or intervention would have. They would show teams of analysts the outcome of one run of the simulation and then ask them to predict what would have happened if some key variable had been changed.
## Negative Examples
[Why Donald Trump Isn't A Real Candidate, In One Chart](https://fivethirtyeight.com/features/why-donald-trump-isnt-a-real-candidate-in-one-chart/), wrote 538 in 2015.
> For this reason alone, Trump has a better chance of cameoing in another “Home Alone” movie with Macaulay Culkin — or playing in the NBA Finals — than winning the Republican nomination.
[Travel CFOs Hesitant on Forecasts as Pandemic Fogs Outlook](https://www.airbus.com/newsroom/press-releases/en/2020/09/airbus-reveals-new-zeroemission-concept-aircraft.html), reports the Wall Street Journal.
> "We're basically prevented from saying the word 'forecast' right now because whatever we forecast...it's wrong," said Shannon Okinaka, chief financial officer at Hawaiian Airlines. "So we've started to use the word 'planning scenarios' or 'planning assumptions.'"
## Long Content
Andrew Gelman et al. release [Information, incentives, and goals in election forecasts](http://www.stat.columbia.edu/~gelman/research/unpublished/forecast_incentives3.pdf).
- Neither The Economist's model nor 538's is fully Bayesian. In particular, they are not martingales; that is, their current probability is not the expected value of their future probability.
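Spelled out, the martingale property referred to here is that, for a coherent Bayesian forecaster, today's stated probability should equal the expectation of any future stated probability, conditional on today's information:

$$p_t = \mathbb{E}\left[p_{t+k} \mid \mathcal{I}_t\right] \quad \text{for all } k > 0,$$

so forecast swings that are predictable in advance indicate that the published number is not a genuine conditional expectation.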
> campaign polls are more stable than ever before, and even the relatively small swings that do appear can largely be attributed to differential nonresponse
> Regarding predictions for 2020, the creator of the Fivethirtyeight forecast writes, "we think it's appropriate to make fairly conservative choices *especially* when it comes to the tails of your distributions. Historically this has led 538 to well-calibrated forecasts (our 20%s really mean 20%)" (Silver, 2020b). But conservative prediction can produce a too-wide interval, one that plays it safe by including extra uncertainty. In other words, conservative forecasts should lead to underconfidence: intervals whose coverage is greater than advertised. And, indeed, according to the calibration plot shown by Boice and Wezerek (2019) of Fivethirtyeight's political forecasts, in this domain 20% for them really means 14%, and 80% really means 88%.
[The Literary Digest Poll of 1936](https://en.wikipedia.org/wiki/The_Literary_Digest#Presidential_poll). A poll so bad that it destroyed the magazine.
- Compare the Literary Digest and Gallup polls of 1936 with The New York Times's [model of 2016](https://www.nytimes.com/interactive/2016/upshot/presidential-polls-forecast.html) and [538's 2016 forecast](https://projects.fivethirtyeight.com/2016-election-forecast/#plus), respectively.
> In retrospect, the polling techniques employed by the magazine were to blame. Although it had polled ten million individuals (of whom 2.27 million responded, an astronomical total for any opinion poll), it had surveyed its own readers first, a group with disposable incomes well above the national average of the time (shown in part by their ability to afford a magazine subscription during the depths of the Great Depression), and those two other readily available lists, those of registered automobile owners and that of telephone users, both of which were also wealthier than the average American at the time.
> Research published in 1972 and 1988 concluded that as expected this sampling bias was a factor, but non-response bias was the primary source of the error - that is, people who disliked Roosevelt had strong feelings and were more willing to take the time to mail back a response.
> George Gallup's American Institute of Public Opinion achieved national recognition by correctly predicting the result of the 1936 election, while Gallup also correctly predicted the (quite different) results of the Literary Digest poll to within 1.1%, using a much smaller sample size of just 50,000. Gallup's final poll before the election also predicted Roosevelt would receive 56% of the popular vote: the official tally gave Roosevelt 60.8%.
> This debacle led to a considerable refinement of public opinion polling techniques, and later came to be regarded as ushering in the era of modern scientific public opinion research.
[Feynman in 1985](https://infoproc.blogspot.com/2020/09/feynman-on-ai.html), answering questions about whether machines will ever be more intelligent than humans.
[Why Most Published Research Findings Are False](https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124), back from 2005. The abstract reads:
> There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
[Reference class forecasting](https://en.wikipedia.org/wiki/Reference_class_forecasting). Reference class forecasting or comparison class forecasting is a method of predicting the future by looking at similar past situations and their outcomes. The theories behind reference class forecasting were developed by Daniel Kahneman and Amos Tversky. The theoretical work helped Kahneman win the Nobel Prize in Economics. Reference class forecasting is so named as it predicts the outcome of a planned action based on actual outcomes in a reference class of similar actions to that being forecast.
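A minimal sketch of the mechanics, with a reference class invented for illustration: instead of building up an inside-view estimate, take the empirical distribution of outcomes among similar past cases and read the forecast (and its uncertainty) off that distribution.

```python
# Toy reference class forecast: predict the cost overrun of a new project from
# the empirical distribution of overruns in similar past projects.
# The reference class below is invented for illustration.
past_overruns = [0.05, 0.10, 0.12, 0.20, 0.25, 0.30, 0.45, 0.60, 0.80, 1.40]

def quantile(data, q):
    """Empirical quantile with linear interpolation."""
    s = sorted(data)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

print(f"median overrun in the reference class: {quantile(past_overruns, 0.5):.0%}")
print(f"80th percentile overrun:               {quantile(past_overruns, 0.8):.0%}")
```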
[Reference class problem](https://en.wikipedia.org/wiki/Reference_class_problem)
> In statistics, the reference class problem is the problem of deciding what class to use when calculating the probability applicable to a particular case.
> For example, to estimate the probability of an aircraft crashing, we could refer to the frequency of crashes among various different sets of aircraft: all aircraft, this make of aircraft, aircraft flown by this company in the last ten years, etc. In this example, the aircraft for which we wish to calculate the probability of a crash is a member of many different classes, in which the frequency of crashes differs. It is not obvious which class we should refer to for this aircraft. In general, any case is a member of very many classes among which the frequency of the attribute of interest differs. The reference class problem discusses which class is the most appropriate to use.
- See also some thoughts on this [here](https://www.lesswrong.com/posts/iyRpsScBa6y4rduEt/model-combination-and-adjustment)
[The Base Rate Book](https://research-doc.credit-suisse.com/docView?language=ENG&format=PDF&source_id=csplusresearchcp&document_id=1065113751&serialid=Z1zrAAt3OJhElh4iwIYc9JHmliTCIARGu75f0b5s4bc%3D) by Credit Suisse.
> This book is the first comprehensive repository for base rates of corporate results. It examines sales growth, gross profitability, operating leverage, operating profit margin, earnings growth, and cash flow return on investment. It also examines stocks that have declined or risen sharply and their subsequent price performance.
> We show how to thoughtfully combine the inside and outside views.
> The analysis provides insight into the rate of regression toward the mean and the mean to which results regress.
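One common way of operationalising "combining the inside and outside views" is to shrink the analyst's inside-view estimate toward the reference-class base rate, with the weight reflecting how much signal the inside view actually carries. A minimal sketch; the weighting scheme and numbers are my own illustration, not taken from the Credit Suisse book:

```python
# Toy sketch: shrink an inside-view sales growth estimate toward the
# base rate of comparable firms (the outside view). Numbers are invented.
def combine_views(inside_estimate: float, base_rate: float, weight_on_inside: float) -> float:
    """Weighted average of the inside view and the outside view (base rate)."""
    return weight_on_inside * inside_estimate + (1 - weight_on_inside) * base_rate

inside = 0.25     # analyst's inside-view forecast: 25% sales growth
base_rate = 0.06  # typical growth among comparable firms
for w in (0.2, 0.5, 0.8):
    print(f"weight on inside view {w:.0%}: combined forecast {combine_views(inside, base_rate, w):.1%}")
```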
## Hard To Categorize
[Improving decisions with market information: an experiment on corporate prediction markets](https://link.springer.com/article/10.1007/s10683-020-09654-y) ([sci-hub](https://sci-hub.se/https://link.springer.com/article/10.1007/s10683-020-09654-y); [archive link](https://web.archive.org/web/20200927114741/https://sci-hub.se/https://link.springer.com/article/10.1007/s10683-020-09654-y))
> We conduct a lab experiment to investigate an important corporate prediction market setting: A manager needs information about the state of a project, which workers have, in order to make a state-dependent decision. Workers can potentially reveal this information by trading in a corporate prediction market. We test two different market designs to determine which provides more information to the manager and leads to better decisions. We also investigate the effect of top-down advice from the market designer to participants on how the prediction market is intended to function. Our results show that the theoretically superior market design performs worse in the lab—in terms of manager decisions—without top-down advice. With advice, manager decisions improve and both market designs perform similarly well, although the theoretically superior market design features less mis-pricing. We provide a behavioral explanation for the failure of the theoretical predictions and discuss implications for corporate prediction markets in the field.
The nonprofit Ought organized a [forecasting thread on existential risk](https://www.lesswrong.com/posts/6x9rJbx9bmGsxXWEj/forecasting-thread-existential-risk-1), where participants display and discuss their probability distributions for existential risk, and outline some [reflections on a previous forecasting thread on AI timelines](https://www.lesswrong.com/posts/6LJjzTo5xEBui8PqE/reflections-on-ai-timelines-forecasting-thread).
A [draft report on AI timelines](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines), [summarized in the comments](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines?commentId=7d4q79ntst6ryaxWD).
Gregory Lewis has a series of posts related to forecasting and uncertainty:
- [Use resilience, instead of imprecision, to communicate uncertainty](https://forum.effectivealtruism.org/posts/m65R6pAAvd99BNEZL/use-resilience-instead-of-imprecision-to-communicate)
- [Challenges in evaluating forecaster performance](https://forum.effectivealtruism.org/posts/JsTpuMecjtaG5KHbb/challenges-in-evaluating-forecaster-performance)
- [Take care with notation for uncertain quantities](https://forum.effectivealtruism.org/posts/E3CjL7SEuq958MDR4/take-care-with-notation-for-uncertain-quantities)
[Estimation of probabilities to get tenure track in academia: baseline and publications during the PhD](https://forum.effectivealtruism.org/posts/3TQTec6FKcMSRBT2T/estimation-of-probabilities-to-get-tenure-track-in-academia).
[How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs](https://forum.effectivealtruism.org/posts/znaZXBY59Ln9SLrne/how-to-think-about-an-uncertain-future-lessons-from-other). The central thesis is:
> Expected value calculations, the favoured approach for EA decision making, are all well and good for comparing evidence backed global health charities, but they are often the wrong tool for dealing with situations of high uncertainty, the domain of EA longtermism.
Discussion by a PredictIt bettor on [how he made money by following Nate Silver's predictions](https://www.reddit.com/r/TheMotte/comments/i6yuis/culture_war_roundup_for_the_week_of_august_10_2020/g1ab8qh/?context=3&sort=best), from r/TheMotte.
Also on r/TheMotte, on [the promises and deficiencies of prediction markets](https://www.reddit.com/r/TheMotte/comments/iseo9j/culture_war_roundup_for_the_week_of_september_14/g59ydcx/?context=3):
> Prediction markets will never be able to predict the unpredictable. Their promise is to be better than all of the available alternatives, by incorporating all available information sources, weighted by experts who are motivated by financial returns.
> So, you'll never have a perfect prediction of who will win the presidential election, but a good prediction market could provide the best possible guess of who will win the presidential election.
> To reach that potential, you'd need to clear away the red tape. It would need to be legal to make bets on the market, fees for making transaction need to be low, participants would need faith in the bet adjudication process, and there can't be limits to the amount you can bet. Signs that you'd succeeded would include sophisticated investors making large bets with a narrow bid/ask spread.
> Unfortunately prediction markets are nowhere close to that ideal today; they're at most "barely legal," bet sizes are limited, transaction fees are high, getting money in or out is clumsy and sketchy, trading volumes are pretty low, and you don't see any hedge funds with "prediction market" desks or strategies. As a result, I put very little stock in political prediction markets today. At best they're populated by dumb money, and at worst they're actively manipulated by campaigns or partisans who are not motivated by direct financial returns.
[Nate Silver](https://twitter.com/NateSilver538/status/1300449268633866241) on a small twitter thread on prediction markets: "Most of what makes political prediction markets dumb is that people assume they have expertise about election forecasting because they a) follow politics and b) understand "data" and "markets". Without more specific domain knowledge, though, that combo is a recipe for stupidity."
- Interestingly, I've recently found out that 538's political predictions are probably [underconfident](https://projects.fivethirtyeight.com/checking-our-work/), i.e., an 80% happens 88% of the time.
[Deloitte](https://www2.deloitte.com/us/en/pages/about-deloitte/articles/press-releases/a-tale-of-two-holiday-seasons-as-a-k-shaped-recovery-model-emerges-consumer-spending-heavily-bifurcated.html) forecasts US holiday season retail sales (but doesn't provide confidence intervals).
[Solar forecast](https://www.nytimes.com/2020/09/15/science/sun-solar-cycle.html). Sun to leave the quietest part of its cycle, but still remain relatively quiet and not produce world-ending coronal mass ejections, the New York Times reports.
The Foresight Institute organizes weekly talks; here is one with Samo Burja on [long-lived institutions](https://www.youtube.com/watch?v=6cCcX0xydmk).
[Some examples of failed technology predictions](https://eandt.theiet.org/content/articles/2020/09/the-eccentric-engineer-the-perils-of-forecasting/).
Last, but not least, Ozzie Gooen on [Multivariate estimation & the Squiggly language](https://www.lesswrong.com/posts/kTzADPE26xh3dyTEu/multivariate-estimation-and-the-squiggly-language):
![](https://lh4.googleusercontent.com/axqy1MImst0AL-JXV3X7NJd9LFCwZljG05zBD7bQAyBppSrBacchtUXB3zvrtC3xwmqpsUPLznXP4Yfwg_uZOmTuaQ6HrcElhN1_ZgNqOHP2UvGbBAw6kDGb0qZPE1mcnAS39aFT)
***
Note to the future: All links are added automatically to the Internet Archive. In case of link rot, go [there](https://archive.org/) and input the dead link.
***
> [Littlewood's law](https://en.wikipedia.org/wiki/Littlewood%27s_law) states that a person can expect to experience events with odds of one in a million (defined by the law as a "miracle") at the rate of about one per month.
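The arithmetic behind the law, under its usual assumptions (a person is alert for about eight hours a day and registers roughly one event per second):

$$8 \ \text{h/day} \times 3600 \ \text{s/h} \approx 28{,}800 \ \text{events/day}, \qquad \frac{10^6 \ \text{events}}{28{,}800 \ \text{events/day}} \approx 35 \ \text{days},$$

i.e., roughly one "one-in-a-million" event per month in expectation.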
***

View File

@ -14,6 +14,7 @@
## Recent.
[Labor, Capital, and the Optimal Growth of Social Movements](https://nunosempere.github.io/ea/MovementBuildingForUtilityMaximizers.pdf) (en)
[Forecasting Newsletter: September 2020](https://nunosempere.github.io/ea/ForecastingNewsletter/September2020)
[Forecasting Newsletter: August 2020](https://nunosempere.github.io/ea/ForecastingNewsletter/August2020) (en)
[Forecasting Newsletter: July 2020](https://nunosempere.github.io/ea/ForecastingNewsletter/July2020) (en)
[Forecasting Newsletter: June 2020](https://nunosempere.github.io/ea/ForecastingNewsletter/June2020) (en)