Forecasting newsletter draft

This commit is contained in:
Nuno Sempere 2020-05-23 20:23:06 +02:00
parent 1aafe7f8ce
commit a83bb5fd1e


## Grab bag
- [SlateStarCodex](https://slatestarcodex.com/2020/04/29/predictions-for-2020/) brings us a hundred more predictions for 2020. Some analysis by Zvi Mowshowitz [here](https://www.lesswrong.com/posts/gSdZjyFSky3d34ySh/slatestarcodex-2020-predictions-buy-sell-hold) and by [Bucky](https://www.lesswrong.com/posts/orSNNCm77LiSEBovx/2020-predictions).
- [FLI Podcast: On Superforecasting with Robert de Neufville](https://futureoflife.org/2020/04/30/on-superforecasting-with-robert-de-neufville/). Leaning towards introductory, broad and superficial; I would have liked to see deeper drilling into some of the points. It still gives pointers to interesting stuff, though, chiefly [The NonProphets Podcast](https://nonprophetspod.wordpress.com/), which looks like it has some more in-depth material. Some quotes:
> So it's not clear to me that our forecasts are necessarily affecting policy. Although it's the kind of thing that gets written up in the news and who knows how much that affects people's opinions, or they talk about it at Davos and maybe those people go back and they change what they're doing.
> I wish it were used better. If I were the advisor to a president, I would say you should create a predictive intelligence unit using superforecasters. Maybe give them access to some classified information, but even using open source information, have them predict probabilities of certain kinds of things and then develop a system for using that in your decision making. But I think we're a fair ways away from that. I don't know of any interest in that in the current administration.
> Now one thing I think is interesting is that often people, they're not interested in my saying, “There's a 78% chance of something happening.” What they want to know is, how did I get there? What are my arguments? That's not unreasonable. I really like thinking in terms of probabilities, but I think it often helps people understand what the mechanism is because it tells them something about the world that might help them make a decision. So I think one thing that maybe can be done is not to treat it as a black box probability, but to have some kind of algorithmic transparency about our thinking because that actually helps people, might be more useful in terms of making decisions than just a number.
- [Forecasting s-curves is hard](https://constancecrozier.com/2020/04/16/forecasting-s-curves-is-hard/): Some sweet visualizations of exactly what the title says.
- [Fashion Trend Forecasting](https://arxiv.org/pdf/2005.03297.pdf) using Instagram and baking preexisting knowledge into NNs.
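A minimal sketch (mine, not from the s-curves post) of why the problem is hard: two logistic curves whose ceilings differ by a factor of two can be made nearly indistinguishable during the early exponential phase, since there $L/(1+e^{-k(t-t_0)}) \approx L e^{k(t-t_0)}$ and doubling $L$ can be offset by shifting $t_0$ by $\ln(2)/k$.

```python
import math

def logistic(t, L, k, t0):
    # Standard logistic curve: ceiling L, growth rate k, midpoint t0.
    return L / (1 + math.exp(-k * (t - t0)))

# Curve 2 has twice the ceiling of curve 1, but its midpoint is shifted
# by ln(2)/k, which cancels the difference in the early exponential phase.
ts_early = [t / 10 for t in range(0, 41)]  # t in [0, 4], well before either midpoint
f1 = [logistic(t, 1.0, 1.0, 10.0) for t in ts_early]
f2 = [logistic(t, 2.0, 1.0, 10.0 + math.log(2)) for t in ts_early]

max_early_gap = max(abs(a - b) for a, b in zip(f1, f2))
final_gap = abs(logistic(30, 2.0, 1.0, 10.0 + math.log(2))
                - logistic(30, 1.0, 1.0, 10.0))

print(max_early_gap)  # tiny: the curves are nearly identical early on
print(final_gap)      # ~1.0: yet their ceilings differ by a factor of two
```

So any noise at all in the early data swamps the signal that distinguishes wildly different eventual ceilings.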
- [How to evaluate 50% predictions](https://www.lesswrong.com/posts/DAc4iuy4D3EiNBt9B/how-to-evaluate-50-predictions). "I commonly hear (sometimes from very smart people) that 50% predictions are meaningless. I think that this is wrong."
- [Named Distributions as Artifacts](https://blog.cerebralab.com/Named%20Distributions%20as%20Artifacts). On how the named distributions we use (the normal distribution, etc.) were selected for being easy to use in pre-computer eras, rather than for being a good ur-prior on distributions for phenomena in this universe.
- [The fallacy of placing confidence in confidence intervals](https://link.springer.com/article/10.3758/s13423-015-0947-8). On how the folk interpretation of confidence intervals can be misguided, as it conflates: a. the long-run probability, before seeing some data, that a procedure will produce an interval which contains the true value, and b. the probability that a particular interval contains the true value, after seeing the data. This is in contrast to Bayesian theory, which can use the information in the data to determine what is reasonable to believe, in light of the model assumptions and prior information. I found their example where different confidence procedures produce 50% confidence intervals which are nested inside each other particularly funny. Some quotes:
> Using the theory of confidence intervals and the support of two examples, we have shown that CIs do not have the properties that are often claimed on their behalf. Confidence interval theory was developed to solve a very constrained problem: how can one construct a procedure that produces intervals containing the true parameter a fixed proportion of the time? Claims that confidence intervals yield an index of precision, that the values within them are plausible, and that the confidence coefficient can be read as a measure of certainty that the interval contains the true value, are all fallacies and unjustified by confidence interval theory.
> “I am not at all sure that the confidence is not a confidence trick. Does it really lead us towards what we need, the chance that in the universe which we are sampling the parameter is within these certain limits? I think it does not. I think we are in the position of knowing that either an improbable event has occurred or the parameter in the population is within the limits. To balance these things we must make an estimate and form a judgment as to the likelihood of the parameter in the universe, that is, a prior probability, the very thing that is supposed to be eliminated.”
> The existence of multiple, contradictory long-run probabilities brings back into focus the confusion between what we know before the experiment with what we know after the experiment. For any of these confidence procedures, we know before the experiment that 50% of future CIs will contain the true value. After observing the results, conditioning on a known property of the data — such as, in this case, the variance of the bubbles — can radically alter our assessment of the probability.
> “You keep using that word. I do not think it means what you think it means.” Íñigo Montoya, The Princess Bride (1987)
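The pre-data vs. post-data distinction can be simulated with a textbook 50% confidence procedure (my own toy setup, simpler than the paper's submersible example): observe two points uniform on $(\theta - 0.5, \theta + 0.5)$ and report the interval between them. Before seeing data, this interval contains $\theta$ exactly 50% of the time; after seeing the data, the interval's width tells you a great deal more.

```python
import random

random.seed(0)

def one_trial(theta=0.0):
    # Two observations uniform on (theta - 0.5, theta + 0.5);
    # the 50% "confidence interval" is just the interval between them.
    x1 = theta + random.uniform(-0.5, 0.5)
    x2 = theta + random.uniform(-0.5, 0.5)
    lo, hi = min(x1, x2), max(x1, x2)
    return hi - lo, lo <= theta <= hi  # (interval width, did it cover theta?)

trials = [one_trial() for _ in range(200_000)]

coverage = sum(hit for _, hit in trials) / len(trials)
wide = [hit for w, hit in trials if w > 0.5]      # both points far apart
narrow = [hit for w, hit in trials if w < 0.2]    # both points close together

print(round(coverage, 3))                 # ~0.5: the advertised long-run coverage
print(round(sum(wide) / len(wide), 3))    # 1.0: a wide interval must straddle theta
print(round(sum(narrow) / len(narrow), 3))  # far below 0.5 for narrow intervals
```

The long-run 50% is an average over intervals whose post-data probability of covering $\theta$ ranges from near 0 (narrow) to exactly 1 (wider than 0.5, which forces the two points onto opposite sides of $\theta$).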
- [Psychology of Intelligence Analysis](https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/), courtesy of the American Central Intelligence Agency, seemed interesting, and I read chapters 4, 5 and 14. Sometimes forecasting looks like reinventing intelligence analysis; from that perspective, I've found this reference work useful. Thanks to EA Discord user @Willow for bringing this work to my attention.
- Chapter 4: Strategies for Analytical Judgment. Discusses and compares the strengths and weaknesses of four tactics: situational analysis (the inside view), applying theory, comparison with historical situations, and immersing oneself in the data. It then brings up several suboptimal tactics for choosing among hypotheses.
- Chapter 5: When does one need more information, and in what forms does new information arrive?
- [The Backwards Arrow of Time of the Coherently Bayesian Statistical Mechanic](https://arxiv.org/abs/cond-mat/0410063): Identifying thermodynamic entropy with the Bayesian uncertainty of an ideal observer leads to a contradiction, because as the observer observes more about the system, they update on this information, which reduces uncertainty, and thus entropy.
- This might be interesting to students in the tradition of E.T. Jaynes: for example, the paper directly conflicts with this LessWrong post: [The Second Law of Thermodynamics, and Engines of Cognition](https://www.lesswrong.com/posts/QkX2bAkwG2EpGvNug/the-second-law-of-thermodynamics-and-engines-of-cognition), part of *Rationality, From AI to Zombies*. The way out might be to postulate that actually, the Bayesian updating process itself would increase entropy, in the form of e.g., the work needed to update bits on a computer. Any applications to Christian lore are left as an exercise for the reader. Otherwise, seeing two bright people cogently convinced of different perspectives does something funny to my probabilities: it pushes them towards 50%, but also increases the expected time I'd have to spend on the topic to move them away from 50%.
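The core tension can be made concrete in a few lines (a toy model of mine, not from the paper): the Shannon entropy of an ideal Bayesian observer's beliefs can only decrease *in expectation* as observations come in, which is exactly what clashes with entropy's thermodynamic arrow.

```python
import math

def entropy(probs):
    # Shannon entropy in nats.
    return -sum(p * math.log(p) for p in probs if p > 0)

# Two hypotheses about a coin's bias (P(heads) = 0.2 or 0.8), equally likely a priori.
prior = {0.2: 0.5, 0.8: 0.5}

def posterior(prior, heads):
    # Bayes' rule: weight each hypothesis by the likelihood of the observation.
    unnorm = {b: p * (b if heads else 1 - b) for b, p in prior.items()}
    z = sum(unnorm.values())
    return {b: u / z for b, u in unnorm.items()}

p_heads = sum(b * p for b, p in prior.items())  # predictive probability of heads

h_prior = entropy(prior.values())
h_expected = (p_heads * entropy(posterior(prior, True).values())
              + (1 - p_heads) * entropy(posterior(prior, False).values()))

print(h_prior, h_expected)  # expected entropy strictly drops after one observation
```

Here the prior entropy is $\ln 2$ and the expected posterior entropy is lower after a single coin flip; identifying that entropy with thermodynamic entropy is what generates the paper's contradiction.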
- [Behavioral Problems of Adhering to a Decision Policy](https://pdfs.semanticscholar.org/7a79/28d5f133e4a274dcaec4d0a207daecde8068.pdf)
> Our judges in this study were eight individuals, carefully selected for their expertise as handicappers. Each judge was presented with a list of 88 variables culled from the past performance charts. He was asked to indicate which five variables out of the 88 he would wish to use when handicapping a race, if all he could have was five variables. He was then asked to indicate which 10, which 20, and which 40 he would use if 10, 20, or 40 were available to him.
> We see that accuracy was as good with five variables as it was with 10, 20, or 40. The flat curve is an average over eight subjects and is somewhat misleading. Three of the eight actually showed a decrease in accuracy with more information, two improved, and three stayed about the same. All of the handicappers became more confident in their judgments as information increased.
The study contains other nuggets, such as:
- An experiment on trying to predict the outcome of a given equation. When the feedback has a margin of error, this confuses respondents.
- "However, the results indicated that subjects often chose one gamble, yet stated a higher selling price for the other gamble"
- "We figured that a comparison between two students along the same dimension should be easier, cognitively, than a 13 comparison between different dimensions, and this ease of use should lead to greater reliance on the common dimension. The data strongly confirmed this hypothesis. Dimensions were weighted more heavily when common than when they were unique attributes. Interrogation of the subjects after the experiment indicated that most did not wish to change their policies by giving more weight to common dimensions and they were unaware that they had done so."
As remedies, they suggest creating a model by eliciting it from the expert, either by having the expert make a large number of judgments and distilling a model from them, or by asking the expert what they think the most important factors are. A third suggested alternative is computer assistance, so that the experiment participants become aware of which factors influence their judgment.
- [Immanuel Kant, on Betting](https://www.econlib.org/archives/2014/07/kant_on_betting.html)
Conflicts of interest: Marked as (c.o.i) throughout the text.
Note to the future: All links are added automatically to the Internet Archive. In case of link rot, go [here](https://archive.org/).