Compare commits


26 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Nuno Sempere | 98e2da71a4 | tweak: save progress. | 1 year ago |
| Nuno Sempere | 468426f116 | fix: add fraud estimates | 1 year ago |
| NunoSempere | 87997ee362 | feat: change graph order to accord to cummulative spending. | 1 year ago |
| NunoSempere | 62168b7cb8 | tweak: Order by cummulative area amount. | 1 year ago |
| NunoSempere | eebca66403 | tweak: add longtermist funding. | 1 year ago |
| Nuño Sempere | 9e70219779 | Merge pull request #2 from NunoSempere/local | 1 year ago |
| Nuno Sempere | b9b59c1ee9 | feat: track local changes. | 1 year ago |
| Nuño Sempere | 7480cd1353 | Merge pull request #1 from LeopoldTal/master | 1 year ago |
| Leopold Tal G | 29b2c8d66d | tweak: add alt-text to plots | 1 year ago |
| Leopold Tal G | 3f1cee9c36 | tweak: automatically plot clean-labels version | 1 year ago |
| Leopold Tal G | 7d9bd15f86 | tweak: show categories in consistent order | 1 year ago |
| Nuno Sempere | c90d89b298 | feat: add content | 2 years ago |
| Nuno Sempere | abe2c8d857 | tweak: add twitter previews. | 2 years ago |
| Nuno Sempere | 7ccdaadc60 | feat: add article | 2 years ago |
| Nuno Sempere | c9043736d0 | tweak: fix typo, fix subscribe css | 2 years ago |
| Nuno Sempere | 183867cc34 | tweak: add gossip title. | 2 years ago |
| Nuno Sempere | a863dc101b | feat: tweak headers, add content | 2 years ago |
| Nuno Sempere | 2aaf24896e | feat: add social cards. | 2 years ago |
| Nuno Sempere | 5d6036be6b | tweak: change img src for social card. | 2 years ago |
| Nuno Sempere | cdfac9b871 | feat: ignore sitemap | 2 years ago |
| Nuno Sempere | 15798fe42f | feat: add content | 2 years ago |
| Nuno Sempere | de6cddce79 | feat: add subscribe page. | 2 years ago |
| Nuno Sempere | 4f7d147fe4 | feat: update content | 2 years ago |
| Nuno Sempere | 61ff450abb | feat: add progress | 2 years ago |
| Nuno Sempere | 15bfd62b9b | feat: save progress. | 2 years ago |
| Nuno Sempere | 5d26d639c5 | tweak: images | 2 years ago |

.gitignore

@@ -0,0 +1,3 @@
.secret/
sitemap.gz
sitemap.txt

@@ -0,0 +1,45 @@
// Helpers
ss(arr) = SampleSet.fromList(arr)
// Nuclear Ukraine
russiaUsesNuclearWeaponsInUkraine = ss([0.27, 0.04, 0.02, 0.001, 0.09, 0.08, 0.07]) // <- fill in
// Note that the period of time is left unspecified
// Nuclear NATO
escalationOutsideUkraineGivenUkraineWasNuked = ss([0.15, 0.09, 0.0013, 10^(-5), 0.01, 0.3, 0.05]) // <- fill in
escalationToNATOUnconditional = russiaUsesNuclearWeaponsInUkraine *
  escalationOutsideUkraineGivenUkraineWasNuked
// Nuclear NATO to nuclear London/Washington
bigUKUSCityNukedGivenEscalationOutsideUkraine = ss([0.4, 0.15, 0.9985, 0.05, 0.02, 0.002, 0.5]) // <- fill in
bigUKUSCityUnconditional = escalationToNATOUnconditional *
  bigUKUSCityNukedGivenEscalationOutsideUkraine
// Impact in lost hours
remainingLifeExpectancyInYears = 40 to 60 // <- change
daysInYear = 365
productiveHoursInDay = 6 to 18 // <- change
ableToEscapeBefore = 0.5 // <- fill in
proportionOfPeopleInLondonWhoDie = 0.7
expectedLostHours = bigUKUSCityUnconditional *
  (1 - ableToEscapeBefore) *
  proportionOfPeopleInLondonWhoDie *
  remainingLifeExpectancyInYears *
  daysInYear *
  productiveHoursInDay
// Probably good to also estimate idiosyncratic factors such as
// - Increased or decreased productivity in a city
// - Increased or decreased impact in a city
// - Value assigned to surviving in a world after a nuclear winter
// - ...
// Display
{
  russiaUsesNuclearWeaponsInUkraine: russiaUsesNuclearWeaponsInUkraine,
  escalationToNATOUnconditional: escalationToNATOUnconditional,
  bigUKUSCityUnconditional: bigUKUSCityUnconditional,
  expectedLostHours: expectedLostHours
}

@@ -0,0 +1,15 @@
<form method="post" action="https://listmonk.nunosempere.com/subscription/form" class="listmonk-form">
<div>
<h3>Subscribe</h3>
<input type="hidden" name="nonce" />
<p><input type="email" name="email" required placeholder="E-mail" class="subscribe-input"/></p>
<p><input type="text" name="name" placeholder="Name (optional)" class="subscribe-input"/></ap>
<p>
<input id="82ff8" type="checkbox" name="l" checked value="82ff889c-f9d9-4a45-bf9a-7e2696813021" />
<label for="82ff8" style="font-size: 18px">nunosempere.com</label>
</p>
<p><input type="submit" value="Subscribe" class="subscribe-button"/></p>
</div>
</form>

@@ -12,6 +12,13 @@
<meta charset="UTF-8">
% # Legacy charset declaration for backwards compatibility with non-html5 browsers.
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta property="og:image" content="https://cards.nunosempere.com/api/dynamic-image?endpoint=%($req_path%)">
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="Measure is unceasing" />
<meta name="twitter:description" content="%($pageTitle%)" />
<meta name="twitter:url" content="https://nunosempere.com/" />
<meta name="twitter:image" content="https://cards.nunosempere.com/api/dynamic-image?endpoint=%($req_path%)" />
<meta name="twitter:site" content="@NunoSempere" />
% if(! ~ $#meta_description 0)
% echo ' <meta name="description" content="'$"meta_description'">'
@@ -23,6 +30,11 @@
% cat $h
%($"extraHeaders%)
<script data-isso="//comments.nunosempere.com/" src="//comments.nunosempere.com/js/embed.min.js"></script>
% # To add math
% # <script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
% # <script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
</head>
<body>

@@ -121,7 +121,7 @@ So for example, researcher #4 is saying that the first grant, to research on the
### Elicitation method #4: Discussion and new individual estimates
After holding a discussion round for an hour, participants' estimates shifted to the following[\[1\]](#fnpmfo0q7i4di):
After holding a discussion round for an hour, participants' estimates shifted to the following[^1]:
![](https://i.imgur.com/xleSkdf.png)
@@ -146,11 +146,11 @@ In the table above, for example, the first light red “FALSE” square under
### Estimates between participants after holding a discussion round were mostly in agreement
The final estimates made by the participants after discussion were fairly concordant[\[2\]](#fnqbjzronh3oi):
The final estimates made by the participants after discussion were fairly concordant[^2]:
![](https://i.imgur.com/xleSkdf.png)
For instance, if we look at the first row, the 90% confidence intervals[\[3\]](#fnacizl98aof) of the normalized estimates are 0.1 to 1000, 48 to 90, -16 to 54, 41 to 124, 23 to 233, and 20 to 180. These all overlap! If we visualize these 90% confidence intervals as lognormals or loguniforms, they would look as follows[\[4\]](#fnclvpudp11e): 
For instance, if we look at the first row, the 90% confidence intervals[^3] of the normalized estimates are 0.1 to 1000, 48 to 90, -16 to 54, 41 to 124, 23 to 233, and 20 to 180. These all overlap! If we visualize these 90% confidence intervals as lognormals or loguniforms, they would look as follows[^4]: 
![](https://i.imgur.com/LNqcXxv.png)
@@ -191,7 +191,7 @@ participant_hours + organizer_hours
So for 9 grants, this is 2.6 to 4.9 hours per grant. Perhaps continued investment could bring this down to one hour per grant. I also think that time might scale roughly linearly with the number of grants, because grants can be divided into buckets, and then we can apply the relative value method to each bucket. Then we can compare buckets at a small additional cost—e.g., by comparing the best grants from each bucket.
I'm not actually sure how many grants the EA ecosystem has, but I'm guessing something like 300 to 1000 grants per year[\[5\]](#fn6fjejaxnj27). Given this, it would take half to two FTEs (full-time equivalents) to evaluate all grants, which was lower than I suspected:
I'm not actually sure how many grants the EA ecosystem has, but I'm guessing something like 300 to 1000 grants per year[^5]. Given this, it would take half to two FTEs (full-time equivalents) to evaluate all grants, which was lower than I suspected:
```
hours_per_participant = 2 to 5
@@ -314,24 +314,12 @@ Note that there are various methodological inelegancies:
In part because the initial estimates were not congruent, I procrastinated in hosting the discussion session, which was held around a month after the initial experiment, if I recall correctly. If I were redoing the experiment, I would hold the different parts of this experiment closer together.
1. **[^](#fnrefpmfo0q7i4di)**
Note that in the first case, I am displaying the mean, and in the other, the medians. This is because a) means of very wide distributions are fairly counterintuitive, and on various occasions, I don't think that participants thought much about this, and b) because of a methodological accident, participants provided means in the first case and medians in the second.
Note also that medians are a pretty terrible aggregation method.
2. **[^](#fnrefqbjzronh3oi)**
Note that the distributions aren't necessarily lognormally distributed, hence why the medians may look off. See [this spreadsheet](https://docs.google.com/spreadsheets/d/13inKETvESvcOu8UX2uyM7nlUvUNbECEugt3ec_YqnoY/edit?usp=sharing) for details.
3. **[^](#fnrefacizl98aof)**
80% for researcher #5, because of idiosyncratic reasons.
4. **[^](#fnrefclvpudp11e)**
[^1]: Note that in the first case, I am displaying the mean, and in the other, the medians. This is because a) means of very wide distributions are fairly counterintuitive, and on various occasions, I don't think that participants thought much about this, and b) because of a methodological accident, participants provided means in the first case and medians in the second. Note also that medians are a pretty terrible aggregation method.
[^2]: Note that the distributions aren't necessarily lognormally distributed, hence why the medians may look off. See [this spreadsheet](https://docs.google.com/spreadsheets/d/13inKETvESvcOu8UX2uyM7nlUvUNbECEugt3ec_YqnoY/edit?usp=sharing) for details.
Squiggle model [here](https://www.squiggle-language.com/playground/#code=eNqdkMFOwzAQRH9l5VMiBZQ4BRVLHPmCHDGKAnWTFYkNa5sWRfl34gJqi5Dcdk6r8WqfZ0ZmO7Op%2FDA09MmEI6%2BynfWwQmfo10GNDpu%2BevfYtr2qHKFumWArtPP47B0abWteO1Nb3MI9jFLDrKN3AY9Sf%2FtB434M0s2gBEhGyqqGXjpF4DZGsgyO9w5PClgswRm4y%2Fc7U3YWoiOlYpCr4jZQbhaXUtbGUzRJERgFvxgyFx9j8HzHWP6p66wo%2BBHti5cBw8vy%2Fyinw4yOstT2LfEa14aGpDdtkl8XaQZhKvLXNE0PvvBz50nqKWRm0xfkbtQi).
[^3]: 80% for researcher #5, because of idiosyncratic reasons.
5. **[^](#fnref6fjejaxnj27)**
[^4]: Squiggle model [here](https://www.squiggle-language.com/playground/#code=eNqdkMFOwzAQRH9l5VMiBZQ4BRVLHPmCHDGKAnWTFYkNa5sWRfl34gJqi5Dcdk6r8WqfZ0ZmO7Op%2FDA09MmEI6%2BynfWwQmfo10GNDpu%2BevfYtr2qHKFumWArtPP47B0abWteO1Nb3MI9jFLDrKN3AY9Sf%2FtB434M0s2gBEhGyqqGXjpF4DZGsgyO9w5PClgswRm4y%2Fc7U3YWoiOlYpCr4jZQbhaXUtbGUzRJERgFvxgyFx9j8HzHWP6p66wo%2BBHti5cBw8vy%2Fyinw4yOstT2LfEa14aGpDdtkl8XaQZhKvLXNE0PvvBz50nqKWRm0xfkbtQi).
Open Philanthropy grants for 2021: 216, Long-term future fund grants for 2021: 46, FTX Future fund public grants and regrants: 113 so far, so an expected ~170 by the end of the year. In total this is 375 grants, and I'd wager it will be growing year by year.
[^5]: Open Philanthropy grants for 2021: 216, Long-term future fund grants for 2021: 46, FTX Future fund public grants and regrants: 113 so far, so an expected ~170 by the end of the year. In total this is 375 grants, and I'd wager it will be growing year by year.

@@ -76,3 +76,5 @@ Looking again at the mortality rates:
that consideration probably roughly ~halves the potential adjustment.
The above post was written in response to [GiveWell's Change Our Mind Contest](https://www.givewell.org/research/change-our-mind-contest). But if you are reading this on my blog, you may want to: [Donate to GiveWell](https://secure.givewell.org/).
PS: I've continued working on this issue [here](https://forum.effectivealtruism.org/posts/BDXnNdBm6jwj6o5nc/five-slightly-more-hardcore-squiggle-models#A_sketch_of_a_more_parsimonious_estimate_of_AMF_s_impact), where I give a template Squiggle model.

@@ -0,0 +1,326 @@
Samotsvety Nuclear Risk update October 2022
==============
 After recent events in Ukraine, [Samotsvety](https://samotsvety.org/) convened to update our probabilities of nuclear war. In March 2022, at the beginning of the Ukraine war, we were at ~[0.01%](https://forum.effectivealtruism.org/posts/KRFXjCqqfGQAYirm5/samotsvety-nuclear-risk-forecasts-march-2022) that London would be hit with a nuclear weapon in the next month. Now, we are at ~0.02% for the next 1-3 months, and at 16% that Russia uses any type of nuclear weapon in Ukraine in the next year. 
Expected values are more finicky and more person-dependent than probabilities, and readers are encouraged to enter their own estimates, for which we provide a [template](https://www.squiggle-language.com/playground/#code=eNqVU9tu00AQ%2FZVRnqBqklZAhSz6ADSCiKitlIQKyS8be2yPupk1e2mwqv4740tayFV9sbyzZ86ZMzP72HOFWU3Dcqls1Yu8DXjahEYpeWPXEWLypPT0d6A81zj1ljjvRb3hEK5DolFZCPdWEWPMNjhSc4euu7lDVRp2Y563CLgESfvUh4y07hPHXLMYj%2BAL5eWDUKIlk4LJwNMSgRxozDwEdiUmlBGmcZfWiV9%2Fnt3EjC5RWnkyfBO8oxQ7xW%2F0gGv1OyV13WO6XcVL9szUfHNODEsXJKC0wI%2F6OokZ4DU17DAB3gB354nIGx4KupBueyPoBeXzH%2FPpV%2FJVw9CwjvZIblv8J33T3WH3J1B7e5V6Z268LFXigRi0cR4KE6yTHcGlYLTYmlCGoz8yVq84qcb8S5w7qef9Wd2Ki7POQlIozoUzVZVrQZfw7uJDzKU1aUi8VPK9ph7zlaok%2FaLOPv%2B4ka0WGmemrrjEL5gZu6NH2BSD6UTKbSgFsrdvzcjfnEMftqnfdl075rXhePHVZe3yVcebpt5asxDBCnIjz0ScKu2MjFBei5J3RCkZV3FiZSwJZNJ%2FIzouJAUo1zL0YSz3qJy8BGMhxfXhWVe81lNTkMjf4Rx6nvF%2F6J9KBxRFRzljU6YL9kGYOW%2BxK2N1CirzaOW0XvsVsQQ6jsFgUC9Sc7oiV%2Bq6C491I%2BDog4yOIk5booO7Hx2%2B7ij2bUi092atvblu0XaoRcb81Hv6C9nOCyo%3D). We'd guess that readers would lose 2 to 300 hours by staying in London in the next 1-3 months, but this estimate is at the end of a garden of forking paths, and more pessimistic or optimistic readers might make different methodological choices. We would [recommend leaving if Russia uses a tactical nuclear weapon in Ukraine](https://forum.effectivealtruism.org/posts/2nDTrDPZJBEerZGrk/samotsvety-nuclear-risk-update-october-2022#Estimating_the_value_of_leaving_London_or_other_major_cities).
Since March, we have also added our track record to [samotsvety.org/track-record](https://samotsvety.org/track-record/), which might be of use to readers when considering how much weight to give to our predictions. 
_Update 2022-10-04: Changed our estimates as a result of finding an aggregation error. You can see the previous version of our post_ [_here_](https://web.archive.org/web/20221003195959/https://forum.effectivealtruism.org/posts/2nDTrDPZJBEerZGrk/samotsvety-nuclear-risk-update-october-2022)_. We also noticed that because of the relatively low number of estimates, they are fairly sensitive to each individual forecast, so we are working on incorporating more forecasts._
_Update 2022-10-19: These estimates seem a bit out of date now; see_ [_this comment_](https://forum.effectivealtruism.org/posts/2nDTrDPZJBEerZGrk/samotsvety-nuclear-risk-update-october-2022?commentId=fYGxRsRCfzM4nWvN9#comments) _and_ [_these forecasts from the Swift Institute_](https://www.swiftcentre.org/will-russia-use-a-nuclear-weapon/)_._
## Question decomposition
We have updated our decomposition to the following:
1. What is the probability that Russia will use a nuclear weapon in Ukraine in the next MONTH?
2. Conditional on Russia using a nuclear weapon in Ukraine, what is the probability that nuclear conflict will scale beyond Ukraine in the next MONTH after the initial nuclear weapon use?
3. Conditional on the nuclear conflict expanding to NATO, what is the chance that London would get hit, one MONTH after the first non-Ukraine nuclear bomb is used?
For each of those questions, we also asked forecasters for their yearly probabilities. Following up on previous feedback, we also asked forecasters for their core reasons behind their forecasts, and we'll present those alongside their probabilities.
We also asked a range of questions about counterfactuals:
* Conditional on Russia NOT using a nuclear weapon in Ukraine, what is the probability of a nuclear conflict outside Ukraine in the next MONTH?
* Conditional on Russia NOT using a nuclear weapon in Ukraine, what is the probability that nuclear conflict will scale beyond Ukraine in the next YEAR?
* Conditional on Russia NOT dropping a nuclear weapon in Ukraine in October, what is the probability that London will be hit with a nuclear weapon in October?
As well as a sanity check:
* What is the unconditional probability of London being hit with a nuclear weapon in October?
## Summaries
### Summary tables
_For ≤ 1 month staggering times between each step_
| Event | Conditional on previous step | Unconditional probability |
|-----------------------------------------------------------------------------------------------|------------------------------|---------------------------|
| Russia uses a nuclear weapon in Ukraine in the next month | — | 5.3% |
| Nuclear conflict scales beyond Ukraine in the next month after the initial nuclear weapon use | 2.5% | 0.13% |
| London gets hit, one month after the first non-Ukraine nuclear bomb is used | 14% | 0.02% |
_For ≤ 1 year staggering times between each step_
| Event | Conditional on previous step | Unconditional probability |
|----------------------------------------------------------------------------------------------|------------------------------|---------------------------|
| Russia uses a nuclear weapon in Ukraine in the next year | — | 16% |
| Nuclear conflict scales beyond Ukraine in the next year after the initial nuclear weapon use | 9.6% | 1.6% |
| London gets hit, one year after the first non-Ukraine nuclear bomb is used | 23% | 0.36% |
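As a quick check on these tables (not part of the original post), here is a minimal Squiggle sketch multiplying the conditional steps; the products recover the unconditional column up to rounding:
```
// Monthly chain: each step happens within a month of the previous one
russiaNukesUkraineMonth = 0.053 // 5.3%
escalationBeyondUkraineMonth = russiaNukesUkraineMonth * 0.025 // ~0.13%
londonHitMonth = escalationBeyondUkraineMonth * 0.14 // ~0.02%
// Yearly chain
russiaNukesUkraineYear = 0.16 // 16%
escalationBeyondUkraineYear = russiaNukesUkraineYear * 0.096 // ~1.6%
londonHitYear = escalationBeyondUkraineYear * 0.23 // ~0.36%
{ monthly: londonHitMonth, yearly: londonHitYear }
```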
### Visualizations
This time, we are also experimenting with providing a few visualizations. Their advantage is that they may be more intuitive; the disadvantage is that they may gloss over the shape of our uncertainty, and thus mislead. Reader beware.
For the forecast with one month between each escalation step, we have:
<img src="https://i.imgur.com/UUZ0MSx.png" class='.img-medium-center'>
<img src='https://i.imgur.com/3POz1Zr.png' class='.img-medium-center'>
<img src='https://i.imgur.com/wXigRlV.png' class='.img-medium-center'>
<img src='https://i.imgur.com/W0I3ztj.png' class='.img-medium-center'>
### A forecaster's perspective
In order to understand at what level we are forecasting here, we are providing forecasters' comments. One forecaster provided his comments in a more self-contained form—rather than question by question—so I'm presenting those comments here, lightly edited:
> _In general, nuclear rhetoric has been used extensively before and it seems that it was fairly successful at achieving its intended goals without having to use the weapons (e.g., Germany was hesitant to send weapons to Ukraine). I think such bluffing might be wearing off but Moscow is very good at maintaining ambiguity._
>
> * _Nonetheless, previously stated “red lines” have already been crossed in this war without nuclear escalation. E.g., cross-border raids into Belgorod and strikes against Crimea._
>
> _Being ambiguous about one's willingness to use these weapons is what we have seen in the past and is what we see now. E.g., Zvi_ [_previously summarizes_](https://thezvi.substack.com/p/ukraine-post-12)_, when discussing a recent_ [_Putin speech_](https://en.kremlin.ru/events/president/transcripts/69390)_:_
>
> \> What I heard were several instances of drawing a distinction between Russia and its territorial integrity, and the territories under occupation. He said that the call-ups would be sufficient for the operation. He declared his intention to keep the territory, if he can maintain physical control. Then, he went back to saying that Ukraine was getting weapons that could threaten Russia, explicitly including Crimea as part of Russia but not Donbass, whereas Ukraine's normal forces can obviously already threaten Donbass or Kherson. He framed his threats of nuclear use in response to claimed Western nuclear blackmail and what he says are Western attempts to get Ukraine to invade clearly Russian territories.
>
> _Using nukes doesn't feel like a good choice._
>
> * _Using one on a battlefield can't be all that helpful. The frontline is ~1,000 km; troops are not concentrated. I guess the main benefits can come from “scaring troops,” “being credibly nuclear,” and maybe destroying key infrastructure._
> * _Breaking the nuclear taboo is likely to alienate parties that are ~neutral right now — most of all India. This effect is greater the more damage is done with nukes (e.g., “just testing” vs. using a very small nuke on a battlefield vs. attacking key infrastructure vs. endangering civilians)._
> * _Using nuclear weapons would also alienate various parties in Russia:_
> * _IIRC, most people disapprove of the use of nuclear weapons._
> * _Likewise, elites might be legitimately more scared: it's one thing to be cut off from EU/US: you can still live lavishly in Russia. It's another thing to endanger yourself and your loved ones with the salient possibility of nuclear war._
> * _Even military planners, I think, would not be happy about stretching the nuclear doctrine that far._
>
> _Consider what will happen if the Ukrainian offensive continues. Russia is losing cities in Lugansk. I feel that Ukrainians are_ [_calling Putin's nuclear bluff_](https://thezvi.substack.com/p/ukraine-post-12)_. And this gives Putin few good options to work with._
>
> * _It seems like the most likely option is Russia just trying to sustain the conflict by pouring more resources and will into it. But it also might just lose in the end. I think “partial mobilisation” can be seen through that lens._
> * _Maybe Putin's move is just to wait until the winter, when the European energy crisis will be most acutely felt?_
> * _I think the nuclear pretext might be important for Western leadership, because they can't just make a deal with Putin right now; he is far beyond redemption. But making deals to “avoid nuclear holocaust” — while also giving citizens cheaper gas — might be manageable._
>
> _If things go nuclear:_
>
> * _I think it might be with the “least” scary nuke, because every escalation step, every credibly ambiguous situation could be turned into concessions, pauses, etc. Giving up intermediate steps is not wise._
> * _Other forecasters discussed just “testing”, or nuking a small island, or just dumping it in the Black Sea._
> * _I am worried about the multi-step conditional probabilities we are using here. While I think we have some ability to model the present situation, if the nuclear taboo were to be broken, we would be in uncharted territory._
> * _In this case, people would still push for de-escalation and would try to avoid a Russia–NATO conflict (and especially a full-out Russia–NATO nuclear war). It's just hard to think about._
> * _(A) Because evidently previous diplomatic efforts would have failed catastrophically, and it's unclear if there would be any remaining diplomatic tricks up their sleeves;_
> * _(B) we haven't been at this level of tension for a while, and we just don't know how everyone would react;_
> * _(C) the situation is likely to worsen for Putin (both internally and externally), and Putin might be likely to increase risk-taking as his likelihood of attaining a “win” diminishes._ 
>
> _I feel uncomfortable about my estimation process for a few reasons:_
>
> * _We are in the territory where the “proven technique” of carefully crafting base-rates is less applicable._
> * _There is a good GJOpen “rule of thumb:” if a decision depends on one person, don't go below 5%. This is because other people are not transparent to us, we don't know their constraints and we don't know the bulk of their incentives. In this case:_
> * _It's not inconceivable that the decision to invade Ukraine in late February was misinformed (and ~unilateral). Relevant actors might be misinformed now, and they might be misinformed in surprising-to-us ways due to Putin being partly “siloed.”_  
## Forecaster probabilities and comments
See a later section for a comment on our aggregation method.
### Russia using a nuclear weapon in Ukraine
_**What is the probability that Russia will use a nuclear weapon in Ukraine in the next MONTH?**_
* Aggregate probability: 0.053025 (5.303%)
* All probabilities: 0.27, 0.04, 0.02, 0.001, 0.09, 0.08, 0.07
_**What is the probability that Russia will use a nuclear weapon in Ukraine in the next YEAR?**_
* Aggregate probability: 0.16388 (16%)
* All probabilities: 0.38, 0.11, 0.11, 0.005, 0.42, 0.2, 0.11
_**Conditional on Russia using a nuclear weapon in Ukraine in the next year, will it be a tactical nuclear weapon?**_
* Aggregate probability: 0.96356 (96%)
* All probabilities: 0.97, 0.93, 0.97, “Yes”, 0.98, 0.95, 0.8
_**Forecaster comments**_
These have been lightly edited. Reading them is probably indicative of the level at which we are thinking, which has the flavor of “we have a lot of uncertainty about this.”
> _This is a particularly dangerous time. Many of the gambles Putin has taken so far have gone badly and now he stands a real risk of losing power as the war drags on and he has nothing to show for it. Even still, for Putin, even without moral guardrails, the risks of using nuclear weapons of any kind should still outweigh the benefits if he is seeing things clearly. If things continue to deteriorate, the situation may change, but for now, it seems that although Putin has been weakened, he still has a very good chance of remaining in power if he can simply get to a stalemate in the territories he now controls. Although I've frontloaded a lot of the risk into the next month, if a nuclear weapon is going to be used, there will probably be some build-up before it is deployed with warning signs along the way. It is likely Putin will try to prepare his population, and, while declaring territories within Ukraine to be part of Russia may provide some pretense of a justification, each stage of escalation brings heightened risk. At each stage, it makes sense to escalate slowly to attempt to extract the maximum possible concessions before taking on the increased risk of further escalation. I would expect to see nuclear tests or warning shots before seeing nuclear attacks, and for the first nuclear attack, tactical nuclear weapons would be the most logical starting point._
> _I think the use of nuclear weapons tactically would be a lot easier for Putin to explain to the Russian people. Perhaps strategic use could come afterwards, if he is in a desperate situation._
>
> _I think that Putin is 100% committed to conquering Ukraine. His "special military action" has largely failed so far, so he is expanding his military efforts with a "partial" mobilization. If that fails, or perhaps in combination with increased military mobilization, it looks possible to me that he could detonate a tactical nuclear weapon in the mistaken belief that it would make NATO countries back off at least from territory that Russia currently controls. In reality, I think detonating a tactical nuclear weapon would have the opposite effect, though._
 
> _\[My uncertainty is\] primarily methodological and from skewing to uncertainty. The main errors in the_ [_Superforecaster post-mortem_](https://goodjudgment.com/wp-content/uploads/2022/03/1570-Post-Mortem-v2.pdf) _for predicting invasion were overreliance on certain base rates and underestimating Putin's willingness to take major risks. I'm hesitant to make the same mistakes twice._
>
> _I also think Putin and Kremlin officials are less analyzable than most seem to think. I still don't have a compelling explanation for why Putin wants Ukraine so bad and why he's taken so much risk up until this point, which to me says my mental model of their decision-making isn't good enough to do much with._
> _Plausible scenarios exist where Putin uses a tactical nuke, probably to scare Ukraine, divide NATO, etc._ 
 
> _I would be higher with my first two estimates if they included an attack on a nuclear plant that could lead to a radiation disaster. This might be Putin's preferred method because he could keep a level of ambiguity as to Russia being responsible. That said, Putin's reason for using a tactical nuclear weapon might precisely be to let Ukraine and the world know how serious he is about not backing down. I think Putin wants to win the Ukraine War at pretty much any cost._
>
> _\> \[…\] I think Putin would almost definitely use a tactical nuke instead of a strategic one because it would make Ukraine and America/NATO more fearful of the situation without as high of a chance of a nuclear apocalypse (when compared to a strategic nuke being detonated in Ukraine)._
> _Putin has established a land bridge to Crimea, which is a major strategic goal for Russia. In recent speeches, he has explicitly said that Russia will use everything it has on the table to protect the newly annexed region._
> _Using nuclear weapons would drastically upend the current geopolitical order. But I don't have enough confidence to reject that outcome._
### Nuclear conflict escalating beyond Ukraine after Russia uses a nuclear weapon in Ukraine
**Conditional on Russia using a nuclear weapon in Ukraine, what is the probability that nuclear conflict will scale beyond Ukraine in the next MONTH after the initial nuclear weapon use?** 
* Aggregate probability: 0.0254 (2.5%)
* All probabilities: 0.15, 0.09, 0.0013, 10^(-5), 0.01, 0.3, 0.05
**Conditional on Russia using a nuclear weapon in Ukraine, what is the probability that nuclear conflict will scale beyond Ukraine in the next YEAR after the initial nuclear weapon use?**
* Aggregate probability: 0.095685 (9.6%)
* All probabilities: 0.2, 0.15, 0.0151, 10^(-5), 0.15, 0.4, 0.1
**Forecaster comments**
> I think nuclear war happening as a result of Russia using a tactical nuke in Ukraine is not extremely unlikely because the world would be in somewhat unprecedented territory, so this could make for a catastrophe as a result of miscalculations on one or both sides.
> If Russia uses a nuclear weapon, the west probably would not respond with a nuclear strike, but would probably try to use other channels which I won't speculate about publicly. Depending on the type, scale, and impact of the attack, a nuclear response is possible. If there is no Russian nuclear attack there is a minuscule chance of either a preemptive strike (based on intelligence that Russia is likely to launch a nuclear attack) or a false signal based on something that looks like an attack triggering a nuclear strike against Russia. The fact of heightened tensions makes these kinds of accidents more likely than they would otherwise be.
> I don't think Russia nuking Ukraine raises the global nuclear risk by much. I think most of the risk still comes from accidental launches due to false alarms, which I think is probably at an elevated risk currently.
> I think that [MAD](https://en.wikipedia.org/wiki/Mutual_assured_destruction) precludes nuclear conflict scaling up. And I think that if nuclear conflict were to expand following Russia detonating a nuclear weapon in Ukraine (or elsewhere), then that would likely happen close to immediately.
> Payload and target of tactical nukes are all widely variable; if one is used, I'd imagine those parameters would be chosen to minimize the risk of a nuclear response. 
>
> NATO isn't currently directly involved in the war; it's hard to imagine them deciding to send troops or especially to send nukes in response to a hit on a military target or a demonstration blast on Snake Island or the Black Sea. 
>
> It's possible Putin miscalculates or actually wants nuclear war, but to me the most likely outcome is negotiations (for better or for worse).
> I have high confidence that nuclear weapons will not be used outside this conflict.
>
> I don't have high confidence that nuclear weapons will not be used in areas close to the strategic landscape (e.g., areas supporting either side in NATO, Belarus, inner Russia, etc.)
> No one wants it to escalate. Escalating to NATO is suicidal, just clearly a loss for Putin and folks.
>
> Also, I expect revolt of elites or something. As they would feel that this is totally suicidal, not worth it. I expect a lot of people to fear that nuclear war would mean guaranteed death or misery for their families etc. 
### London being hit with a nuclear weapon, conditional on nuclear conflict escalating beyond Ukraine
**Conditional on the nuclear conflict expanding to NATO, what is the chance that London would get hit, one MONTH after the first non-Ukraine nuclear bomb is used?** 
* Aggregate probability: 0.1424 (14%)
* All probabilities: 0.4, 0.15, 0.9985, 0.05, 0.02, 0.002, 0.5
**Conditional on the nuclear conflict expanding to NATO, what is the chance that London would get hit, one YEAR after the first non-Ukraine nuclear bomb is used?**
* Aggregate probability: 0.232015 (23%)
* All probabilities: 0.45, 0.3, 0.9985, 0.05, 0.12, 0.01, 0.5
**What is the unconditional probability of London being hit with a nuclear weapon in October?**
* Aggregate probability: 0.00066 (0.066%)
* All probabilities: 0.01, 0.00056, 0.001251, 10^-8, 0.000144, 0.0012, 0.001
**Forecasters' comments**
 
> If nuclear conflict expands outside of Ukraine, it seems quite likely that London would get hit because I think that the UK would be the second choice of a Russian nuclear attack—the first choice being America. I also think that in the case of a nuclear war, it is a likely scenario that Russia launches a general nuclear attack on most, if not all of, NATO.
> Barring accidents and other unlikely circumstances, London will only be a target in the event of full-scale nuclear war. At each stage of escalation, prior to full-scale war, there would be attempts to take off-ramps. But, it is possible, even if unlikely, that predetermined nuclear response protocols could kick in, or, in the fog of war, mistakes and miscalculations could result in rapid escalation.
> If there is a nuclear exchange between NATO and Russia, London will be hit very quickly.
> If a nuclear conflict does expand to NATO, I would still hold out some hope that it doesn't turn into an all-out nuclear war. Thus, my forecast for London getting hit in the event of nuclear conflict with NATO is relatively low. And, if the nuclear conflict expanded to NATO, I'd expect that if London were to get hit, then it would happen within a month. My forecast for the unconditional chance of London getting hit in October is about 10% of my forecast for any nuclear conflict in October and is barely above my forecast conditional on Russia not dropping a nuclear weapon in Ukraine.
> Conflict likely wouldn't expand to the exchange of strategic nukes after a tactical nuke exchange. Large cities are where the leaders making decisions are. It's one thing to kill soldiers and civilians, but it's another to put your own life on the line. Unlike other questions, we have a fairly strong historical track record here for mutually assured destruction during the Cold War. Time has passed and tactical nukes are a key difference, but I think the core concept still applies. 
>
> London getting targeted is also a very foreseeable scenario; I'd be surprised if NATO's military systems aren't ready and sophisticated enough to detect and shoot down a missile or submarine. 
>
> There are also layers of complication from assassination, coups, and civil unrest. The risk to Putin feels much more personal than in other scenarios. 
> Escalation is still possible, e.g., maybe Putin just really hates the West and that's his true motivation, or maybe conflict simply keeps escalating once nukes are exchanged. But that type of dramatic escalation feels unlikely.
> Escalation beyond Ukraine doesn't help Russia achieve its strategic goals.
> hard to see intermediate escalation
## Comparison vs other sources
A few other sources which have forecasts on this are:
* Back in 2019, [Luisa Rodríguez's analysis](https://forum.effectivealtruism.org/posts/PAYa6on5gJKwAywrF/how-likely-is-a-nuclear-exchange-between-the-us-and-russia) put the chance of a US/Russia nuclear exchange at 0.38%/year (if taking the arithmetic mean of her samples), or at 0.13%/year if taking the geometric mean of odds.
* Back in March, [we gave](https://forum.effectivealtruism.org/posts/KRFXjCqqfGQAYirm5/samotsvety-nuclear-risk-forecasts-march-2022) a 0.067%/month to a “NATO/Russia nuclear exchange killing at least one person in the next month”, and an 18% probability of London being hit with a nuclear weapon after that, for an implied 0.012% monthly probability.
* Back at the end of March, [Peter Scoblic](https://forum.effectivealtruism.org/posts/W8dpCJGkwrwn7BfLk/nuclear-expert-comment-on-samotsvety-nuclear-risk-forecast-2) gave a **heavily caveated** 5% to a “NATO/Russia nuclear exchange killing at least one person in the next month”, and a likewise heavily caveated 65% probability to London being hit with a nuclear weapon after that, for an implied 3.2% probability.
* [Zvi](https://thezvi.substack.com/p/ukraine-post-8-risk-of-nuclear-war) and [Daniel Filan](https://danielfilan.com/2022/03/10/prob_smart_londoner_dies_of_russian_nuke.html) also gave their probabilities using our decomposition. 
* Metaculus has several questions on nuclear weapons, such as:
* [Will there be at least one fatality due to deliberate nuclear detonation by 2024?](https://www.metaculus.com/questions/7407/deliberate-nuclear-detonation-by-2024/) (7%)
* [Will there be an offensive nuclear detonation on a nation's capital by 2024, if an offensive nuclear detonation occurs anywhere by 2024?](https://www.metaculus.com/questions/8127/nuclear-detonation-on-a-capital-by-2024/) (20%)
* [Will the first offensive nuclear detonation by 2024 be against a battlefield target, if there's an offensive detonation by then?](https://www.metaculus.com/questions/8585/bt-as-the-first-nuclear-detonation-by-2024/) (53%)
* [Will at least one nuclear weapon be detonated in Ukraine before 2023?](https://www.metaculus.com/questions/12591/nuclear-detonation-in-ukraine-by-2023/) (7%)
* [Will a Russian nuclear weapon be detonated in the US before 2023?](https://www.metaculus.com/questions/12593/2022-russian-nuclear-detonation-in-the-us/) (<1%; note that Metaculus doesn't accept probabilities below 1%)
* [Will a non-test nuclear detonation cause at least 1 fatality before 2024?](https://www.metaculus.com/questions/7404/nuclear-detonation-fatality-by-2024/) (12%)
* [Will >2 countries offensively detonate nuclear weapons by 2024, if any offensive detonation of a country's nuclear weapon occurs by then?](https://www.metaculus.com/questions/8145/conditional-2-countries-detonate-by-2024/) (35%)
* [Will >2 countries have nuclear weapons offensively detonated on or over their territories by 2024, if any country offensively detonates a nuclear weapon by then?](https://www.metaculus.com/questions/8146/conditional-2-countries-attacked-by-2024/) (49%)
* Manifold Markets also has [a few markets](https://manifold.markets/search?s=24-hour-vol&f=open&q=nuclear) on this, such as:
* [Will a nuclear weapon be launched in combat by the end of 2023?](https://manifold.markets/AndyMartin/will-a-nuclear-weapon-be-launched-i-015e44ed91f5) (7%)
* [Will Russia give a nuclear ultimatum to Ukraine and/or it's Western allies during 2022?](https://manifold.markets/Nostradamnedus/will-russia-give-a-nuclear-ultimatu) (80%)
There is internal discord within Samotsvety about the degree to which the magnitude of the difference between our current and former probabilities is indicative of a lack of accuracy. We updated our endline monthly probability of London being hit with a nuclear weapon by ~2x (~0.02% vs. 0.067% \* 0.18 = 0.012%). The difference was higher before correcting an aggregation error, so I've moved discussion to a footnote[\[1\]](#fn2tohbl1ecsm).
In addition, a [former senior U.S. government official](https://en.wikipedia.org/wiki/Andrew_C._Weber) previously gave me a 20% probability of Russia using nuclear weapons by the end of the year, and at the time I thought that this was too high, but now think that this was a reasonable belief to have, and I regret not having deferred more to him.
## Estimating the value of leaving London or other major cities
[Here](https://www.squiggle-language.com/playground/#code=eNqVU9tu00AQ%2FZVRnqBqklZAhSz6ADSCiKitlIQKyS8be2yPupk1e2mwqv4740tayFV9sbyzZ86ZMzP72HOFWU3Dcqls1Yu8DXjahEYpeWPXEWLypPT0d6A81zj1ljjvRb3hEK5DolFZCPdWEWPMNjhSc4euu7lDVRp2Y563CLgESfvUh4y07hPHXLMYj%2BAL5eWDUKIlk4LJwNMSgRxozDwEdiUmlBGmcZfWiV9%2Fnt3EjC5RWnkyfBO8oxQ7xW%2F0gGv1OyV13WO6XcVL9szUfHNODEsXJKC0wI%2F6OokZ4DU17DAB3gB354nIGx4KupBueyPoBeXzH%2FPpV%2FJVw9CwjvZIblv8J33T3WH3J1B7e5V6Z268LFXigRi0cR4KE6yTHcGlYLTYmlCGoz8yVq84qcb8S5w7qef9Wd2Ki7POQlIozoUzVZVrQZfw7uJDzKU1aUi8VPK9ph7zlaok%2FaLOPv%2B4ka0WGmemrrjEL5gZu6NH2BSD6UTKbSgFsrdvzcjfnEMftqnfdl075rXhePHVZe3yVcebpt5asxDBCnIjz0ScKu2MjFBei5J3RCkZV3FiZSwJZNJ%2FIzouJAUo1zL0YSz3qJy8BGMhxfXhWVe81lNTkMjf4Rx6nvF%2F6J9KBxRFRzljU6YL9kGYOW%2BxK2N1CirzaOW0XvsVsQQ6jsFgUC9Sc7oiV%2Bq6C491I%2BDog4yOIk5booO7Hx2%2B7ij2bUi092atvblu0XaoRcb81Hv6C9nOCyo%3D) is a template for calculating risk, given one's probabilities (also saved [here](https://nunosempere.com/.secret/nuclear-2022-10.squiggle) and [here](https://gist.githubusercontent.com/NunoSempere/42e44c33e4be8c973b49b154e5c0b4d8/raw/1e0ae12d0b0eba7fa784f7747f5cd79d08f41c1c/nuclear-2022-10.squiggle)).
If we input **the full range** of our forecasters' probabilities together with some default values, we get [the following estimate](https://develop--squiggle-documentation.netlify.app/playground#code=eNqVVO9P2zAQ%2FVdO%2FURRW1Kg%2FKjGh22gUa0CpNKhadkkk1ySE66d2Q4sQvzvuzhpYe3aii9OfT6%2F9%2Fzurs8tm%2BmnSTGbCVO2hs4U2PGhi5icNvMIKXIk5OR3QWkqceIMqbQ1bO3twSXKHI0NlbU7wpg2nMFEzHJOQtdLjJ6NyTp%2FEqpQ8YWrIpIoDBQPRpDCUJnCkphatM3JHYpcKztS0zqDERn7R9DbP%2B5A0AsO%2Fbrv16DvP6d%2BPfHr8c82s3zoQkJSdqkh1Q7BZcLxgsCCScegE3A0QyALEhMHhbI5RpQQxktarz7eXocKbSSkcKTVdeEsxdgI%2FEKPOBd7J%2FgZDxgvRPcHbxQG%2FYMO9INfO91B2we8%2FAP%2Fc7Ci%2B5XvVlcKpirSiuvCASGZYKtxu6ECeI%2Fq%2FzwbnAbV7MdMr9UeZ2dcf6c5%2B57S6dfp5DO50iN41Is1lAtTfAlrZ05PT2qHBm%2FL6j%2BrjrxhWzZjs1m7UFnxLrGNF6NZLiIHpEBq6yDTRdXsBmecwyaMKcGLP9w2TqioHKnv7JNlOYdBZdxRAPULokyolCFjUdo66QwOjgahyo2Oi8ixkMsKeaTORcnXj6rb%2FZOl2%2BJe4q2uBOf4CRNtKkfZpyWXGDPXxr8ouUHNszhSdenuMn1O9aVj7i%2BvG%2BMxP8yz88Fah30v7fShC6sq2o2%2FW4nrtC3meaZXo16xV4yq4r5IN0bfs6wSUs1zzdYJaTW3BI%2B34MGnmLQtVWS4zBEkXE%2FNPLaIMhC2RujCiM9RWB5dbSDG%2BWbBy45UXSAg4l%2Bb79CiZ%2F7J%2FiZkgcxoKVXoZdrCPDKySuvcJ21kDCJxaHg3n7onUhxoMHq9XtWYfndONpeVC8%2BVEbD1%2F2C4NaNTA22cpeHm4wZiXR8N157MuZebcrgaqjND9dJ6%2BQs8VVWn) of how many hours one loses in expectation as a result of staying in London in the medium term—where, because of the way we prompted forecasters, the “medium term” can range from one to three months:
<img src='https://i.imgur.com/kDIjmEv.png' class='.img-medium-center'>
If we instead input the **forecasters' aggregate**, rather than the range, we arrive at: 
<img src='https://i.imgur.com/1WEammz.png' class='.img-medium-center'>
A [mixture of both estimates](https://develop--squiggle-documentation.netlify.app/playground/#code=eNqVVWFP2zAQ%2FSunfmpRW1KgFKrxYRtoVKsAqXRoWjbJJJf0RGJntgNUiP%2B%2Bs5MW1q6t9sWJ7fN7z%2B%2FukpeGmamnSZnnQs8bQ6tLbPuli5is0osVkmRJZJPfJaVphhOrSaaNYWN%2FHy4xK1CbUBrTFFq34AwmIi84CG030Sofk7F%2BJ5Sh5ANXZZSh0FA%2BaEESQ6lLQ2Jq0NQ7dygKJc1ITqsIRmTsH0H3YNCGoBsc%2BfHAj0HPP079eOLHwc8Ws3zoQEJZ1qGaVFkEOxOWBwQWTCoGlYClHIEMZJhYKKUpMKKEMF7RevXx9jqUaCKRCUtKXpfWUIy1wC%2F0iAuxd4Kv8YDxUnSv%2F05h0DtsQy%2F41ez0W37Byz%2F0r%2F013W98t8opmMpISc4LL4iMCXYatxdKgP9R%2FY9rg1Ug6%2FmY6ZXc5%2BgZ598qjr6ndPp1OvlMdu4RPOrFBsqlKT6FlTOnpyeVQ%2F33afWPdUfesa2asd2sPXBW%2FJfY2otRXojIAknIlLEwU6Urdo05x7AJY0rw4pnLxgoZzUfyO%2FtkWM5R4Iw7DqC6QTQTMmXIWMxNFXQGh8f9UBZaxWVkWcilQx7JczHn48fudO9k5bS4z%2FBWOcEFfsJEaeco%2B7TiEmMWSvsbJTeouBdHskrd3UydU3VowPXldWM85ot5dt7Y6LCvpWYPOrCuolX7u5O4Ctthnmd6M%2BoNe80ot%2B6TdKPVPcuaQ6q4r9k6kRnFJcHtLbjxKSZl5jLSnOYIEs6nYh5TRjMQpkLowIj3URhuXaUhxsVkycuOuCoQEPHb9jO0rJm%2For%2BJrERmNJRK9DJNqR8ZWaZV7JPSWQwisah5tui6J5K8UGN0u11XmH52TqbInAsvzgjY%2BT0Y7oxoV0Bbe2m4fbuG2FRHw407C%2B7VohwC%2F4NkxHkc80e6mT8310LaMHBuHhy4j2qrwgnla%2BP1D%2BYEXqU%3D) gives a 90% confidence interval of ~2 to 300 hours lost. Personally, I would use this second estimate, but it's hard to say why: maybe because I think that taking the minimum and maximum out of each question does a good job of filtering the least accurate forecasts.
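For readers who want to replicate this step, here is a minimal Squiggle sketch of forming such a mixture; the two distributions below are placeholders standing in for the two estimates above (the real inputs are in the linked playground models), and the equal weights are an assumption:
```
// Placeholder stand-ins for the two estimates of expected lost hours
estimateFromFullRange = 0.1 to 1000 // hypothetical; see the first linked model
estimateFromAggregate = 5 to 100 // hypothetical; see the second linked model
// Equal-weights mixture of the two estimates
mx([estimateFromFullRange, estimateFromAggregate], [0.5, 0.5])
```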
Compare with a [previous estimate](https://forum.effectivealtruism.org/posts/KRFXjCqqfGQAYirm5/samotsvety-nuclear-risk-forecasts-march-2022#Nu_o_Sempere) back in March:
<img src='https://i.imgur.com/CP4x7Z4.png' class='.img-medium-center'>
So, the danger of staying in London has increased by ~1-10x since March. We'd guess that, for most people reading this post, moving out of the city for 1-3 months would still cost more in lost productivity than the updated estimates of expected lost life hours, but it might be a closer call than it was previously.
For personal purposes, we probably don't have a better decision rule than “leave major cities if any tactical nukes are dropped in Ukraine” (as this would ~10x the risk).
## Miscellanea
### A sanity check
We can compare the directly elicited probability of nuclear war reaching London in October with the conditional steps multiplied directly:
The conditional steps are:
1. What is the probability that Russia will use a nuclear weapon in Ukraine in the next MONTH? 0.053025 (5.303%)
2. Conditional on Russia using a nuclear weapon in Ukraine, what is the probability that nuclear conflict will scale beyond Ukraine in the next MONTH after the initial nuclear weapon use? 0.0254 (2.5%)
3. Conditional on the nuclear conflict expanding to NATO, what is the chance that London would get hit, one MONTH after the first non-Ukraine nuclear bomb is used? 0.1424 (14%)
And if we multiply these together, we get 0.053025 \* 0.0254 \* 0.1424 ≈ 0.000192 (0.019% ≈ 0.02%), versus 0.00066 (0.066%) when elicited directly. 
I think that the conditionals multiplied directly should be higher: the directly elicited probability assumes a scenario where escalation happens within one month, whereas the conditionals multiplied directly include that scenario, but also scenarios where each escalation step is more staggered.
One way to think about this difference is that a ~3x difference when eliciting unlikely, <1% events is relatively normal. Personally, I (Nuño) would give more weight to the conditionals multiplied directly.
### Counterfactual baseline risk
Forecasters also predicted on these counterfactual questions. 
* Conditional on Russia NOT using a nuclear weapon in Ukraine, what is the probability of a nuclear conflict outside Ukraine in the next MONTH? (0.036%)
* Conditional on Russia NOT using a nuclear weapon in Ukraine, what is the probability that nuclear conflict will happen beyond Ukraine in the next YEAR? (0.132%)
* Conditional on Russia NOT dropping a nuclear weapon in Ukraine in October, what is the probability that London will be hit with a nuclear weapon in October? (0.006%)
* All probabilities: 0.1%, 0.002%, 0.125%, 0.000001%, 0.001%, 0.01%, 0.005%.
The first two probabilities are dwarfed by the corresponding probabilities conditional on Russia using a nuclear weapon in Ukraine. The third probability indicates a very low baseline risk, but is also very sensitive to the individual forecasts.
### A brief note on the aggregation method
We used the geometric mean of the samples with the minimum and maximum removed to better deal with extreme outliers, as described in [our previous post](https://forum.effectivealtruism.org/posts/KRFXjCqqfGQAYirm5/samotsvety-nuclear-risk-forecasts-march-2022#fnt1dm5d62pkl). Note that the minimum (resp. maximum) still matters: for example, in \[0.1, 1, 10, 100, 1000\], the aggregate would be (1 \* 10 \* 100) ^ (1/3) = 10, but if we first remove 0.1, the aggregate becomes (10 \* 100) ^ (1/2) ≈ 31.6. 
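As a concrete check, the following Squiggle sketch recovers the published monthly aggregate for Russia using a nuclear weapon in Ukraine from the individual probabilities listed above; note that it takes the geometric mean of the odds rather than of the raw probabilities, which is what reproduces the 5.3% figure:
```
// Individual monthly probabilities, with the minimum (0.001) and maximum (0.27) removed
ps = [0.04, 0.02, 0.09, 0.08, 0.07]
odds = List.map(ps, {|p| p / (1 - p)})
logSum = List.reduce(odds, 0, {|acc, o| acc + log(o)})
geoMeanOdds = exp(logSum / List.length(ps))
// Convert the aggregated odds back to a probability
geoMeanOdds / (1 + geoMeanOdds) // ~0.053
```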
## Acknowledgements
This is a project by [Samotsvety](https://samotsvety.org/). Thanks to Jared Leibowich, Jonathan Mann, Tolga Bilge, belikewater, Greg Justice (@slapthepancake), Misha Yagudin and Nuño Sempere for providing updates. Thanks as well to Eli Lifland for comments and suggestions, and to Daniel Kokotajlo and Bhuvan Singla for their [probability mass app](https://daniel-kokotajlo.vercel.app/). 
1. **[^](#fnref2tohbl1ecsm)**
Dropping into the first person, I (Nuño) felt that the degree to which we updated, or at least the degree to which I personally updated, is indicative that our/my probability wasn't a [martingale](https://en.wikipedia.org/wiki/Martingale_(probability_theory)), i.e., that it didn't accurately price the likelihood of future movements. See some discussion about this [here](https://arxiv.org/pdf/1703.06351.pdf), in the context of Nassim Taleb criticizing Nate Silver. Overall, that update to me suggests we should give probabilities closer to 50%, to better adjust for future unknowns, which we maybe aren't pricing in.
On the other hand, other proud Samotsvety forecasters point out that our previous forecast was only for March, even though we presented the risk in annualized units. It's also just straight-out possible that we are in the bottom 10-20% of scenarios. So overall we are not done with our post-mortem, which would also include personal updates in April &c.

@@ -0,0 +1,294 @@
Five slightly more hardcore Squiggle models.
==============
Following up on [Simple estimation examples in Squiggle](https://forum.effectivealtruism.org/posts/vh3YvCKnCBp6jDDFd/simple-estimation-examples-in-squiggle), this post goes through some more complicated models in Squiggle.
## Initial setup
As well as in the [playground](https://www.squiggle-language.com/playground), Squiggle can also be used inside [VS Code](https://code.visualstudio.com/), after one installs [this extension](https://github.com/quantified-uncertainty/squiggle/tree/develop/packages/vscode-ext), following the instructions [here](https://github.com/quantified-uncertainty/squiggle/blob/develop/packages/vscode-ext/README.md). This is more convenient when working with more advanced models because models can be more quickly saved, and the overall experience is nicer.
<img src='https://i.imgur.com/ldmrmmX.png' class='.img-medium-center'>
## Models
### AI timelines at every point in time
Recently, when talking [about AI timelines](https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize), people tend to give probabilities of AGI by different points in time, and about slightly different operationalizations. This makes different numbers more difficult to compare. 
But the problem with people giving probabilities about different years could be solved by asking or producing probabilities for all years. For example, we could write something like this:
```
// Own probability
// _sigma: logistic curve rescaled to be 0 at `start` and to approach `top`
_sigma(slope, top, start, t) = {
   f(t) = exp(slope*(t - start))/(1 + exp(slope*(t-start)))
   result = top * (f(t) - f(start))/f(start)
   result
}

advancedPowerSeekingAIBy(t) = {
   sigma_slope = 0.02
   max_prob = 0.6
   first_year_possible = 3
   // sigma(t) = exp(sigma_slope*(t - first_year_possible))/(1 + exp(sigma_slope*(t-first_year_possible)))
   // t < first_year_possible ? 0 :  (sigma(t) - sigma(first_year_possible))/sigma(first_year_possible)*max_prob
   sigma(t) = _sigma(sigma_slope, max_prob, first_year_possible, t)
   t < first_year_possible ? 0 : sigma(t)
}

// Numerical derivative of the cumulative probability above
instantaneousAPSrisk(t) = {
   epsilon = 0.01
   (advancedPowerSeekingAIBy(t) - advancedPowerSeekingAIBy(t-epsilon))/epsilon
}

// Probability of existential catastrophe given advanced power-seeking AI,
// here modeled as a constant
xriskIfAPS(t) = {
   0.5
}

xriskThroughAps(t) = advancedPowerSeekingAIBy(t) * xriskIfAPS(t)
```
This produces the cumulative and instantaneous probability of “advanced power-seeking AI” by/at each point in time:
<img src='https://i.imgur.com/7wlIKtp.png' class='.img-medium-center'>
And then, assuming a constant probability of x-risk given advanced power-seeking AGI (50% in the code above), we can get the probability of such risk by every point in time:
<img src='https://i.imgur.com/hPuGhUE.png' class='.img-medium-center'>
Now, the fun is that the x-risk is in fact not constant. If AGI happened tomorrow, we'd be much less prepared than if it happens in 70 years, and a better model would incorporate that. 
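As a minimal sketch of that point, we could replace the constant with a conditional risk that decays as the world gets more time to prepare; the 50% starting point matches the code above, but the decay rate is made up for illustration:
```
// Hypothetical: conditional x-risk declines as preparedness improves.
// The 2%/year decay rate is made up; advancedPowerSeekingAIBy is defined above.
xriskIfAPSDeclining(t) = 0.5 * exp(-0.02 * t)
xriskThroughApsDeclining(t) = advancedPowerSeekingAIBy(t) * xriskIfAPSDeclining(t)
```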
For individual forecasts, rather than for models which combine different forecasts, [forecast.elicit.org](https://forecast.elicit.org/) had a more intuitive interface. Some forecasts produced using that interface can be seen [here](https://www.lesswrong.com/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines). However, that interface is currently unmaintained. Open Philanthropy has also produced a number of models, generally written in Python.
### More detailed expected value estimates for potential career pathways
In the [preceding post](https://forum.effectivealtruism.org/posts/vh3YvCKnCBp6jDDFd/simple-estimation-examples-in-squiggle#Expected_value_for_a_list_of_things__complexity___2_10_), I presented some quick relative estimates for possible career pathways. Shortly after that, Benjamin Todd reached out about estimating the value of various career pathways he was considering. As a result, I created [this more complicated spreadsheet](https://docs.google.com/spreadsheets/d/1QATMTzLUdmxBqD2snhiAkH-_KvwbhGdlYaU8Ho7kjDY/edit?usp=sharing):
 
<img src='https://i.imgur.com/MT1aVtk.png' class='.img-medium-center'>
You can see a higher-quality version of this image here: <https://i.imgur.com/hvq0SeM.png>
Instead of using relative values, each row estimates the value of a broad career option in dollars, i.e., in relation to how much EA should be prepared to pay for particular outcomes (e.g., the creation of 80,000 Hours, or CSET). One interesting feature of dollars is that they are a pretty intuitive measure, but this intuitiveness breaks down a bit under interrogation (dollars in which year? adjusted for inflation? are we implicitly assuming that twice as many dollars are twice as good?). But as long as the ratios between estimates are meaningful, they are still useful for prioritization.
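To illustrate that last point, here is a short sketch with made-up dollar figures for two hypothetical career options; the absolute numbers carry little meaning on their own, but the ratio between them can still guide prioritization:
```
// Made-up dollar-denominated estimates for two hypothetical options
valueOfOptionA = 500k to 5M
valueOfOptionB = 100k to 2M
// Only the ratio between the estimates is meant to be meaningful
ratio = mean(valueOfOptionA) / mean(valueOfOptionB)
{ optionA: valueOfOptionA, optionB: valueOfOptionB, ratio: ratio }
```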
For a model in the above style which is more hardcore and more complex, see [here](https://github.com/quantified-uncertainty/squiggle-models/blob/master/bahamas/hierarchical-impact-estimate-ftx-bahamas-joel2.squiggle) (or [here](https://www.squiggle-language.com/playground#code=eNrNWW2P47YR%2Fiu8%2FRLL1sqyHV8uToOiQdPkQ4MGveZTNxfQEmUTK5E6kvLGPe9%2FzwwpyXqhdo0CQWLgvLI4MxzOPPNwyPt0p4%2Fy6X1VFFSd73ZGVSy0r75NuZGqecMFN5zm7z9W%2FHDI2XujuDjc7e6WS%2FI9y0um9IP4H1OSfE2EVAXNZ3FI4iiO41XwIHRVwMCnS861uRDF0iphM%2FwRElQKYYgmSUhONL8QeCILfHwOnh%2FEg4Ap%2FlGJxHApYJLkKKVmMxE%2BBmAyowk4CY7NRECWZNZ9Qe4JyMw7r%2BAnOLPnQhb2ZyjCEq3wjDz%2B5WtBzJEJ0k7gtMsPj%2FA9W4G1MvjQWGW5ZiSuvfueM0VVcuQJzUnO9w%2FiwAz4b5dslwRfUTaD74A8N6OKJZXS%2FMSsGGjnqWLi8iBI%2FUmkMEwYDeMgXjE9a4SCqxCMoEBBy1kjHxI3wUDqQTTR%2FMnwnJvzg6jcAzrgZBNZlFKgkV3zCj%2BlkpAxw08g3BvAj6iKkskyZzsCUBEJNRA8B4F1HL4LwlUcbrZBCBNTQ6g4E5iBGF6wvh2j6InlOzJbQyK3McZ%2BaG%2BFkFoFYRzCV187rRRFhOzIOiQwVQHBOOqRp0cJMd%2BzTKquuxtiJPkSjIdk9XZoudGimWGqqxStrFq0mVBkWcYwaEwwrYsqN7zMASlPkETGRcbyXD7pIy93sKh3aGoVba3vsBYoLkQjuUr1bVtAyOxJqscdKX6drR%2BtgTh%2BDIl73trn%2F8bRlzZoPw%2B9wxj19baxU%2Fyhr4bVgRl7oohFEACcaZ4yRXKJXlKe62hkmwuSyjynSi9tKojMCHrbF8wATRdE16X%2F3sYdYIlDUQuwsUzayDTpH4uYRsThayzwsYKFuSpoprvme9l75YDjNZF3LNya%2BLEhFAYrrUvz1vRY1hGMnbCDBl8cgb5SoC2DJWVnmKPyWBLS9qOtdKaBNLnKz%2BTID0fCNFQrQB6YRUhDdKWYBcWJqXoI4aoLc%2BzbfL7%2BfO6gL%2BcZw%2FLfA8%2Ba341OIDClVIgJmTmDkLunoyzstFyXUmu%2B900DsN8i%2BNeBLew4%2BnxYOy%2FbPtKyZEInsgI%2BVrj7QA7zc2ei2FFHjPWO1HEjc3ABWyPLYbMxLD0zqC1b%2B6u6hENyv5q7X5s4DnpVDOlfDSZpbcEyuKjNNS7%2BG%2FJuwM81WgNG%2FSIe%2BogK%2BdkizzHQ1lEJsEf9OCSSPyv%2F2KRNEJAde4mBylUrdSviPEbWNxqZgNbYImsN3owlP3U6Ix6seAipFe9gw89HELU5rnoOjsI2j4T0h9FRgjDZS7c4%2FbvxkaLQ93WM%2FRO636gq%2FyNnYICghSGGURiWDS61am%2Bwx12%2FQUZv30Fpr4M3Q3VYVbsowNQeOnQ0Bn9Set65znztSCiON7btgJ4f%2BkhYgJ0VC8ZN8Ndpy30g%2Bqh0E8bR28BPc11LVyJZdSiheR5SQq%2FAX164NvTMUiggVq961ax6%2FZo7XdWXHbP7Rbx9xbVBj%2BprUS0hWUcuvoJASmrOJ03%2BgW8DP3mt49jDNWgCJ4hehMhYMcm8mn0ITLUpY70Jdigx3JMuXjPinWhSd5BOP9U5tSZJUGObGLMJ3xCPehvxshSwEFT8EcrfHLmG3DMtPoMf9BGISQBOgD4xSrajp4V9bKvrK%2BK1ieY%2B04QXwNYJN4gp1MYg0L07tYEJ2mcvMKnQ2chrEs6Asj7KAaOWihkwAig9NE55OF3MZ0k2L%2BeYxXuXnHkd6GBubuJXLROO3PBp1EPtTaVgOzmwE89dcTa1uRo3XHtwGDYx05O%2FFvNQw2b8IGXqpueIUZq46osHspn5FTZZUwmo81X8DZr8PP5mIOTsWLN4F6JZnqElDKySsiBggaRc04NirAA%2FyRG2o8hT3c7Qxb9nu8FoEBx%2FFXdk%2B7Hxn5Nq8etix2L6KuZZrs9j2MXhX0oWRE%2BCAT9%2BaOz5IeP6OMKGxT9UkoZTV56SPeBecq2HxzugwNbCaG%2FewsZsN%2BdtfNNhYT82hUB0%2BFr5NzAben6ShuZe7a3TXvu1bXBbPf8GM9pf%2Fr8dpZ5lqsl1o9E1nlN9bi3YCddUM9uR7Edoom%2Bs5bsx8cO4kWx5%2BnWK7necJwTsTcwFZ3VFNSQUtja8L%2FIwmGvIoe7ogXLha4De2aZli5dWCAF0txIcSrCYufNf9M4HK2uPinTS7mZk92rW3pONOspO81%2FAFCdLUzyrZ2jPpJvQAyAbiguZwIUdjcbxmADHVby3Ss91SCv8ku9jPTgEL9CzBVnPy3UAh%2BJFEdyU8pQW9DBOdXMh88u3f1OsrExddNea%2FaGp2G3dEr61Fwc%2Fe1Pr9u%2BBLQ9vTNwO2IQ4R30Z%2BfiLgrC58cjruCctXZ0pH%2F1TzVF5OrbusQmxdb2%2Bde7ddDfXzuBFPRxdX14F%2FVfSoORuvGdXnagrEEyd9ryafZFg2Ml4ddxQMOAOr2ifVYLRbuhVqseCiVs0r05PIhhi3KvihjqywFZRFPkSMOtlYDEM7KKJ1qJd12Lg833tSTC3IamR4v7U%2F1lx6oa7BkZQ%2Fy%2FG37kuc4o3o8tlXyJ6KZnL5atoWy4%2F2RksyHavYSt0sm6i3et4quWbtngSS7Vc2yBMg6KRdPeprwGhlm6YbhIEtZzNze4WHC%2BXTc5chlpGgzchuYeN7jtgs%2B%2BsJPlA%2FiXyc9M1YyK%2FIooVwOzaHnTwWu%2FN3fNvDguDhg%3D%3D) in the Squiggle playground.)
### A sketch of a more parsimonious estimate of AMF's impact
The estimates in [this post](https://forum.effectivealtruism.org/posts/4Qdjkf8PatGBsBExK/adding-quantified-uncertainty-to-givewell-s-cost), and overall GiveWell's [estimates](https://docs.google.com/spreadsheets/d/1tytvmV_32H8XGGRJlUzRDTKTHrdevPIYmb_uc6aLeas/edit#gid=1377543212) of the value of AMF, had been bothering me because they divide the population into very coarse chunks. This is somewhat suboptimal: the first chunk ranges from 0 to 4 years, but malaria mortality [differs a fair amount between a newborn and a four-year-old](https://nunosempere.com/blog/2022/09/28/granular-AMF/).
Instead, we could express impact estimates in a more elegant functional form. I'll sketch what this would look like, but I'll stop halfway through, because at some point the functional form would require more research about mortality at each age.
The core of the impact estimate is a function that takes the number of beneficiaries, the age distribution of the population, and the benefit of the intervention for someone of a given age, and outputs an estimate of the total impact.
In Squiggle, this would look as follows:
```
valueOfInterventionInPopulation(num_beneficiaries, population_age_distribution, benefitForPersonOfAge) = {
  age_of_beneficiaries = sampleN(population_age_distribution, num_beneficiaries)
  benefits_array = List.map(age_of_beneficiaries, {|a| benefitForPersonOfAge(a)})
  total_benefits = List.reduce(benefits_array, 0, {|acc, value| acc + value})
  total_benefits
}
```
So for example, if we [feed](https://www.squiggle-language.com/playground/#code=eNq9kk1rhDAQhv9K8KQ0LRaWHhZ66KGFhdJd8BqQWR1tIE5sTLZd1P%2FeqLWwXXePvU3m43nnI23QvOvPxFUVmGOwtsYhH13PubTazB5J0kpQyYeTZakwsUZSGayDAyiH22JDFs0ByUpNG9rp2ikY7JBcle6RsJCZBCOx4az%2BjaZQYprLxsP2bnBwNuXaF212aBpN2%2BKpxIg9slYQY0O%2BLk6BPtZAVSt8C6%2BSzzqJBuKPXpOCMXD0rFdfdFdBHS5pcdZ20C03GULUj0irLah0Bs9Ig7nLMDzV4ywekVnG2bjJjnmb3UyPJZygXpCgs2G8zH0cx4Ku7GDM8Ty28nlKFpjiV42ZBcqGyVdj7MHHLsw3HeJv4e1wFUH%2F8xGC%2Fhu08wBc) the following variables to our function:
```
num_beneficiaries = 1000
population_age_distribution = 10 to 40
life_expectancy = 40 to 60
benefitForPersonOfAge(age) = life_expectancy - age
valueOfInterventionInPopulation(num_beneficiaries, population_age_distribution, benefitForPersonOfAge)
```
Then we are saying that we are reaching 1000 people, whose age distribution looks like this:
<img src='https://i.imgur.com/GvyMyqW.png' class='.img-medium-center'>
(This could use a bit more work to resemble an actual population pyramid.)
and that the benefit is just the remaining life expectancy. This produces the following estimate, in person-years:
<img src='https://i.imgur.com/BSbneRi.png' class='.img-medium-center'>
But the assumptions we have used aren't very realistic. We are essentially assuming that we are creating clones of people at different ages, and that they wouldn't die until the end of their natural 40- to 60-year lifespan.
To shed these unrealistic assumptions, and produce something we can use to estimate the value of the AMF, we have to:
1. Add uncertainty about the shape of the population, i.e., uncertainty about how the population pyramid looks
2. Add uncertainty about how many people a distribution reaches
3. Change the shape of the uncertainty about the benefit to more closely resemble the effects of bednet distribution
The first two are relatively easy to do.
For uncertainty about the number of beneficiaries, we could naïvely write:
```
valueWithUncertaintyAboutNumBeneficiaries(num_beneficiaries_dist, population_age_distribution, benefitForPersonOfAge) = {
  numSamples = 1000
  num_beneficiaries_samples_list = sampleN(num_beneficiaries_dist, numSamples)
  benefits_list = List.map(num_beneficiaries_samples_list, {|n| valueOfInterventionInPopulation(n, population_age_distribution, benefitForPersonOfAge)})
  result = mixture(benefits_list)
  result
}
```
However, that would be very slow, because we would be repeating an expensive calculation unnecessarily. Instead, we can do [this](https://www.squiggle-language.com/playground/#code=eNrFk9tKw0AQhl9lyVWqUVPwAIIXFRQKUoWg3gTCNp2kC5vZuAe1tH13dzc9m1bQC%2B82M7PfzM7%2FZxqosfhITFVROQmutTQQ%2BdDdiGkhlxGGTDPKkzfDypJDoiXDMrgO3ik38Fj0UYN8B9RMYB%2BfRG04decQTZUNAaFgOaOSgYpIvcpmtIRsxJSFDY0LRKSp1fdCPoFUAh%2BLXgkdckOmKRLi6kWxDbQ5RauawyA8SP42SccRF%2F1URqWkE8t6sJdOK1qHbb0iMp3RWfuQIe3MPVILTXm2BC%2BREkYmh3C7X0Rij8zziPhNzog9k%2BPmow2X4jzFFH3%2BlenxM%2BYgNWWoJ72hMHpgqtvNib8L4NfyJxUkFCDBNh7YSDeO463gixvNJn5yxpryq2HcFogTNfHiq2aU7iK48%2BTGICrjFrzhl327WVO3PbK4vrLI4UZOWZztLOYIz9YPbwSWoAx33Ip9aiM3POIoGyUL7VNsn9sSruLYGYZ0L5woB7bqt%2BUqz20dZwVk8FlDrinm7ic497lLm9tj9cYNuxdP3A%2F6T%2BYM5l83mcoE):
```
valueWithUncertaintyAboutNumBeneficiaries(num_beneficiaries_dist, population_age_distribution, benefitForPersonOfAge) = {
  // compute the expensive estimate once, for a reference population size...
  referenceN = 1000
  referenceValue = valueOfInterventionInPopulation(referenceN, population_age_distribution, benefitForPersonOfAge)
  // ...then scale it linearly for each sampled number of beneficiaries
  numSamples = 1001
  num_beneficiaries_samples_list = sampleN(num_beneficiaries_dist, numSamples)
  benefits_list = List.map(num_beneficiaries_samples_list, {|n| referenceValue*n/referenceN})
  result = mixture(benefits_list)
  result
}
```
That is, we calculate the value for a reference beneficiary population of 1000, and then scale it proportionally. This takes about 6 seconds to compute in Squiggle.
Now, when adding uncertainty about the shape of the population, we are not going to be able to use that trick, and computation will become more expensive. In the limit, maybe I would want to have a distribution of distributions. But in the meantime, I'll just have a list of possible population shapes, and [compute the shape of uncertainty over those](https://www.squiggle-language.com/playground/#code=eNrFVN9r2zAQ%2FleEn5zOa5VRd1DoQwctFEZaMFsf5mEU5%2BwI5JMnS11DnP99ku3UTpofkD307Xz36btP95219Kq5%2FBuZomBq4V1rZSBoUnczrqVaZzhyzZmI%2Fhie5wIirTjm3rX3woSBx%2BwBNagXQM0lPuCTLI1gLvbRFMkUEDKecqY4VAEp36oJyyGZ8cqSTY1LBKTF6nupnkBVEh%2Bz2xxG5IYsYyTE4WW2SWhrFStKARP%2FIPM7JSPH2PWrEqYUW1iu7%2FbQecFKf1evgCxrVu8W6bPRqqHUUjORrInXlApmJgV%2Fs19AaEOZpgFpJlkTG5NP7ccuuhhXMcbY1J%2B5nv%2FAFJRmHPXidiqNnpji21DxewOasfyXCwoyUGAbT2xmTCndSP500mzh2Gb0LCeJcVMgztSoMb9qpYy75NaV2wWpEmGJB%2FuybzY96%2BaOdMffVuRwI%2Bcs1luDOcOL%2FuKtwQoqIxxvwV%2B1UYMdcSwDyDHv%2B%2BFGc1bCKdZ3wg%2F7v3cex4mX9QFMTT5krU9z4dhdLdMvLf0vAbmko4C0YdiHV10Y9oCwB4Q9YEx7hIvDQWwxv2PcPQsr4Cul7v0g49D9o4JnkMBrCalmmLq37pK66pWt7XnRWtO3D3527%2FAH7mGM3uofoJ96rQ%3D%3D):
```
valueWithUncertaintyAboutPopulationShape(
  num_beneficiaries_dist,
  population_age_distribution_list,
  benefitForPersonOfAge
) = {
  benefits_list = List.map(population_age_distribution_list,
    {|population_age_distribution|
      valueWithUncertaintyAboutNumBeneficiaries(
        num_beneficiaries_dist,
        population_age_distribution,
        benefitForPersonOfAge
      )})
  result = mixture(benefits_list)
  result
}

population_age_distribution_list = [
  to(2, 40), to(2, 50), to(2, 60),
  to(5, 40), to(5, 50), to(5, 60),
  to(10, 40), to(10, 50), to(10, 60)
]
```
We still have to tweak the benefit function to better capture the effects of distributing malaria nets. A first attempt might look as follows:
```
benefitForPersonOfAge(age) = {
  result = age > 5 ? mixture(0) : {
    counterfactual_child_mortality = SampleSet.fromDist(0.01 to 0.07)
    // https://apps.who.int/gho/data/view.searo.61200?lang=en
    child_mortality_after_intervention = counterfactual_child_mortality/2
    chance_live_before = (1-(counterfactual_child_mortality))^(5-age)
    chance_live_after = (1-(child_mortality_after_intervention))^(5-age)
    value = (chance_live_after - chance_live_before) * (life_expectancy - age)
    value
  }
  result
}
```
That is, we are modelling an example intervention which halves child mortality, in a setting where counterfactual child mortality is pretty high. The final result looks like [this](https://www.squiggle-language.com/playground/#code=eNrFVFFvmzAQ%2FiunPEFHwDRJK0Xqqk7bpEpTWyna9rBsyCUHWAKbGTttleS%2FzwZSkjRNpO6hTxx35%2B8%2B33e%2BRa%2FKxMNEFwWVT72xkhq92vVlxpSQaw%2FjTDGaT%2F5qlqY5TpRkPO2Ne3Oaa7xNrrlCOUeumODX%2FE6UOqfWdrguonvkmLCYUcmw8qB8jkY0xWjGKgN2r63DgyZXfRXyDmUl%2BG1ylaILF7CYcgCbL5JtQBOraFHmeOMcRH7BxLWIbb0qolLSJ4P1zRzyC1o6%2B2p5sFjS5X6SDnVXNaQSiubRGngNKXGmY3S263lAasg49qDu5BKMDR%2Bbn31wU76a8imv4z%2BZyr7zGKWijKunq3uh1Y0uPm0yfilA3Zb%2FUkFighJN4RvjCQkhW84flpoJHJuMDuVNZGwXwIo6qcWvGiph69y5cjMgVZQb4I15ea03Her2jLTHn0fkcCGrLF%2FuNOaEB93FG4ElVjq3uAV7VFpuzIhF2Ug5pn3X3ElGS3yL9C3xw%2Fq%2F2o%2FjwIvlgZwlvMtYv02FY3c1SL%2BUcEIPBsT1Gmv4bJ0Zy7xs57QJt%2BawM9cJgy5h0CUMuoShByPi%2Fp7y%2FR0xNM4JsVsEwpF9qTlLMMLHEmNFeWw33pDY6JmJvbLXtp5%2B2yXjhI8wgksoHh3iwriJA8RC2xef0Fhps7fijOWzqBBGzJwpW655WRNUfiJF8dlwdIhPQsvBfM%2FdBiYIIFOqrMZBQMuy8h8y4ZtpCNJMBDOqaDBn%2BOBXSKXwz8JTQi5zytML5C2L7bIRTQyniG3sIsPkMNXgdA1l%2BoRG0zma%2FiZC2uXmhH3n8HHX%2FeOM%2BrTeVLswNRuDAg3OUa67WPN2xTovQft7%2BLpwAs6u7n3YwbPmamfM323VTHlv9Q%2B7JiAl) (archived [here](https://gist.github.com/NunoSempere/715fd697ff3ebbb704e4a239e559d148)). The output unit is years of life saved. As is, it doesn't particularly correspond to the impact of any actual intervention, but hopefully it could be a template that GiveWell could use, after some research.
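Concretely, the pieces above combine roughly as follows. Note that `num_beneficiaries_dist` is a placeholder I am making up here, and that `life_expectancy` (e.g., the earlier `40 to 60`) needs to be in scope:

```
// Sketch: combining the helpers defined above
num_beneficiaries_dist = 800 to 1200 // hypothetical
valueWithUncertaintyAboutPopulationShape(num_beneficiaries_dist, population_age_distribution_list, benefitForPersonOfAge)
```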
For reference, the resulting impact distribution looks as follows:
<img src="https://i.imgur.com/bejosHk.png" class='.img-medium-center'>
### Calculate optimal allocation given diminishing marginal values
Suppose that we have some diminishing marginal return functions. Then, we may want to estimate the optimal allocation to each opportunity.
We can express diminishing marginal returns functions using two possible syntaxes:
```
diminishingMarginalReturns1(funds) = 1/funds
diminishingMarginalReturns2 = {|funds| 1/(funds^2)}
```
The first syntax is more readable, but the second one can be used without a function definition, which is useful for manipulating functions as objects and defining them programmatically, as explained in this footnote [\[1\]](#fnmxi7a8ll6t).
Once we have a few diminishing marginal return curves, we can put them in a list/array:
```
diminishingMarginalReturns1(funds) = 1/(100 + funds)
diminishingMarginalReturns2 = {|funds| 1/(funds^1.1)}
diminishingMarginalReturns3 = {|funds| 100/(1k + funds^1.5)}
diminishingMarginalReturns4 = {|funds| 200/(funds^2.2)}
diminishingMarginalReturns5 = {|funds| 2/(100*funds + 1)}
uselessDistribution(funds) = 0
negativeOpportunity(funds) = 0

listOfDiminishingMarginalReturns = [
  diminishingMarginalReturns1, diminishingMarginalReturns2,
  diminishingMarginalReturns3, diminishingMarginalReturns4,
  diminishingMarginalReturns5, uselessDistribution,
  negativeOpportunity, {|funds| {1/(1 + funds + funds^2)}}
]
```
And then we can specify our amount of funds:
```
availableFunds = 1M // dollars
calculationIncrement = 1 // calculate dollar by dollar
Danger.optimalAllocationGivenDiminishingMarginalReturnsForManyFunctions(listOfDiminishingMarginalReturns, availableFunds, calculationIncrement)
```
So in this case, the difficulty comes not from applying a function, but from adding that function to Squiggle. This can be seen [here](https://github.com/quantified-uncertainty/squiggle/blob/develop/packages/squiggle-lang/src/rescript/FR/FR_Danger.res#L278).
Other software (e.g., Python, R) could also do this, but the usefulness of the above comes from integrating it into Squiggle. For example, we could have an uncertain function produced by some other program, take its mean (representing its expected value), and feed it to that calculator.
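As a minimal sketch of that workflow, with made-up returns curves (the names and parameters below are illustrative, not from any actual model):

```
// Hypothetical uncertain returns curve: its output is a distribution
uncertainReturns(funds) = (1 to 2) / (100 + funds)
// Collapse it to a point estimate at each funding level by taking its mean
meanReturns = {|funds| mean(uncertainReturns(funds))}
otherReturns = {|funds| 1/(200 + funds)}
Danger.optimalAllocationGivenDiminishingMarginalReturnsForManyFunctions(
  [meanReturns, otherReturns], 100k, 1
)
```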
The [Survival and Flourishing Fund](https://survivalandflourishing.fund/) has some [software](https://youtu.be/jWivz6KidkI?t=487) to do something like this. It has a graphical interface which people can tweak, at the expense of being a bit simpler: their diminishing marginal returns curves are determined by only three points.
### Defining a toy world
Lastly, we will define a simple toy world, which has some population growth and some economic growth, as well as some chance of extinction each year. Its value is defined as a function of the consumption of each person, times the chance that the world is still standing.
For practical purposes, after some set point we stop the calculations and compute the remaining value as some function of the value at that point. We can understand this as either a) the heat death of the universe, or b) an arbitrary limit, such that we are interested in the behaviour of the system as that limit goes to infinity, but can only extend that limit with more computation.
This setup allows us to coarsely compare an increase in consumption vs. an increase in economic growth vs. a reduction in existential risk. In particular, given this setup, existential risk reduction and economic growth are valued less than in the infinite-horizon case, so if their value is greater than that of some increase in consumption in this toy world, we have reason to think that this would also be the case in the real world.
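To gesture at the shape of the bookkeeping, here is a minimal sketch; the growth rate, extinction risk, horizon, and terminal multiple are placeholders chosen for illustration, not the model's actual parameters, and the full model linked below also includes population growth:

```
toyWorldValue(horizon, consumptionGrowth, extinctionRiskPerYear) = {
  years = List.upTo(1, horizon)
  // value produced in year t, weighted by the chance the world still stands
  yearlyValues = List.map(years, {|t| (1 + consumptionGrowth)^t * (1 - extinctionRiskPerYear)^t})
  valueWithinHorizon = List.reduce(yearlyValues, 0, {|acc, v| acc + v})
  // crude stand-in for post-horizon value: a multiple of the final year's value
  terminalValue = 20 * (1 + consumptionGrowth)^horizon * (1 - extinctionRiskPerYear)^horizon
  valueWithinHorizon + terminalValue
}
toyWorldValue(200, 0.02, 0.001)
```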
The code is a bit too large to simply paste into an EA Forum post, but it can be seen [here](https://github.com/quantified-uncertainty/squiggle-models/blob/master/toy-world/toy-world.squiggle). For a further tweak, you can see leaner code [here](https://github.com/quantified-uncertainty/squiggle-models/blob/master/toy-world/toy-world.squiggleU) which relies on the import functionality of the [squiggle-cli-experimental package](https://github.com/quantified-uncertainty/squiggle/tree/develop/packages/cli).
We can also look at the impact that various interventions have on our toy world, with further details [here](https://docs.google.com/spreadsheets/d/1WnplTYJJMeh0zXVUTPBaihE7n1kneW5LDidLvJcGcv4/edit?usp=sharing):
<img src='https://i.imgur.com/HCg2g5r.png' class='.img-medium-center'>
We see that of the sample interventions, increasing population growth by 0.5% has the highest impact. But 0.5%/year is a pretty large amount, and it would be pretty difficult to engineer. So further work could look at the relative difficulty of each of those interventions. Still, that table may serve to make a qualitative argument that interventions such as increasing population growth, economic growth, or reducing existential risk, are probably more valuable than directly increasing consumption.
## Conclusion
I presented a few more advanced Squiggle models. 
A running theme was that expressing estimates as functions—e.g., the chance of AGI at every point in time, the impact of an intervention for all possible ages, a list of diminishing marginal return functions for a list of interventions, a toy world with a population assigned some value at every point in time—might allow us to come up with better and more accurate estimates. Squiggle is not the only software that can do this, but hopefully it will make such estimation easier.
1. **[^](#fnrefmxi7a8ll6t)**
We can write:
```
listOfFunctions = [ {|funds| 1/(funds^2)},  {|funds| 1/(funds^3)}]
```
or even 
```
multiplyByI(i) = {|x| x*i}
listOfFunctions = List.map(List.upTo(0,10), {|i| multiplyByI(i)})
```
or without the need for a helper:
```
listOfFunctions2 = List.map(List.upTo(0,10), {|i| {|x| x*i}})
listOfFunctions2[4](2) // 4 * 2 = 8
```
This is standard functional programming stuff, and some functionality is missing from Squiggle, such as a _List.length_ function. But still.
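For instance, one could emulate the missing _List.length_ with a fold, using only functions that appear above:

```
// Emulate List.length via List.reduce
length(xs) = List.reduce(xs, 0, {|acc, x| acc + 1})
length([1, 5, 2]) // 3
```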

@ -0,0 +1,188 @@
Forecasting Newsletter: September 2022.
==============
## Highlights
* PredictIt vs Kalshi vs CFTC saga [continues](https://comments.cftc.gov/Handlers/PdfHandler.ashx?id=34691#?w=sapqmnxoxn)
* Future Fund announces [$1M+ prize](https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize#comments) for arguments which shift their probabilities about AI timelines and dangers
* Dan Luu [looks at the track record of futurists](https://danluu.com/futurist-predictions/)
## Index
* Prediction Markets, Forecasting Platforms & co.
  * PredictIt, Kalshi & the CFTC
  * Metaculus
  * Manifold Markets
  * Squiggle
  * Odds and Ends
* Research
  * Shortform
  * Longform
Browse past newsletters [here](https://forecasting.substack.com/), or view this newsletter on substack [here.](https://forecasting.substack.com/p/forecasting-newsletter-september-57b) If you have a content suggestion or want to reach out, you can leave a comment or find me on [Twitter](https://twitter.com/NunoSempere).
## Prediction Markets and Forecasting Platforms
### PredictIt, Kalshi & the CFTC
<img src="https://i.imgur.com/6k0Oz4A.jpg" class='.img-medium-center'>
America, Land of the Free
Previously:
* Kalshi hired a former [CFTC commissioner](https://kalshi.com/blog/former-cftc-commissioner-brian-quintenz-joins-our-board) ([a](https://web.archive.org/web/20220201175613/https://kalshi.com/blog/former-cftc-commissioner-brian-quintenz-joins-our-board)).
* The CFTC [withdrew its no-action letter](https://www.cftc.gov/PressRoom/PressReleases/8567-22) ([a](https://web.archive.org/web/20220805010244/https://www.cftc.gov/PressRoom/PressReleases/8567-22)) from PredictIt
* Kalshi applied to the CFTC for permission to host a market on which party will control the US Congress after the 2022 mid-term elections. The CFTC [asked the public for comments](https://comments.cftc.gov/PublicComments/CommentList.aspx?id=7311) ([a](https://web.archive.org/web/20220828210656/https://comments.cftc.gov/PublicComments/CommentList.aspx?id=7311)) ([secondary source](https://www.politico.com/news/2022/09/05/voters-betting-elections-trading-00054723) ([a](https://web.archive.org/web/20220924141931/https://www.politico.com/news/2022/09/05/voters-betting-elections-trading-00054723))).
Since then, on September 9th, [PredictIt sued the CFTC](https://www.jdsupra.com/legalnews/unpredictable-future-of-political-1333136/) ([a](http://web.archive.org/web/20220925015149/https://www.jdsupra.com/legalnews/unpredictable-future-of-political-1333136/)). Richard Hanania explains [why he is joining the lawsuit](https://richardhanania.substack.com/p/why-im-suing-the-federal-government) ([a](http://web.archive.org/web/20221001194707/https://richardhanania.substack.com/p/why-im-suing-the-federal-government)).
Solomon Sia and Pratik Chougule—in collaboration with others like myself—wrote [this extremely thorough letter to the CFTC](https://comments.cftc.gov/Handlers/PdfHandler.ashx?id=34691#?w=sapqmnxoxn) ([a](https://web.archive.org/web/20221012143802/https://comments.cftc.gov/Handlers/PdfHandler.ashx?id=34691#w=sapqmnxoxn)), examining many aspects of the decision. 
There has been [a range of newspaper articles](https://news.google.com/search?q=PredictIt%20CFTC&hl=en-GB&gl=GB&ceid=GB%3Aen) ([a](https://archive.ph/uQEvL)) commenting on the PredictIt spat (e.g., [1](https://www.wsj.com/articles/why-wont-the-cftc-let-you-take-a-position-on-the-election-11582933734), [2](https://slate.com/business/2022/08/predictit-cftc-shut-down-politics-forecasting-gambling.html), [3](https://www.coindesk.com/policy/2021/10/28/the-cftc-vs-the-truth/), [4](https://www.chicagotribune.com/opinion/commentary/ct-opinion-political-prediction-markets-public-discourse-20220906-lfuvziy3fnfkfgw33lzhsno4h4-story.html), etc.), and on Kalshi's. I particularly liked [this article](https://www.chicagotribune.com/opinion/commentary/ct-opinion-political-prediction-markets-public-discourse-20220906-lfuvziy3fnfkfgw33lzhsno4h4-story.html) ([a](http://web.archive.org/web/20220907164742/https://www.chicagotribune.com/opinion/commentary/ct-opinion-political-prediction-markets-public-discourse-20220906-lfuvziy3fnfkfgw33lzhsno4h4-story.html)) in the Chicago Tribune on how prediction markets are an antidote to degraded public discourse.
Kalshi has an [interesting newsletter issue](https://www.kalshikit.co/p/obamas-cabinet-used-prediction-markets) ([a](https://web.archive.org/web/20221012110423/https://www.kalshikit.co/p/obamas-cabinet-used-prediction-markets)) in which they briefly report on how the Obama administration used prediction markets in its decision-making. Note that these would probably have been PredictIt's markets.
### Metaculus
Per their newsletter, Metaculus reached 1 million predictions. They have also [reorganized](https://nitter.privacy.com.de/fianxu/status/1569537636917825536) as a [public benefit corporation](https://en.wikipedia.org/wiki/Benefit_corporation) ([a](http://web.archive.org/web/20221001234507/https://en.wikipedia.org/wiki/Benefit_corporation)), i.e., a for-profit entity that aims to pursue some positive impact, as distinct from shareholder value. I think this leaves Metaculus in a better position, and decreases the (already pretty small) chance that Metaculus starts doing some damaging gatekeeping, etc.
Metaculus is also building an AI Forecasting team and hiring for [a number of positions](https://apply.workable.com/metaculus/) ([a](http://web.archive.org/web/20220913093930/https://apply.workable.com/metaculus/)), growing its 12-person-strong [team](https://www.metaculus.com/about/) ([a](http://web.archive.org/web/20220925082358/https://www.metaculus.com/about/)), presumably using its [2022 Open Philanthropy Grant](https://www.openphilanthropy.org/grants/metaculus-platform-development/) ([a](http://web.archive.org/web/20220929072721/https://www.openphilanthropy.org/grants/metaculus-platform-development/)).
### Manifold Markets
Manifold continued having a high development speed: e.g., they added a [Twitch bot](https://manifold.markets/twitch) ([a](http://web.archive.org/web/20221005181649/https://manifold.markets/twitch)) and ran their [first tournaments](https://manifold.markets/tournaments) ([a](https://web.archive.org/web/20221012144555/https://manifold.markets/tournaments)), which I was really glad to see. They have an experimental projects page at [manifold.markets/labs](https://manifold.markets/labs) ([a](http://web.archive.org/web/20221005182149/https://manifold.markets/labs)). And they have added a few reputational features:
> If a resolved market receives enough reports relative to the number of traders, it will be considered a “bad” market. Creators with enough bad markets will have a warning next to their name on any of their markets. This is just a first step towards reputational features which is a highly requested feature.
Manifold Markets removed and deprioritized their [numeric markets](https://news.manifold.markets/p/above-the-fold-updates-and-join-our) ([a](http://web.archive.org/web/20220908215157/https://news.manifold.markets/p/above-the-fold-updates-and-join-our)), citing low user uptake. But from the post, the decision to do so seems like it was evaluated on the wrong grounds: it's not that numeric markets will immediately prove popular and intuitive, it's that experimenting with them is a public good that could unlock value in the medium term.
More generally, as I've been seeing in these past few years, I think that there is a huge attractor of sports and wall-street-type bets, and new prediction-market startups tend to flirt with these a bit. I think this is a mistake, because it's hard to differentiate oneself from competitors on the basis of better sports betting: traditional sports betting houses like Betfair in Europe or DraftKings in the US are already catering to a similar user base. Instead, my recommendation would be to target virgin communities, to which existing betting houses don't yet cater.
You can also see their job board [here](https://www.notion.so/Manifold-Markets-Job-Board-e1b932b3bb2c4ec2b5a95865ec8f0f61) ([a](https://web.archive.org/web/20221012093824/https://www.notion.so/Manifold-Markets-Job-Board-e1b932b3bb2c4ec2b5a95865ec8f0f61)).
### Squiggle
[Squiggle](https://www.squiggle-language.com/#code=eNqrVirOyC8PLs3NTSyqVLIqKSpN1QELuaZkluQXQURqARlkDng%3D) is a web-capable language for manipulating probabilities and probability distributions that we at the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/) have been working on. In August, we announced a $1k [Squiggle experimentation prize](https://forum.effectivealtruism.org/posts/ZrWuy2oAxa6Yh3eAw/usd1-000-squiggle-experimentation-challenge), which has now been resolved. Winners are:
* 1st prize of $600 to [Tanae Rao](https://twitter.com/tanaerao?lang=en-GB) for [Adding Quantified Uncertainty to GiveWell's Cost Effectiveness Analysis of the Against Malaria Foundation](https://forum.effectivealtruism.org/posts/4Qdjkf8PatGBsBExK/adding-quantified-uncertainty-to-givewell-s-cost)
* 2nd prize of $300 to [Dan Wahl](https://danwahl.net/) for [CEA LEEP Malawi](https://forum.effectivealtruism.org/posts/BK7ze3FWYu38YbHwo/squiggle-experimentation-challenge-cea-leep-malawi)
* 3rd prize of $100 to [Erich Grunewald](https://www.erichgrunewald.com/posts/how-many-effective-altruist-billionaires-five-years-from-now/) for [How many EA billionaires five years from now?](https://forum.effectivealtruism.org/posts/Ze2Je5GCLBDj3nDzK/how-many-ea-billionaires-five-years-from-now)
Congrats! 
We also announced a larger [$5k challenge to quantify the impact of 80,000 hours' top career paths](https://forum.effectivealtruism.org/posts/noDYmqoDxYk5TXoNm/usd5k-challenge-to-quantify-the-impact-of-80-000-hours-top). I think that participation in this contest is valuable in itself, but it also has a fairly high expected monetary value: I invite readers to do a quick estimation, e.g., the contest will have 3 to 15 participants, implying each participant will get between ~$300 and ~$1.6k.
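In Squiggle, that back-of-the-envelope estimate might look like this (treating the number of participants as a distribution; these numbers just restate the guess above):

```
prizePool = 5k
numParticipants = 3 to 15
// dividing the pool evenly: 5k/15 ≈ $333, 5k/3 ≈ $1,667
prizePerParticipant = prizePool / numParticipants
```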
I also wrote two posts introducing Squiggle: [Simple estimation examples in Squiggle](https://forum.effectivealtruism.org/posts/vh3YvCKnCBp6jDDFd/simple-estimation-examples-in-squiggle) and a follow-up at [Five slightly more hardcore Squiggle models.](https://forum.effectivealtruism.org/posts/BDXnNdBm6jwj6o5nc/five-slightly-more-hardcore-squiggle-models)
### Odds and Ends
The FTX Future Fund announces a [$1M+ prize](https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize) ([a](https://web.archive.org/web/20221002051012/https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize)) for arguments that shift their probabilities around AGI timelines and dangers.
Friend of the newsletter Walter Frick has started a [newsletter](https://nonrival.pub/) ([a](https://web.archive.org/web/20221005181423/https://nonrival.pub/)) that combines analysis of a newsworthy topic with an invitation and a prompt for readers to forecast on a related event. The newsletter then reports readers' forecasts and resolves them when the time comes. Readers might remember Walter from [his excellent coverage of the shutdown of Facebook's Forecast platform](https://qz.com/2069284/facebook-is-shutting-down-its-experimental-app-forecast/) ([a](https://web.archive.org/web/20220730061335/https://qz.com/2069284/facebook-is-shutting-down-its-experimental-app-forecast/)) at Quartz.
The [Autocast competition](https://forecasting.mlsafety.org/) ([a](http://web.archive.org/web/20221011173753/https://forecasting.mlsafety.org/)) offers $625k in prizes for improving the forecasting abilities of machine learning models. This builds on the [Autocast](https://arxiv.org/abs/2206.15474) ([a](http://web.archive.org/web/20220914001702/https://arxiv.org/abs/2206.15474)) paper. It might be that the contest has a connection to AI safety, but I'm not really seeing it. The deadline to submit results for the warmup round is February 10th.
Adam Sherman reports on his frustrations with the [UMA project](https://twitter.com/Squee451/status/1579647834957451264) ([a](https://archive.org/details/uma-unreliable-market-assumption-protocol)). These rhyme somewhat with previous complaints about [Kleros](https://deepfivalue.substack.com/p/the-kleros-experiment-has-failed) ([a](https://web.archive.org/web/20220701003955/https://deepfivalue.substack.com/p/the-kleros-experiment-has-failed)). Abstracting away from the specifics, the UMA oracle is a [Keynesian Beauty Contest](https://en.wikipedia.org/wiki/Keynesian_beauty_contest), meaning that consensus is valued over truth. In this case, a powerful but not dictatorial participant announced that he was going to vote one way, and because the protocol rewards people who vote with the consensus, he convinced others to vote with him. My sense is that a Keynesian Beauty Contest might still be a worthy tradeoff for some crypto protocols because of the added decentralization. But if too many of these events happen, the tradeoff might stop being worth it.
[Quantified Intuitions](https://www.quantifiedintuitions.org/) is an [epistemics training website](https://forum.effectivealtruism.org/posts/W6gGKCm6yEXRW5nJu/quantified-intuitions-an-epistemics-training-website). Readers might be familiar with the [pastcasting](https://www.pastcasting.com/) app, by the same group.
The Social Science Prediction Platform has [added some large-for-graduate-students forecaster incentives](https://socialscienceprediction.org/forecaster_incentives) ([a](http://web.archive.org/web/20220916011552/https://socialscienceprediction.org/forecaster_incentives)). They are offering $100 per 10 surveys completed—a survey is usually just a set of predictions that will be used in a future paper. I welcome this development. I used to view it as annoying that participation was restricted to graduate students and faculty. But the thought came to mind that restriction to academics is just a socially acceptable—if coarse—way of selecting for intelligence without saying as much.
Reddit has [r/polls/predictions](https://www.reddit.com/r/polls/predictions/) ([a](http://web.archive.org/web/20220709055805/https://www.reddit.com/r/polls/predictions/)), an embryonic implementation of a prediction market tournament inside Reddit. This builds on Reddit's past prediction functionality, as reported [previously](https://forecasting.substack.com/p/forecasting-newsletter-july-2021) ([a](http://web.archive.org/web/20211229170227/https://forecasting.substack.com/p/forecasting-newsletter-july-2021)) in [this newsletter](https://forecasting.substack.com/p/forecasting-newsletter-october-2021) ([a](http://web.archive.org/web/20220217162710/https://forecasting.substack.com/p/forecasting-newsletter-october-2021)). It would be useful to talk to whoever is building this functionality at Reddit. They probably have some different goals, more geared towards being a social media site. But some cross-pollination might still be interesting.
The Swift Centre has an analysis of [Biden's chances in the 2024 election](https://www.swiftcentre.org/can-biden-win-in-2024/) ([a](http://web.archive.org/web/20220916112924/https://www.swiftcentre.org/can-biden-win-in-2024/)). See also some other forecasts on [Metaforecast](https://metaforecast.org/?query=US+president) ([a](https://archive.ph/4n30X#from=https://metaforecast.org/?query=US+president)), e.g., on [Polymarket](https://polymarket.com/market/will-joe-biden-win-the-us-2024-democratic-presidential-nomination) ([a](http://web.archive.org/web/20220128214008/https://polymarket.com/market/will-joe-biden-win-the-us-2024-democratic-presidential-nomination)) or on [Betfair](https://www.betfair.com/exchange/plus/politics/market/1.178176964) ([a](http://web.archive.org/web/20210831231714/https://www.betfair.com/exchange/plus/politics/market/1.178176964)).
[Craze](https://www.ycombinator.com/companies/craze) ([a](https://web.archive.org/web/20221012093558/https://www.ycombinator.com/companies/craze)) is a Y-Combinator-funded company which brings predictions markets to India.
I was surprised to see that famous rapper Nicki Minaj has [partnered](https://maximbet.com/nicki-minaj) ([a](http://web.archive.org/web/20220531210904/https://maximbet.com/nicki-minaj)) with a [sports](https://nitter.privacy.com.de/nickiminaj/status/1531670747399065600) ([a](https://web.archive.org/web/20221012110813/https://nitter.privacy.com.de/nickiminaj/status/1531670747399065600)) betting [site](https://nitter.privacy.com.de/nickiminaj/status/1531670747399065600) ([a](https://web.archive.org/web/20221012110813/https://nitter.privacy.com.de/nickiminaj/status/1531670747399065600)). Curious.
INFER continues to offer small-money incentives for forecasters, to send me [mildly cringy emails](https://i.imgur.com/j0Ar3BH.png) ([a](http://web.archive.org/web/20221012093838/https://i.imgur.com/j0Ar3BH.png)), and to talk about a ["Global AI Race"](https://mailchi.mp/cultivatelabs/cset-foretell-launch-9372521) ([a](http://web.archive.org/web/20221012112754/https://mailchi.mp/cultivatelabs/cset-foretell-launch-9372521)). Still, I'd continue to recommend it for university students, because it's one of the few sites that has a team functionality.
On Good Judgment Open, [Will Amazon.com begin to accept any cryptocurrency for purchases on the US site before 1 October 2022?](https://www.gjopen.com/questions/2090-will-amazon-com-begin-to-accept-any-cryptocurrency-for-purchases-on-the-us-site-before-1-october-2022) ([a](http://web.archive.org/web/20220529175114/https://www.gjopen.com/questions/2090-will-amazon-com-begin-to-accept-any-cryptocurrency-for-purchases-on-the-us-site-before-1-october-2022)) just resolved negatively. I remember it being at 30% a year ago. Crazy times.
## Research
### Shortform
Nostalgebraist looks at [AI forecasting one year in](https://nostalgebraist.tumblr.com/post/695521414035406848/on-ai-forecasting-one-year-in) ([a](http://web.archive.org/web/20220917144833/https://nostalgebraist.tumblr.com/post/695521414035406848/on-ai-forecasting-one-year-in)) and warns against taking it as a [stylized fact](https://en.wikipedia.org/wiki/Stylized_fact) ([a](http://web.archive.org/web/20220927235855/https://en.wikipedia.org/wiki/Stylized_fact)) that AI progress is going faster than forecasters expected.
[Samotsvety Forecasting](https://samotsvety.org/), my forecasting group, looks at the probability of [various AI catastrophes](https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts) in the future, and at the [risk of a nuclear bomb being used](https://forum.effectivealtruism.org/posts/2nDTrDPZJBEerZGrk/samotsvety-nuclear-risk-update-october-2022) ([a](https://web.archive.org/web/20221012124008/https://forum.effectivealtruism.org/posts/2nDTrDPZJBEerZGrk/samotsvety-nuclear-risk-update-october-2022)) in the coming months (see also a [follow-up](https://forum.effectivealtruism.org/posts/8k9iebTHjdRCmzR5i/overreacting-to-current-events-can-be-very-costly) by Kelsey Piper).
<img src="https://i.imgur.com/rPuKJCg.png" class='.img-medium-center'>
Taken from [Polymarket](https://polymarket.com/market/will-russia-use-a-nuclear-weapon-before-2023). Note that money is worth less in the event of a nuclear war.
Some researchers at the University of Pennsylvania are [looking for forecasters to predict replication outcomes](https://nitter.privacy.com.de/rajtmajer_sarah/status/1573465138300059649) ([a](https://web.archive.org/web/20221012114519/https://nitter.privacy.com.de/rajtmajer_sarah/status/1573465138300059649)). They are paying a $20 base incentive and $25 per market. This is low in absolute terms, but high if you enjoy doing this kind of thing anyways. h/t Ago Lajko.
Richard Hanania argues that [the problem with polling might be unfixable](https://richardhanania.substack.com/p/the-problem-with-polling-might-be/), i.e., that Republican nonresponse bias might be very hard to estimate. I left a comment with some suggestions, but I agree that the situation [looks grim](https://richardhanania.substack.com/p/the-problem-with-polling-might-be/comment/9327296) ([a](http://web.archive.org/web/20220927183755/https://richardhanania.substack.com/p/the-problem-with-polling-might-be/comment/9327296)).
[Two](https://www.lesswrong.com/posts/YQ8H4e7z3q8ngev7J/raising-the-forecasting-waterline-part-1) ([a](http://web.archive.org/web/20220710073545/https://www.lesswrong.com/posts/YQ8H4e7z3q8ngev7J/raising-the-forecasting-waterline-part-1)) old [posts](https://www.lesswrong.com/posts/YEKHh5nyqhpE3E4Bm/raising-the-forecasting-waterline-part-2) ([a](http://web.archive.org/web/20220927155721/https://www.lesswrong.com/posts/YEKHh5nyqhpE3E4Bm/raising-the-forecasting-waterline-part-2)) from ten years ago look at the lessons learnt by someone who participated in the IARPA forecasting tournament that led to the Superforecasting book.
### Longform
Dan Luu looks at the track record of futurists and finds it generally poor. Readers of this newsletter should [read the post](https://danluu.com/futurist-predictions/) ([a](https://archive.ph/WJEBd#from=https://danluu.com/futurist-predictions/)).
For some background points:
* The AI safety community has been arguing that future artificial intelligence (AI) systems might be so intelligent as to be world-ending dangers.
* Open Philanthropy, a large foundation, is giving some weight to AI safety, and has been donating large amounts of money to that cause.
* As part of their decision-making, Open Philanthropy commissioned research by [friends of the newsletter Arb Research](https://arbresearch.com/) ([a](http://web.archive.org/web/20221011153414/https://arbresearch.com/)) on the [track record of the three biggest science-fiction authors of the 20th century](https://arbresearch.com/files/big_three.pdf) ([a](http://web.archive.org/web/20220711161231/https://arbresearch.com/files/big_three.pdf)) (Asimov, Heinlein, and Clarke).
* The CEO of Open Philanthropy later [used that analysis](https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/) ([a](http://web.archive.org/web/20220914130350/https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/)), as well as other research Open Philanthropy had been doing, to justify and explain Open Philanthropy's investments in AI Safety.
In his own analysis of futurists' track record, Dan Luu seems to point out that this process has some characteristics of a shit show. Here is a long extract from Luu's post, minimally edited for readability:
> We've seen, when evaluating futurists with an eye towards evaluating longtermists, Karnofsky heavily rounds up in the same way Kurzweil and other futurists do, to paint the picture they want to create. 
>
> There's also the matter of his summary of a report on Kurzweil's predictions being incorrect because he didn't notice the author of that report used a methodology that produced nonsense numbers that were favorable to the conclusion that Karnofsky favors. 
>
> It's true that Karnofsky and the reports he cites do the superficial things that the forecasting literature notes is associated with more accurate predictions, like stating probabilities. But for this to work, the probabilities need to come from understanding the data. 
>
> If you take a pile of data, incorrectly interpret it and then round up the interpretation further to support a particular conclusion, throwing a probability on it at the end is not likely to make it accurate. 
>
> Although he doesn't use these words, a key thing Tetlock notes in his work is that people who round things up or down to conform to a particular agenda produce low accuracy predictions. Since Karnofsky's errors and rounding heavily lean in one direction, that seems to be happening here.
>
> We can see this in other analyses as well. Although digging into material other than futurist predictions is outside of the scope of this post, nostalgebraist has done this and he said (in a private communication that he gave me permission to mention) that Karnofsky's summary of [Could Advanced AI Drive Explosive Economic Growth?](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/) is substantially more optimistic about AI timelines than the underlying report in that there's at least one major concern raised in the report that's not brought up as a "con" in Karnofsky's summary.
>
> And nostalgebraist later wrote [this post](https://nostalgebraist.tumblr.com/post/693718279721730048/on-bio-anchors) ([a](http://web.archive.org/web/20221004024842/https://nostalgebraist.tumblr.com/post/693718279721730048/on-bio-anchors)), where he (implicitly) notes that the methodology used in a report he examined in detail is fundamentally not so different than what the futurists we discussed used. There are quite a few things that may make the report appear credible (it's hundreds of pages of research, there's a complex model, etc.), but when it comes down to it, the model boils down to a few simple variables. 
>
> In particular, a huge fraction of the variance of whether or not TAI is likely or not likely comes down to the amount of improvement will occur in terms of hardware cost, particularly FLOPS/$. The output of the model can range from 34% to 88% depending how much improvement we get in FLOPS/$ after 2025. Putting in arbitrarily large FLOPS/$ amounts into the model, i.e., the scenario where infinite computational power is free (since other dimensions, like storage and network aren't in the model, let's assume that FLOPS/$ is a proxy for those as well), only pushes the probability of TAI up to 88%, which I would rate as too pessimistic, although it's hard to have a good intuition about what would actually happen if infinite computational power were on tap for free. 
>
> Conversely, with no performance improvement in computers, the probability of TAI is 34%, which I would rate as overly optimistic without a strong case for it. But I'm just some random person who doesn't work in AI risk and hasn't thought about too much, so your guess on this is as good as mine (and likely better if you're the equivalent of Yegge or Gates and work in the area).
I'm sympathetic to both sides of this. 
On the one hand, I worry that the side concerned about AI safety acts like a machine that predictably surfaces and amplifies arguments in favor of its side, and predictably discounts arguments for the other side. 
On the other hand, I also see Luu's analysis as perhaps too harsh, e.g.:
* not giving partial credit for predictions that are missed by a few years, or that only happen in rich countries rather than worldwide
* considering predictions that contain a "may" as unfalsifiable (instead of, e.g., assigning them a probability of 50% and looking at the resulting Brier or log score; see the sketch after this list)
* evaluating two propositions connected by an "and" as one failed prediction, instead of one correct and one incorrect prediction
* evaluating predictions about the "twenty-first century" as having already failed
* generally being on the harsh side of things
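On the 50% suggestion: a maximally uncertain forecast earns a middling Brier score however the prediction resolves, rather than counting as an outright miss. A quick sketch:

```
// Brier score: (forecast - outcome)^2, where outcome is 1 if the event
// happened and 0 otherwise
brier(p, outcome) = (p - outcome)^2
[brier(0.5, 1), brier(0.5, 0)] // [0.25, 0.25] either way
```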
Overall, it seems like there is a garden of forking paths with regards to the more specific question of how accurate past futurists were, but also with regards to the more general question about the degree to which it is possible to make predictions about future events, particularly about transformative technologies. 
One way to navigate that garden of forking paths would be an [adversarial collaboration](https://en.wikipedia.org/wiki/Adversarial_collaboration) ([a](http://web.archive.org/web/20220725190412/https://en.wikipedia.org/wiki/Adversarial_collaboration)). Funding for this would probably be available, if not from Open Philanthropy itself then from [the FTX Future Fund](https://ftxfuturefund.org/) ([a](http://web.archive.org/web/20221011034322/https://ftxfuturefund.org/)), from [Nonlinear](https://www.super-linear.org/#list2) ([a](https://web.archive.org/web/20221012112602/https://www.super-linear.org/#list2)), or even from [myself](https://nitter.privacy.com.de/NunoSempere). I mention funding because I personally view cold hard cash as an honest signal that some work is truly perceived to be valuable. But one could also choose to carry out an adversarial collaboration pro bono, for the sake of curiosity, etc.
[Price Formation in Field Prediction Markets](https://arxiv.org/abs/2209.08778) is an arxiv preprint which discusses where the accuracy of prediction markets comes from. The two hypotheses it considers are:
1. from averaging the different pieces of information that each participant has
2. from traders who are able to individually do more research than everyone else, and to profit from this.
They have a method, which I'm not completely convinced by, to identify "price sensitive" traders, whom they equate with informed traders, and they use their dataset to conclude that hypothesis 2 is mostly what's going on. They use data from [Almanis](https://www.almanisprivate.com/) ([a](http://web.archive.org/web/20220202051215/https://www.almanisprivate.com/)), one of the smaller prediction market sites that still has some liquidity.
The paper has some interesting elements. And for all I know, it's better than 99% of the papers in its field. But I'm left with the impression that the topic of research is a bit of a bad fit for academic investigation, because one could get a better idea of the dynamics of prediction markets by listening to the [Star Spangled Gamblers](https://starspangledgamblers.com/) ([a](http://web.archive.org/web/20221001143818/https://starspangledgamblers.com/)) guys.
---
Note to the future: All links are added automatically to the Internet Archive, using this [tool](https://github.com/NunoSempere/longNowForMd) ([a](http://web.archive.org/web/20220711161908/https://github.com/NunoSempere/longNowForMd)). "(a)" for archived links was inspired by [Milan Griffes](https://www.flightfromperfection.com/) ([a](http://web.archive.org/web/20220814131834/https://www.flightfromperfection.com/)), [Andrew Zuckerman](https://www.andzuck.com/) ([a](http://web.archive.org/web/20220316214638/https://www.andzuck.com/)), and [Alexey Guzey](https://guzey.com/) ([a](http://web.archive.org/web/20220901135024/https://guzey.com/)).
---
> — What are you waiting for?
> — I don't know... Something amazing, I guess.
> — Me too, kid
[The Incredibles](https://en.wikipedia.org/wiki/The_Incredibles), 30'50''

@ -1,90 +0,0 @@
Legalize acetylcysteine: An open letter to the UK's MHRA
========================================================
## Part I: Demagoguery
This is the map of maximum Celtic expansion, circa 270 BC, per [Wikipedia](https://upload.wikimedia.org/wikipedia/commons/0/08/Celtic_expansion_in_Europe.svg):
![](https://upload.wikimedia.org/wikipedia/commons/0/08/Celtic_expansion_in_Europe.svg)
Since then, the Spaniards have further developed into Gazpacho-drinking siesta-sleepers and the Britons have developed into tea-drinking weather-contemplators[^1]. Still, my understanding is that population differences are to a great degree cultural, and that the basic plumbing remains pretty much the same.
Imagine, then, my surprise when, in the middle of being sick in the UK, I find out that an extremely common medicine used to treat colds in Spain throughout my childhood just wasn't commonly available in the UK. This medicine is [acetylcysteine](https://en.wikipedia.org/wiki/Acetylcysteine)—known in Spain under the brand name "Fluomicil". Its purpose is to decrease the thickness of the mucus so that it can be expelled and the patient can breathe better. In my experience, this is particularly crucial at night, because if the nose is blocked, you will breathe through the mouth, end up having a sore throat, and generally not sleep as well.
Instead of acetylcysteine, the UK uses other, less efficacious medicaments, such as nose sprays, which don't work as well through the night. They aren't as useful once the nose is already blocked. And they are more annoying to use, which means that people may forget them or use them less.
## Part II: Cost-effectiveness analysis
I work as a forecaster, not as a doctor or a medical researcher, so there are surely factors I'm missing. For instance, maybe living for two millennia under lousy weather has made the population of Britain more immune to having blocked noses, and this could mean that nose sprays are a better tradeoff than acetylcysteine. I really wouldn't know, though it would surprise me.
Still, as a forecaster I can offer the following estimation:
Per the [NHS inform website](https://www.nhsinform.scot/illnesses-and-conditions/infections-and-poisoning/common-cold#colds-in-children):
> Children get colds far more often than adults. While adults usually have two to four colds a year, children can catch as many as 8 to 12.
According to the [latest data from the Office for National Statistics](https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates/bulletins/annualmidyearpopulationestimates/mid202), the population pyramid of the UK looks as follows:
![](./population.png)
meaning that there are 67,081,234 people, of which 20.1% (13,468,262) are under 16. I also estimate that acetylcysteine makes an illness somewhere between 1% and 10% better.
Putting this together, I can [estimate the following](https://www.squiggle-language.com/playground#code=eNqFkV9rwjAUxb%2FKpbChnfPfnAPBR5%2FGXiZ7C4TYxBpIb7r0dlLE775E7aa1c2%2Fl9OR3TnJ2UbGx22WZZcJV0YxcqXoHaSE1WVcrGjVpYZafpU5To5bkNKbRLGI4GMCiIJ0JUrAqnVQIdg1SF0oUimFu89II0ha5XfOPV5jD9OXN687m1h30ZKONdP7cHIb98XAEHjke9kd3DMmSMFzI0hBPrJEFz5XjlRLOmztjIAuTLsRwlRJDZwSP0JLSram10AKeBPBo%2FAe5BfrDDChPuNH7AW7GM5Sl%2B8kL%2F8KLHfuEh5GiKuq08M23mja177xEDE3QeclD1m%2FTBmkAT9Nnhpfj6iwXCYVxRaKoMklVkNKowGLb8N7u7JfKFNKR3DgVxvZb%2B4v5qRmmQmPByfKV4htxeZlT2Rj%2BYZ4avysqHcJ96JIbUTHcMYQr3uxK6QVbo8isKXjTnmG0%2FwYbRy4m):
```
// Estimate burden of disease
population_of_UK = 67M
proportion_children = 0.201 // 20.1%
total_adult_colds_per_year = (2 to 4) * population_of_UK * (1 - proportion_children)
total_children_colds_per_year = (4 to 12) * population_of_UK * proportion_children
total_colds = total_adult_colds_per_year + total_children_colds_per_year
duration_of_cold = 6 to 12 // days
total_days_with_cold = total_colds * duration_of_cold
total_cold_years = total_days_with_cold / 365
// Estimate impact of acetylcysteine on burden of disease
improvement_with_acetylcysteine = 0.01 to 0.1
gains_to_be_had = total_cold_years * improvement_with_acetylcysteine
// Return & display
{
total_cold_years: total_cold_years,
gains_to_be_had: gains_to_be_had,
}
```
That is, I arrive at an estimate of 6M (1.7M to 9.2M) cumulative person-years per year spent having a cold in Britain:
![](./cold_years_per_year.png)
and a potential improvement from adopting acetylcysteine of 250,000 (53,000 to 640,000) "quality-adjusted-sickness-years"—an intuitive, ad-hoc unit that I just made up:
![](./gains-to-be-had.png)
The weakness of the method is that my subjective estimates of the 1% to 10% quality of life improvement might be off, or that my estimates of how often people are sick might be inaccurate—6M years of cold per year does seem a bit high. I'm also not really familiar with how potential alternatives, such as carbocisteine, are used in the UK. Still, I think that this rough calculation does show that having better medicaments is of great importance. And the Spanish doctors I've spoken with expressed shock and disbelief that acetylcysteine was not available in the UK.
But while an abstract argument may have been made, the action and follow-up remain. And it falls on the brave and hardworking souls at the [MHRA](https://www.gov.uk/government/organisations/medicines-and-healthcare-products-regulatory-agency) to send [a Message to Garcia](https://courses.csail.mit.edu/6.803/pdf/hubbard1899.pdf): Legalize acetylcysteine.
---
Sources
https://www.nhs.uk/medicines/carbocisteine/#:~:text=A%20mucolytic%20helps%20you%20cough,chronic%20obstructive%20pulmonary%20disease%20(COPD)
https://www.cochrane.org/CD003124/ARI_acetylcysteine-and-carbocysteine-to-treat-acute-upper-and-lower-respiratory-tract-infections-in-children-without-chronic-broncho-pulmonary-disease
https://www.medicines.org.uk/emc/product/2916/smpc
https://www.gov.uk/government/organisations/medicines-and-healthcare-products-regulatory-agency
https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates/bulletins/annualmidyearpopulationestimates/mid202
https://www.boots.com/sitesearch?searchTerm=mucolytics
https://www.drugs.com/drug-interactions/acetylcysteine-with-fenesin-dm-97-0-846-8166.html
https://www.amazon.co.uk/N-Acetyl-Cysteine-Nutritional-Supplements/b?ie=UTF8&node=5977697031
https://www.dbth.nhs.uk/wp-content/uploads/2017/10/Patient-Information-Leaflet-Acetylcysteine.pdf
https://bnf.nice.org.uk/drugs/acetylcysteine/
https://www.waymade.co.uk/contact-us/
https://www.medicines.org.uk/emc/product/11366/smpc
https://www.boots.com/boots-pharmaceuticals-effervescent-powder-10-sachets-10049868
https://www.waymade.co.uk/wp-content/uploads/2021/05/PIL-Acetylcysteine-200mg.pdf
https://mhraproducts4853.blob.core.windows.net/docs/1a9869bfeac300ad6cb531ff4434f1cf90243624
https://products.mhra.gov.uk/search/?search=acetylcysteine&page=1&ter=UK&rerouteType=0
[^1]: As in, "Isn't the weather nice today, darling?"

@ -1,95 +0,0 @@
Legalize acetylcysteine: An open letter to the UK's MHRA
========================================================
Executive summary: Acetylcysteine is a common medicine, used in Spain without prescription, that I believe is a better alternative to the medications used in the UK to relieve some symptoms of the flu. There is a legal framework for importing it from Europe, but it is onerous enough that I'm not going to personally do it, so the idea is there for the taking. A few companies have bothered to go through the process already, so it might make sense to partner with them. It might also be valuable to streamline the process of importing medicines into the UK from the EU, but this seems harder.
## Part I: Demagoguery
This is the map of maximum Celtic expansion, circa 270 BC, per [Wikipedia](https://upload.wikimedia.org/wikipedia/commons/0/08/Celtic_expansion_in_Europe.svg):
![](https://upload.wikimedia.org/wikipedia/commons/0/08/Celtic_expansion_in_Europe.svg)
Since then, the Spaniards have further developed into Gazpacho-drinking siesta-sleepers and the Britons have developed into tea-drinking weather-contemplators[^1]. Still, my understanding is that population differences are to a great degree cultural, and that the basic plumbing remains pretty much the same[^2].
Imagine, then, my surprise when, in the middle of being sick in the UK, I find out that an extremely common medicine used to treat colds in Spain throughout my childhood just wasn't commonly available in the UK. This medicine is [acetylcysteine](https://en.wikipedia.org/wiki/Acetylcysteine)—known in Spain under the brand name "Fluimicil"[^3]. Its purpose is to decrease the thickness of the mucus so that it can be expelled and the patient can breathe better. In my experience, this is particularly crucial at night, because if the nose is blocked, you will breathe through the mouth, end up having a sore throat, and generally not sleep as well.
Instead of acetylcysteine, the UK uses other, less efficacious medicaments, such as nose sprays, which don't work as well through the night. They aren't as useful once the nose is already blocked. And they are more annoying to use, which means that people may forget them completely—or just use them less. Brits also have access to [Carbocysteine](https://www.nhs.uk/medicines/carbocisteine/#:~:text=A%20mucolytic%20helps%20you%20cough,chronic%20obstructive%20pulmonary%20disease%20), though only with a prescription, and in practice it doesn't seem to be the standard of care.
## Part II: Cost-effectiveness analysis
I work as a forecaster, not as a doctor or a medical researcher, so there are surely factors I'm missing. For instance, maybe living for two millennia under lousy weather has made the population of Britain more immune to having blocked noses, and this could mean that nose sprays are a better tradeoff than acetylcysteine. I really wouldn't know, though it would surprise me.
Still, as a forecaster I can offer the following estimation:
Per the [NHS inform website](https://www.nhsinform.scot/illnesses-and-conditions/infections-and-poisoning/common-cold#colds-in-children):
> Children get colds far more often than adults. While adults usually have two to four colds a year, children can catch as many as 8 to 12.
According to the [latest data from the Office for National Statistics](https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates/timeseries/ukpop/pop), the population pyramid of the UK looks as follows:
![](./population.png)
meaning that there are 67,081,234 people, of which 20.1% (13,468,262) are under 16. I also estimate that acetylcysteine makes an illness somewhere between 1% and 10% better.
Putting this together, I can [estimate the following](https://www.squiggle-language.com/playground#code=eNqFkV9rwjAUxb%2FKpbChnfPfnAPBR5%2FGXiZ7C4TYxBpIb7r0dlLE775E7aa1c2%2Fl9OR3TnJ2UbGx22WZZcJV0YxcqXoHaSE1WVcrGjVpYZafpU5To5bkNKbRLGI4GMCiIJ0JUrAqnVQIdg1SF0oUimFu89II0ha5XfOPV5jD9OXN687m1h30ZKONdP7cHIb98XAEHjke9kd3DMmSMFzI0hBPrJEFz5XjlRLOmztjIAuTLsRwlRJDZwSP0JLSram10AKeBPBo%2FAe5BfrDDChPuNH7AW7GM5Sl%2B8kL%2F8KLHfuEh5GiKuq08M23mja177xEDE3QeclD1m%2FTBmkAT9Nnhpfj6iwXCYVxRaKoMklVkNKowGLb8N7u7JfKFNKR3DgVxvZb%2B4v5qRmmQmPByfKV4htxeZlT2Rj%2BYZ4avysqHcJ96JIbUTHcMYQr3uxK6QVbo8isKXjTnmG0%2FwYbRy4m):
```
// Estimate burden of disease
population_of_UK = 67M
proportion_children = 0.201 // 20.1%
total_adult_colds_per_year = (2 to 4) * population_of_UK * (1 - proportion_children)
total_children_colds_per_year = (4 to 12) * population_of_UK * proportion_children // note: wider range than the NHS's "8 to 12"
total_colds = total_adult_colds_per_year + total_children_colds_per_year
duration_of_cold = 6 to 12 // days
total_days_with_cold = total_colds * duration_of_cold
total_cold_years = total_days_with_cold / 365
// Estimate impact of acetylcysteine on burden of disease
improvement_with_acetylcysteine = 0.01 to 0.1
gains_to_be_had = total_cold_years * improvement_with_acetylcysteine
// Return & display
{
total_cold_years: total_cold_years,
gains_to_be_had: gains_to_be_had,
}
```
That is, I arrive at an estimate of 6M (1.7M to 9.2M) cumulative person-years per year spent having a cold in Britain:
![](./cold_years_per_year.png)
and a potential improvement from adopting acetylcysteine of 250,000 (53,000 to 640,000) "quality-adjusted-sickness-years"—an intuitive, ad-hoc unit that I just made up:
![](./gains-to-be-had.png)
The weakness of the method is that my subjective estimate of the 1% to 10% quality-of-life improvement might be off, or that my estimates of how often people are sick might be inaccurate—6M years of cold per year does seem a bit high. I'm also not really familiar with how potential alternatives, such as carbocisteine, are used in the UK. Still, I think that this rough calculation does show that having better medications is of great importance. And the Spanish doctors I've spoken to expressed shock and disbelief that acetylcysteine was not available in the UK.
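As a quick sanity check on that 6M figure, here is a point-estimate version of the model above, using rough midpoint parameters (my own back-of-the-envelope, not part of the linked Squiggle model):

```
// Point-estimate sanity check with midpoint parameters
colds_per_year = 67M * (0.799 * 3 + 0.201 * 8) // ~268M colds per year
person_years_sick = colds_per_year * 9 / 365 // assuming 9-day colds: ~6.6M person-years
person_years_sick
```

This roughly reproduces the headline figure, so if it is too high, the culprit is the inputs—cold frequency or duration—rather than the arithmetic.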
One particular way my estimate could be wrong is if patients are taking carbocisteine instead of acetylcysteine, and if the two medications closely resemble each other. If that is the case, the gains estimated above might be much smaller. Still, they would point to the broader point that really nailing the standard of care for colds and flu is likely to be very valuable.
But while an abstract argument may have been made, the action and follow-up remain. And it falls on the brave and hardworking souls at the [MHRA](https://www.gov.uk/government/organisations/medicines-and-healthcare-products-regulatory-agency) to send [a Message to Garcia](https://courses.csail.mit.edu/6.803/pdf/hubbard1899.pdf): Legalize acetylcysteine.
## Part III: The invisible hand defeated
But in fact, acetylcysteine is already legal in the UK. Well, pseudo-legal. Quasi-legal. Legal in name, but not legal enough for the invisible hand of the market to do its work.
By this I mean that you could in theory sell acetylcysteine if you have a number of licenses which look very annoying to get. Per the MHRA's website:
> If you want to parallel import a product you must make sure that:
> - the product is manufactured to [good manufacturing practice (GMP) standards](https://www.gov.uk/guidance/good-manufacturing-practice-and-good-distribution-practice)
> - you hold a [wholesale dealers licence](https://www.gov.uk/guidance/apply-for-manufacturer-or-wholesaler-of-medicines-licences) covering importing, storage and sale for each product
> - you hold the correct parallel import licence
>
> To assemble and repackage the product you will also need to have an [manufacturers licence](https://www.gov.uk/guidance/apply-for-manufacturer-or-wholesaler-of-medicines-licences) covering product assembly.
You know what this prevents me from doing? It prevents me from buying 1,000 packages of acetylcysteine, selling them to friends on the side, and then relying on word of mouth. I would have been the invisible hand of the market, if only I hadn't been stymied by government regulations.
In fact, the regulations aren't so bad. It seems conceivable that I could figure these requirements out over a summer. Though I'm probably not going to, so this idea is free for the taking. In practice, there are already [a few companies](https://products.mhra.gov.uk/search/?search=acetylcysteine&page=1&ter=UK&rerouteType=0) that have gone to the trouble, like [Waymade](https://www.waymade.co.uk/), so it might make more sense to partner with them.
And yet, the situation remains suboptimal. Ideally, the regulatory framework of the UK would be such that importing medicines from the EU would be painless. But that would be a much larger project.
[^1]: As in, "Isn't the weather nice today, darling?"
[^2]: My understanding is that in general there are *some* differences in the efficacy of medical treatments across ethnic groups. I previously knew that [lactose intolerance](https://en.wikipedia.org/wiki/Lactose_intolerance#Frequency) is more common among people of East Asian descent. And some brief Googling leads me to a few papers on the topic ([1](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2594139/), [2](https://www.degruyter.com/document/doi/10.1515/DMDI.1995.12.2.77/html), [3](https://www.tandfonline.com/doi/abs/10.1517/17425255.2011.585969)). So it's conceivable that this consideration is part of what is going on. But it would surprise me.
[^3]: Also Normofludil, etc.


@ -0,0 +1,23 @@
Sometimes you give to the commons, and sometimes you take from the commons
==========================================================================
Sometimes you give to the commons, and sometimes you take from the commons. And through this giving and taking, people are able to smooth consumption. This is good because getting more resources from the commons when you temporarily have fewer of them helps you more than giving resources away when you temporarily have more of them hurts you. <figure> <img src="https://i.imgur.com/eiwMyEI.jpg"><br><figcaption> Engraving depicting the curse of [Tantalus](https://en.wikipedia.org/wiki/Tantalus) </figcaption> </figure>
Anyways, a phenomenon I've noticed is that sometimes, you can only give to the commons, but you can't take from the commons. This is dysfunctional, and defeats the whole purpose of the commons.
Some examples, vaguely based on real life:
- You generally have thoughtful opinions, but sometimes you make mistakes. Your aggregate effect is to make a group's models of the world better. One day you have an opinion that is wrong, and people pile on against you, without remembering previous times that you added information to the shared pool.
- You generally give emotional support to people. But when you need emotional support, people don't give it to you.
- You are glad to help people with your time, but when you need other people to lend you their time, they don't.
- There is a shared pool of resources that status-poor people are expected to fill, and that high-status people are welcome to partake of.
- Taking from the commons is socially punished, such that people *can't even think* of the idea of taking from the commons as an option that they have.
Overall, there might be reasons for these kinds of dynamics. For example:
- Maybe there are types of people who would predictably take too much from the commons, and a group prevents those kinds of people from taking any part of the commons, as a preventative measure. Maybe people can smell the desperation.
- Or maybe there was a veil-of-ignorance type of deal going on, where some people only give to the commons, but would have received if they had had worse luck.
- Or maybe there is a totally reasonable period between where one starts giving to the commons and when one can start taking from it, to disallow free-riders.
But in practice, I think that the reasonable explanations aren't what's going on. And instead there are really weird effects where "for he that hath, to him shall be given: and he that hath not, from him shall be taken even that which he hath". So now, when I see this kind of dynamic around a supposed commons, I tend to run. And after seeing this kind of dynamic happening a few times, I've become more sympathetic about a cluster of ideas around self-sufficiency and libertarianism.

@ -0,0 +1,144 @@
Brief evaluations of top-10 billionaires
==============
As part of my work with the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/), I am experimenting with speculative evaluations that could potentially be scalable. Billionaires were an interesting evaluation target because there are a fair number of them, and at least some are nominally aiming to do good.
For now, for each top 10 billionaire, I have tried to get an idea of:
1. How much value have they created through their business activities?
2. How much impact have they created through their philanthropic activities?
I then assigned a subjective score based on my understanding of the answers to the above questions.
### Elon Musk (B)
Elon Musk changes the world through:
* His businesses: Tesla, SpaceX, The Boring Company and Neuralink. Tesla makes better cars, SpaceX advances interplanetary expansion.
* His cultural influence: Twitter shitposting, conceiving and pushing for brighter futures, etc.
* His philanthropy: Little of it is publicly known so far. He will probably end up buying Twitter, partially with the intention of making it a better public good. OpenAI, which he helped found, might end up having greatly positive or greatly negative impact.
Overall, Musk seems to have produced large amounts of value, and might produce even more through SpaceX. But he also seems oddly nonstrategic at times.
### Bernard Arnault (D-)
> I see myself as an ambassador of French heritage and French culture. What we create is emblematic. It's linked to Versailles, to Marie Antoinette. -- [Bernard Arnault, as quoted in Forbes](https://www.forbes.com/profile/bernard-arnault/?sh=2d3799e966fa)
His business produces little counterfactual value, and instead serves as a vehicle for conspicuous consumption. That is, if his luxury brands didn't exist, his customers would simply buy from the others, and the world would look extremely similar.
His company describes its philanthropy as ["an ideal expression of financial success"](https://www.lvmh.com/group/lvmh-commitments/art-culture/lvmh-corporate-philanthropy/), and supports art installations or students attending concerts. Thus his philanthropy seems non-strategic, aimed at the display of wealth rather than at the firm pursuit of improving and fortifying French culture. He also donated $200M to restore Notre Dame, which probably saved French taxpayers a similar amount.
### Jeff Bezos (B)
The market value of Amazon is circa $1T, meaning that it has managed to capture at least that much value, and it has likely produced much more consumer surplus. His other ventures, like Blue Origin, are as of yet nowhere near as valuable.
### Gautam Adani (B?)
Adani seems to be a skilled manager, administrator, and deal-maker who, working in a developing country, has unlocked heaps of value.
He has ties to Narendra Modi, and their fortunes have risen together. From the outside, it's hard to say to what extent he has [created his wealth](https://nairametrics.com/2022/05/11/how-gautam-adani-went-from-being-a-school-dropout-to-becoming-the-richest-man-in-india/):
> The governor of Gujarat announced managerial outsourcing of the Mundra Port in 1994, and Adani got the contract in 1995, after which he set up the first jetty, which was originally run by Mundra Port & Special Economic Zone but was later transferred to Adani Ports and SEZ (APSEZ).
> Adani decided to turn it into a commercial port, building rail and road links to it by individually negotiating with over 500 landowners across India to create the largest port in India.
or [simply aquired it](https://asiatimes.com/2022/05/gautam-adani-master-of-the-art-of-modern-monopolies/):
> Mr Adani's friendship with Prime Minister Narendra Modi is time-tested. Their friendship goes back to 2003, when none of the country's leading businessmen publicly stood by Modi's side because of the handling of the Gujarat riots. But Adani broke ranks with the old business elite, potentially risking his future. And this gamble paid off.
> Gautam Adani is today one of the most visible tycoons in the country, whose prominence has accelerated in the years since Narendra Modi was elected prime minister in 2014. Since Modi came into office, Adani's net worth has increased 17.5 times in less than eight years, from $7 billion to $125 billion.
### Bill Gates (A)
It's unclear whether Microsoft itself has had a positive or negative impact on the world over what would have counterfactually happened (e.g., Apple and Linux would be more popular). However, Gates' impact-focused philanthropy has helped millions. He moreover started the [Giving Pledge](https://givingpledge.org/), which probably multiplied his impact. Too bad that he couldn't prevent the covid pandemic.
### Warren Buffett (A)
Berkshire Hathaway is probably a force for good in the capitalist ecosystem. Buffett has also contributed $32+ billion to the [Gates Foundation](https://www.gatesfoundation.org/about/leadership/warren-buffett). In addition, he co-created the Giving Pledge, which probably multiplied his impact.
### Larry Ellison (C?)
Ellison has made his wealth by selling [universally-reviled database software](https://libreddit.foss.wtf/r/business/comments/di5j2/im_always_surprised_to_see_the_oracle_chieflarry/) and other products that work at Fortune 500 and government scale.
He has signed the [Giving Pledge](https://givingpledge.org/pledger?pledgerId=192), though his giving may have been [erratic at times](https://www.vox.com/recode/2020/9/2/21409530/larry-ellison-foundation-disband-london-philanthropy-coronavirus). It's also possible that his closeness to Trump at times improved the quality of Trump's decision-making while in office.
### Mukesh Ambani (B?)
His wealth originally came from a vertically integrated commodity business, but has since expanded. Although skilled at navigating government bureaucracies, he also ate his own brother alive in the competitive communications business, providing millions of Indian consumers with cheaper internet access. Overall most of his impact is going to come from his contribution to Indian economic growth, and that contribution is probably highly positive.
### Larry Page (B)
By making a better search engine and providing other Google products for free to millions, he has provided heaps of value. However, in recent times, he has disengaged from Google, and Google has abandoned its "don't be evil" motto. His philanthropy, while [large](https://www.vox.com/recode/2019/12/18/21010108/larry-page-philanthropy-foundation-donor-advised-fund-christmas), is somewhat secretive.
### Sergey Brin (B-)
Like Larry Page, by making a better search engine and providing other Google products for free to millions, he has provided heaps of value. However, in recent times, he has disengaged from Google, and Google has abandoned its "don't be evil" motto. He has donated at least [$1.4 billion](https://www.influencewatch.org/non-profit/sergey-brin-family-foundation/) to his family foundation, and seems to donate to left-of-center causes.
## Reflections
### Comparisons with other alternatives
From some brief Googling, two other rankings are the [Forbes 400](https://www.forbes.com/forbes-400), which assigns a philanthropy score to America's 400 richest people, and the [philanthropy 50](https://www.philanthropy.com/article/the-philanthropy-50/#id=browse_2021), which is paywalled.
**Forbes' Philanthropy score**
The methodology for the Forbes 400 philanthropy score can be seen [here](https://www.forbes.com/sites/rachelsandler/2022/09/27/the-forbes-philanthropy-score-2022-how-charitable-are-the-richest-americans/?sh=587daeea0980). In short, Forbes does some [intensive investigative work](https://www.forbes.com/sites/chasewithorn/2022/09/27/2022-forbes-400-methodology-how-we-crunch-the-numbers/?sh=1f88cfe5d0eb) to determine what billionaires' wealth actually _is_. Then,
> To see how philanthropic the ultrawealthy are, Forbes dug into their known charitable giving and assigned a philanthropy score, ranging from 1 to 5, to each member of The Forbes 400. If we couldn't find any information about a person's giving and they declined to provide details, they received a score of N/A.
> To calculate the scores, we added the value of each person's total out-the-door lifetime giving to their 2022 Forbes 400 net worth, then divided their lifetime giving by that number. Each score corresponds to a range of giving as a percentage of a person's net worth. We once again counted only out-the-door giving, rather than cash sitting in billionaires' private foundations or tax-advantaged donor-advised funds that have not yet made it to those in need. We reached out to every list member for feedback.
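In code, the core of that calculation is just a ratio (a minimal sketch in Squiggle; the input values below are made-up examples, since Forbes only describes the method qualitatively):

```
// Forbes-style philanthropy ratio (sketch; inputs are made-up examples)
lifetime_giving = 10B // out-the-door lifetime giving
net_worth = 100B // 2022 Forbes 400 net worth
ratio = lifetime_giving / (net_worth + lifetime_giving) // ~0.09, i.e. ~9%
ratio
```

The 1-to-5 score then corresponds to which range of giving-as-a-percentage that ratio falls into.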
This is already fairly sophisticated. If I had to suggest one improvement, it would be to incorporate whether billionaires have signed the [Giving Pledge](https://givingpledge.org/).
Personally, I would also:
* Score individuals on the _amount_ of money donated, rather than on the _percentage_
* Accommodate [patient philanthropy](https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/) (see also [1](https://docs.google.com/document/d/1NcfTgZsqT9k30ngeQbappYyn-UO4vltjkm64n4or5r4/edit), [2](https://globalprioritiesinstitute.org/wp-content/uploads/Trammell-Dynamic-Public-Good-Provision-under-Time-Preference-Heterogeneity.pdf)), and not look only at money out the door.
### Possible further work
If I had access to a legion of researchers, I would try to move first towards a legible rubric and then to a quantified impact estimate.
**An initial rubric**
An initial rubric might incorporate:
Some subjective estimate of how much value the individual has created through business
* Are the business activities more like value creation or more like resource extraction?
* How much value has the individual created?
Some mechanistic estimate of how much value the individual will create through philanthropy
* How much money will the individual end up donating?
* How much has the individual donated so far?
* Has the individual joined the [Giving Pledge](https://givingpledge.org/)?
* Are the individual's donations done with some reference to impact?
* This would require some finesse in order to incorporate different philosophical stances. But there is certainly a substantial difference between Bill Gates' and Bernard Arnault's giving.
Possibly, some estimate of additional sources of impact, like cultural influence or using a position of prominence to positively impact the world.
Crucially, the above categories could compensate for one another. For instance, a skilled administrator and industrialist like Mukesh Ambani is already creating heaps of value through business in India, and he probably creates more value by deploying his capital through business than he would through philanthropy. So an individual could get top marks by being excellent in any one domain.
**A quantified estimate**
Eventually, a quantified estimate might move beyond being a rubric and directly attempt to estimate each part of an individual's impact, and then put them all together in a common linear unit.
For example, in the case of Elon Musk, I would estimate how valuable each of his ventures is, either in an impact unit like [Open Philanthropy dollars](https://www.openphilanthropy.org/research/update-on-our-planned-allocation-to-givewells-recommended-charities-in-2022/#f+9715+1+6)—the value of $1 given to someone earning $50k a year—or in terms of [relative values](https://forum.effectivealtruism.org/posts/9hQFfmbEiAoodstDA/simple-comparison-polling-to-create-utility-functions)—where you compare how much each element is worth relative to the other elements, so you don't need a unit, or can easily construct one once you've done that.
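As a sketch of what the aggregation step might look like in Squiggle (every distribution below is a hypothetical placeholder, not an actual estimate of any venture's value):

```
// Hypothetical sketch: summing venture values in one linear unit,
// e.g. Open Philanthropy dollars; every input is a placeholder
value_tesla = 10B to 100B // placeholder, not an actual estimate
value_spacex = 10B to 500B // placeholder
value_philanthropy = 1B to 50B // placeholder
total_value = value_tesla + value_spacex + value_philanthropy
total_value
```

The hard part is, of course, producing defensible inputs, not adding them up.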
### Things I personally struggled with
Some billionaires were harder to estimate than others. I particularly struggled with Gautam Adani and Mukesh Ambani. I'm probably lacking a whole lot of context there. Thanks to Chinmay Ingalavi for giving me some context.
I am also uncertain about Larry Ellison. [Here](https://teddit.nunosempere.com/r/linux/comments/2e2c1o/what_do_we_hate_oracle_for/) is a thread on shady Oracle corporate practices. But [here](https://givingpledge.org/pledger?pledgerId=192) is Ellison's Giving Pledge letter. I'm unclear on how to square the two.
The whole exercise took longer than I was expecting.
I'm also unclear on whether to use gossip and private information, and ended up not doing so.
I was also unclear on which philosophical assumptions to use. For instance,
* I'm partial to [Patient Philanthropy](https://docs.google.com/document/d/1NcfTgZsqT9k30ngeQbappYyn-UO4vltjkm64n4or5r4/edit)
* I think it's plausible that most of a billionaire's impact could come from business rather than from philanthropy.
* I think that Amazon's [union busting](https://www.commondreams.org/news/2022/10/18/following-brutal-union-busting-campaign-albany-amazon-workers-reject-unionization) is an evil practice, but not nearly enough to move the needle on my overall evaluation of Amazon as having produced very large heaps of value.
* I didn't incorporate Mackenzie Bezos' giving into Jeffrey Bezos' estimate, although one could argue that he created a big chunk of that wealth.

@ -0,0 +1,79 @@
Are flimsy evaluations worth it?
================================
I recently received a bit of grief over a [brief evaluation of the impact of the top-10 billionaires](https://nunosempere.com/blog/2022/10/21/brief-evaluations-of-top-10-billionnaires/). It seems possible that this topic is worth discussing. In what follows, I outline a few non-exhaustive considerations, as well as a few questions of interest.
<figure><img src="https://imgs.xkcd.com/comics/duty_calls.png" class="img-frontpage-center"><br><figcaption>"Duty Calls"
, by <a href="https://xkcd.com/386/">xkcd</a></figcaption></figure>
### Value of flimsy evaluations
Right now, I see the value of flimsy evaluations or estimations as coming from:
#### 1. Value of experimentation
There are many things we don't have estimates or evaluations for. Trying different evaluation methods and topics can be informative about which are more valuable. Individual flimsy evaluations can serve as a proof of concept that can be built upon if the preliminary version appears valuable, and as testing grounds for new evaluation methodologies.
#### 2. Flimsy evaluations considered better than no evaluation
When estimating a probability or a quantity, sometimes a quick BOTEC (back of the envelope calculation) or a Fermi estimate might be worth having despite its imprecision, because there isn't time or it isn't worth the effort to conjure a more complex estimate.
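For instance, here is the kind of ten-second BOTEC I have in mind, in Squiggle (all inputs are made up for illustration):

```
// Ten-second BOTEC: reader-hours spent on a popular forum post
// (all inputs are made-up illustrative ranges)
readers = 300 to 3k
minutes_per_reader = 2 to 15
reader_hours = readers * minutes_per_reader / 60
reader_hours
```

It would be easy to make this more precise, but often not worth the effort.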
For evaluations, oftentimes the tradeoff isn't between a flimsy evaluation and a more accurate in-depth evaluation, but rather between a flimsy evaluation and no evaluation at all.
In particular, I don't think that the case of a ranking of billionaires was that important. But the case of evaluations of EA organizations is. For example, a longstanding [annual evaluation of AI safety organizations](https://forum.effectivealtruism.org/posts/qdKhLcJmGQuYmzBoz/larks-s-shortform?commentId=e4h2yjCrK9kncfGTf) by Larks is not happening partly because it would be too expensive to produce. But in that case we are getting no evaluation rather than a flimsier evaluation.
#### 3. Less sure: The world being complicated enough that epistemics is for now a community effort
I consider myself a reasonably knowledgeable individual, but I still regularly read things in the EA Forum and elsewhere that surprise me. Similarly, when forecasting, one usually gets a better result when combining different individual perspectives.
Adjacently, [Cunningham's law](https://meta.wikimedia.org/wiki/Cunningham%27s_Law) states that:
> the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer
So it doesn't seem crazy that, for a given number of hours of research, a better answer can be found by posting a flimsy evaluation and relying on commenters to point out flaws that would have been hard for the author to identify on their own.
This feels true, but too adversarial for my taste. If I were relying on this, I would explicitly signpost it.
### Disvalue of flimsy evaluations
#### 1. Reduced epistemics
In a previous post, a commenter mentioned:
> I think posting this was probably net negative EV but it was really funny
> ...
> Your methodology looks pretty flimsy but it looks like other EAs are taking it seriously
> ...
> I think the harm from posting things with flimsy methodology and get a lot of upvotes/uncritical comments is something like "lower epistemic rigour on the forum in general", rather than this article in particular causing a great deal of harm. I think the impact of this article whether positive or negative is likely to be small.
It's possible that factors such as these could be present for flimsy evaluations.
#### 2. People and organizations are really touchy about evaluations
People and organizations tend to get a bit angsty when being evaluated. I think this is a real cost. I also think that generally, it's a cost worth paying for communities to have better models of the world. But for very flimsy evaluations, it's very possible that the cost is just not worth paying.
#### 3. Evaluations having some chance of error
Evaluations have some rate of error that rises the flimsier they are. It's possible that negative errors are fairly harmful, e.g., by reducing an organization's ability to fundraise through no fault of their own.
### Discussion
Some questions:
1. In which context are flimsy evaluations worth it?
2. How should one signal that an evaluation could be flimsy?
3. Is there inflation of words going on? Open Philanthropy uses "shallow evaluations" for documents that can be fairly comprehensive.
4. What is the expected error rate beyond which it's not worth publishing a flimsy evaluation? 1 in 20 seems too low, 1 in 2 too high.
### Personal thoughts
Perhaps one conclusion could be that flimsy evaluations are valuable if they clearly signal how much research has gone into them, and give an accurate impression of how flimsy they are.
One possible way of doing this would be to include a prediction about the expected error rate. For instance, one could have a prediction like: "I expect that there is a 5% chance of an egregious error that switches the main conclusion, and 1 to 4 minor errors that flip secondary considerations."
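Such a prediction could even be turned into a quick expected value of publishing (a Squiggle sketch; the benefit and harm magnitudes below are hypothetical placeholders, only the 5% comes from the example above):

```
// Sketch: expected value of publishing a flimsy evaluation
// (benefit and harm magnitudes are hypothetical placeholders)
p_egregious_error = 0.05
benefit_if_fine = 1 // arbitrary units
harm_if_egregious = 2 to 10 // placeholder: an egregious error costs 2 to 10 units
ev = (1 - p_egregious_error) * benefit_if_fine - p_egregious_error * harm_if_egregious
ev
```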
---
<section id="isso-thread">
<noscript>Javascript needs to be activated to view comments.</noscript>
</section>

@ -0,0 +1,815 @@
{
"type": "excalidraw",
"version": 2,
"source": "https://excalidraw.com",
"elements": [
{
"id": "Pc5feaq_JnEdvwFbg2zUJ",
"type": "text",
"x": 1389.8411874421743,
"y": 617.3498328855867,
"width": 382,
"height": 86,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 352403785,
"version": 40,
"versionNonce": 439047561,
"isDeleted": false,
"boundElements": [
{
"id": "q2LQ1_8aE0Xd3iDkkgGhk",
"type": "arrow"
},
{
"id": "mQ4gpJdiY1Rz3DF4LAAMB",
"type": "arrow"
}
],
"updated": 1667216595390,
"link": null,
"locked": false,
"text": "Producing value\nthrough estimation",
"fontSize": 36,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 78,
"containerId": null,
"originalText": "Producing value\nthrough estimation"
},
{
"id": "q2LQ1_8aE0Xd3iDkkgGhk",
"type": "arrow",
"x": 1783.8411874421743,
"y": 658.6307474855544,
"width": 151,
"height": 147.88865624811058,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "round",
"seed": 361072617,
"version": 66,
"versionNonce": 1900487849,
"isDeleted": false,
"boundElements": null,
"updated": 1667216801151,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
151,
-147.88865624811058
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "XzAQTmjDXOWXvZnxZbbkQ",
"gap": 3,
"focus": 0.760368781285116
},
"endBinding": {
"elementId": "zqG3dshmWa6mCYUQCz1fE",
"gap": 5.000000000000114,
"focus": 0.6780509456760299
},
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"id": "Z0004-gPnOVQrkPks2pwC",
"type": "text",
"x": 1954.8411874421743,
"y": 491.3498328855867,
"width": 107,
"height": 43,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 520412489,
"version": 8,
"versionNonce": 708821191,
"isDeleted": false,
"boundElements": [
{
"id": "mQ4gpJdiY1Rz3DF4LAAMB",
"type": "arrow"
}
],
"updated": 1667216590598,
"link": null,
"locked": false,
"text": "Value",
"fontSize": 36,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 35,
"containerId": null,
"originalText": "Value"
},
{
"id": "mQ4gpJdiY1Rz3DF4LAAMB",
"type": "arrow",
"x": 1780.944767092466,
"y": 661.8498328855869,
"width": 235.18182064649818,
"height": 146.9999999999999,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "round",
"seed": 207575335,
"version": 378,
"versionNonce": 1645480743,
"isDeleted": false,
"boundElements": null,
"updated": 1667216784484,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
235.18182064649818,
146.9999999999999
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "XzAQTmjDXOWXvZnxZbbkQ",
"focus": -0.7013390579674785,
"gap": 1
},
"endBinding": {
"elementId": "zwoIstgdfgu1MdzN_RD26",
"focus": 0.38897657551470216,
"gap": 3.9999999999998863
},
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"id": "1cljcwLQack7UNQHhFO8b",
"type": "text",
"x": 1913.841187442174,
"y": 823.3498328855867,
"width": 233,
"height": 86,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1227036745,
"version": 94,
"versionNonce": 1121040583,
"isDeleted": false,
"boundElements": null,
"updated": 1667216699887,
"link": null,
"locked": false,
"text": "Estimation \ncapacity",
"fontSize": 36,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 78,
"containerId": null,
"originalText": "Estimation \ncapacity"
},
{
"id": "XzAQTmjDXOWXvZnxZbbkQ",
"type": "rectangle",
"x": 1362.8411874421743,
"y": 604.8498328855867,
"width": 418,
"height": 120,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1749063463,
"version": 51,
"versionNonce": 1326367369,
"isDeleted": false,
"boundElements": [
{
"id": "q2LQ1_8aE0Xd3iDkkgGhk",
"type": "arrow"
},
{
"id": "mQ4gpJdiY1Rz3DF4LAAMB",
"type": "arrow"
},
{
"id": "oDXmWG9mbfTEZiRh-D4tO",
"type": "arrow"
}
],
"updated": 1667216752128,
"link": null,
"locked": false
},
{
"id": "zqG3dshmWa6mCYUQCz1fE",
"type": "rectangle",
"x": 1939.8411874421743,
"y": 467.8498328855867,
"width": 142.00000000000023,
"height": 95,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 846291335,
"version": 80,
"versionNonce": 1824087943,
"isDeleted": false,
"boundElements": [
{
"id": "q2LQ1_8aE0Xd3iDkkgGhk",
"type": "arrow"
},
{
"id": "z6mLCcbd1CkCmAAuoKTku",
"type": "arrow"
}
],
"updated": 1667216663757,
"link": null,
"locked": false
},
{
"id": "zwoIstgdfgu1MdzN_RD26",
"type": "rectangle",
"x": 1891.841187442174,
"y": 812.8498328855867,
"width": 267.0000000000002,
"height": 112,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1474506247,
"version": 105,
"versionNonce": 1140472231,
"isDeleted": false,
"boundElements": [
{
"id": "mQ4gpJdiY1Rz3DF4LAAMB",
"type": "arrow"
},
{
"id": "GwlYsgDcNAOp9yIXsHVIv",
"type": "arrow"
},
{
"id": "oDXmWG9mbfTEZiRh-D4tO",
"type": "arrow"
}
],
"updated": 1667216747064,
"link": null,
"locked": false
},
{
"id": "z6mLCcbd1CkCmAAuoKTku",
"type": "arrow",
"x": 2084.6675088056236,
"y": 517.5253239801759,
"width": 131,
"height": 2,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "round",
"seed": 94916809,
"version": 23,
"versionNonce": 706328649,
"isDeleted": false,
"boundElements": null,
"updated": 1667216837581,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
131,
-2
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "zqG3dshmWa6mCYUQCz1fE",
"focus": 0.06797737074681244,
"gap": 2.826321363448983
},
"endBinding": {
"elementId": "hJvbFbdaz9bOkf3rYsz1Y",
"focus": 0.5178941227067932,
"gap": 15
},
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"id": "_e_Q9y32aOP4PvMBJFHig",
"type": "rectangle",
"x": 2211.6675088056236,
"y": 442.56852695751786,
"width": 705.0000000000001,
"height": 243.16517692560052,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1572044905,
"version": 135,
"versionNonce": 1575721319,
"isDeleted": false,
"boundElements": null,
"updated": 1667216847889,
"link": null,
"locked": false
},
{
"id": "hJvbFbdaz9bOkf3rYsz1Y",
"type": "text",
"x": 2230.6675088056236,
"y": 460.982121002834,
"width": 676,
"height": 215,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1285379433,
"version": 178,
"versionNonce": 1739820231,
"isDeleted": false,
"boundElements": [
{
"id": "GwlYsgDcNAOp9yIXsHVIv",
"type": "arrow"
},
{
"id": "z6mLCcbd1CkCmAAuoKTku",
"type": "arrow"
}
],
"updated": 1667216837581,
"link": null,
"locked": false,
"text": "Lessons learnt,\nspecific things to point to,\nvarious positive feedback loops,\ncourse correction opportunities,\npersonal motivation",
"fontSize": 36,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 207,
"containerId": null,
"originalText": "Lessons learnt,\nspecific things to point to,\nvarious positive feedback loops,\ncourse correction opportunities,\npersonal motivation"
},
{
"id": "GwlYsgDcNAOp9yIXsHVIv",
"type": "arrow",
"x": 2529.5266616543026,
"y": 697.9634294630696,
"width": 367.66068607087755,
"height": 169.04320297734193,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "round",
"seed": 589945927,
"version": 191,
"versionNonce": 559896999,
"isDeleted": false,
"boundElements": null,
"updated": 1667216837331,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
-36.160686070877546,
155.04320297734193
],
[
-367.66068607087755,
169.04320297734193
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "hJvbFbdaz9bOkf3rYsz1Y",
"focus": 0.024628586787011717,
"gap": 21.981308460235596
},
"endBinding": {
"elementId": "zwoIstgdfgu1MdzN_RD26",
"focus": 0.06363853014251168,
"gap": 3.0247881412506104
},
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"type": "arrow",
"version": 473,
"versionNonce": 696857639,
"isDeleted": false,
"id": "oDXmWG9mbfTEZiRh-D4tO",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 0,
"x": 2020.364448940089,
"y": 929.7822314192581,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 503,
"height": 331.04892385152516,
"seed": 1651043431,
"groupIds": [],
"strokeSharpness": "round",
"boundElements": [],
"updated": 1667216790125,
"link": null,
"locked": false,
"startBinding": {
"elementId": "zwoIstgdfgu1MdzN_RD26",
"focus": -0.2265230629828577,
"gap": 4.932398533671403
},
"endBinding": {
"elementId": "XzAQTmjDXOWXvZnxZbbkQ",
"focus": 0.3100069262560157,
"gap": 10.932398533671403
},
"lastCommittedPoint": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"points": [
[
0,
0
],
[
-100.03770202400597,
137.04892385152516
],
[
-333.5,
132
],
[
-467.0511133121588,
-11.612255475320922
],
[
-503,
-194
]
]
},
{
"id": "KjFY0EsVB4gtcK0TyPHCT",
"type": "rectangle",
"x": 1595.390746034829,
"y": 1727.3114052242645,
"width": 287.4154563432544,
"height": 177.2147114240788,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 806107207,
"version": 88,
"versionNonce": 1350197799,
"isDeleted": false,
"boundElements": [
{
"id": "MLoNn8uvBd1hvXQiMOV9t",
"type": "arrow"
},
{
"id": "sEwrU6XV6gh159-hWjLUH",
"type": "arrow"
}
],
"updated": 1667216974949,
"link": null,
"locked": false
},
{
"id": "y3GLDl9E97u24BkxLZAjX",
"type": "text",
"x": 1619.2179341254619,
"y": 1775.3361724735037,
"width": 234,
"height": 86,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1637206473,
"version": 24,
"versionNonce": 1942015399,
"isDeleted": false,
"boundElements": null,
"updated": 1667216922662,
"link": null,
"locked": false,
"text": "Estimation\nexperiments",
"fontSize": 36,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 78,
"containerId": null,
"originalText": "Estimation\nexperiments"
},
{
"type": "rectangle",
"version": 134,
"versionNonce": 1906481481,
"isDeleted": false,
"id": "ZnMeue0q7LLKTiWs1YIlQ",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 0,
"x": 2063.7439119413248,
"y": 1717.6316100624451,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 287.4154563432544,
"height": 177.2147114240788,
"seed": 1079521065,
"groupIds": [],
"strokeSharpness": "sharp",
"boundElements": [
{
"id": "sEwrU6XV6gh159-hWjLUH",
"type": "arrow"
}
],
"updated": 1667216978964,
"link": null,
"locked": false
},
{
"type": "text",
"version": 101,
"versionNonce": 2066463719,
"isDeleted": false,
"id": "jnt4vZY0UXwHVBUivo-vX",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 0,
"x": 2087.571100031957,
"y": 1765.6563773116839,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 212,
"height": 86,
"seed": 870609895,
"groupIds": [],
"strokeSharpness": "sharp",
"boundElements": [],
"updated": 1667216942963,
"link": null,
"locked": false,
"fontSize": 36,
"fontFamily": 3,
"text": "Estimation\ncapacity",
"baseline": 78,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "Estimation\ncapacity"
},
{
"id": "MLoNn8uvBd1hvXQiMOV9t",
"type": "arrow",
"x": 1754.7350663909342,
"y": 1716.9546493967666,
"width": 442.2921789323659,
"height": 139.71645836039474,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "round",
"seed": 104441607,
"version": 216,
"versionNonce": 2066305769,
"isDeleted": false,
"boundElements": null,
"updated": 1667216966952,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
40.2083799029424,
-134.86519244510302
],
[
406.55139679641707,
-136.80569881121988
],
[
442.2921789323659,
2.9107595491748555
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "KjFY0EsVB4gtcK0TyPHCT",
"focus": -0.08151849707325239,
"gap": 10.356755827497864
},
"endBinding": null,
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"type": "arrow",
"version": 411,
"versionNonce": 1284388617,
"isDeleted": false,
"id": "sEwrU6XV6gh159-hWjLUH",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 3.1285814082294436,
"x": 1766.6437336651911,
"y": 2076.4898590470993,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 442.0273496167363,
"height": 150.2058754407422,
"seed": 372313545,
"groupIds": [],
"strokeSharpness": "round",
"boundElements": [],
"updated": 1667216990531,
"link": null,
"locked": false,
"startBinding": {
"elementId": "ZnMeue0q7LLKTiWs1YIlQ",
"focus": -0.1611498222365172,
"gap": 11.379014759210804
},
"endBinding": {
"elementId": "KjFY0EsVB4gtcK0TyPHCT",
"focus": 0.005831788722677658,
"gap": 17.938912827562035
},
"lastCommittedPoint": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"points": [
[
0,
0
],
[
39.94355058731276,
-148.26536907462534
],
[
406.2865674807874,
-150.2058754407422
],
[
442.0273496167363,
-10.489417080347465
]
]
}
],
"appState": {
"gridSize": null,
"viewBackgroundColor": "#ffffff"
},
"files": {}
}


@ -0,0 +1,815 @@
{
"type": "excalidraw",
"version": 2,
"source": "https://excalidraw.com",
"elements": [
{
"id": "Pc5feaq_JnEdvwFbg2zUJ",
"type": "text",
"x": 1389.8411874421743,
"y": 617.3498328855867,
"width": 382,
"height": 86,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 352403785,
"version": 40,
"versionNonce": 439047561,
"isDeleted": false,
"boundElements": [
{
"id": "q2LQ1_8aE0Xd3iDkkgGhk",
"type": "arrow"
},
{
"id": "mQ4gpJdiY1Rz3DF4LAAMB",
"type": "arrow"
}
],
"updated": 1667216595390,
"link": null,
"locked": false,
"text": "Producing value\nthrough estimation",
"fontSize": 36,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 78,
"containerId": null,
"originalText": "Producing value\nthrough estimation"
},
{
"id": "q2LQ1_8aE0Xd3iDkkgGhk",
"type": "arrow",
"x": 1783.8411874421743,
"y": 658.6307474855544,
"width": 151,
"height": 147.88865624811058,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "round",
"seed": 361072617,
"version": 66,
"versionNonce": 1900487849,
"isDeleted": false,
"boundElements": null,
"updated": 1667216801151,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
151,
-147.88865624811058
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "XzAQTmjDXOWXvZnxZbbkQ",
"gap": 3,
"focus": 0.760368781285116
},
"endBinding": {
"elementId": "zqG3dshmWa6mCYUQCz1fE",
"gap": 5.000000000000114,
"focus": 0.6780509456760299
},
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"id": "Z0004-gPnOVQrkPks2pwC",
"type": "text",
"x": 1954.8411874421743,
"y": 491.3498328855867,
"width": 107,
"height": 43,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 520412489,
"version": 8,
"versionNonce": 708821191,
"isDeleted": false,
"boundElements": [
{
"id": "mQ4gpJdiY1Rz3DF4LAAMB",
"type": "arrow"
}
],
"updated": 1667216590598,
"link": null,
"locked": false,
"text": "Value",
"fontSize": 36,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 35,
"containerId": null,
"originalText": "Value"
},
{
"id": "mQ4gpJdiY1Rz3DF4LAAMB",
"type": "arrow",
"x": 1780.944767092466,
"y": 661.8498328855869,
"width": 235.18182064649818,
"height": 146.9999999999999,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "round",
"seed": 207575335,
"version": 378,
"versionNonce": 1645480743,
"isDeleted": false,
"boundElements": null,
"updated": 1667216784484,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
235.18182064649818,
146.9999999999999
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "XzAQTmjDXOWXvZnxZbbkQ",
"focus": -0.7013390579674785,
"gap": 1
},
"endBinding": {
"elementId": "zwoIstgdfgu1MdzN_RD26",
"focus": 0.38897657551470216,
"gap": 3.9999999999998863
},
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"id": "1cljcwLQack7UNQHhFO8b",
"type": "text",
"x": 1913.841187442174,
"y": 823.3498328855867,
"width": 233,
"height": 86,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1227036745,
"version": 94,
"versionNonce": 1121040583,
"isDeleted": false,
"boundElements": null,
"updated": 1667216699887,
"link": null,
"locked": false,
"text": "Estimation \ncapacity",
"fontSize": 36,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 78,
"containerId": null,
"originalText": "Estimation \ncapacity"
},
{
"id": "XzAQTmjDXOWXvZnxZbbkQ",
"type": "rectangle",
"x": 1362.8411874421743,
"y": 604.8498328855867,
"width": 418,
"height": 120,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1749063463,
"version": 51,
"versionNonce": 1326367369,
"isDeleted": false,
"boundElements": [
{
"id": "q2LQ1_8aE0Xd3iDkkgGhk",
"type": "arrow"
},
{
"id": "mQ4gpJdiY1Rz3DF4LAAMB",
"type": "arrow"
},
{
"id": "oDXmWG9mbfTEZiRh-D4tO",
"type": "arrow"
}
],
"updated": 1667216752128,
"link": null,
"locked": false
},
{
"id": "zqG3dshmWa6mCYUQCz1fE",
"type": "rectangle",
"x": 1939.8411874421743,
"y": 467.8498328855867,
"width": 142.00000000000023,
"height": 95,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 846291335,
"version": 80,
"versionNonce": 1824087943,
"isDeleted": false,
"boundElements": [
{
"id": "q2LQ1_8aE0Xd3iDkkgGhk",
"type": "arrow"
},
{
"id": "z6mLCcbd1CkCmAAuoKTku",
"type": "arrow"
}
],
"updated": 1667216663757,
"link": null,
"locked": false
},
{
"id": "zwoIstgdfgu1MdzN_RD26",
"type": "rectangle",
"x": 1891.841187442174,
"y": 812.8498328855867,
"width": 267.0000000000002,
"height": 112,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1474506247,
"version": 105,
"versionNonce": 1140472231,
"isDeleted": false,
"boundElements": [
{
"id": "mQ4gpJdiY1Rz3DF4LAAMB",
"type": "arrow"
},
{
"id": "GwlYsgDcNAOp9yIXsHVIv",
"type": "arrow"
},
{
"id": "oDXmWG9mbfTEZiRh-D4tO",
"type": "arrow"
}
],
"updated": 1667216747064,
"link": null,
"locked": false
},
{
"id": "z6mLCcbd1CkCmAAuoKTku",
"type": "arrow",
"x": 2084.6675088056236,
"y": 519.0686731850794,
"width": 131,
"height": 0.8159780627706823,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "round",
"seed": 94916809,
"version": 77,
"versionNonce": 1934157447,
"isDeleted": false,
"boundElements": null,
"updated": 1667225292980,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
131,
0.8159780627706823
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "zqG3dshmWa6mCYUQCz1fE",
"focus": 0.06797737074681244,
"gap": 2.826321363448983
},
"endBinding": {
"elementId": "hJvbFbdaz9bOkf3rYsz1Y",
"focus": 0.5178941227067932,
"gap": 15
},
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"id": "_e_Q9y32aOP4PvMBJFHig",
"type": "rectangle",
"x": 2211.6675088056236,
"y": 442.56852695751786,
"width": 705.0000000000001,
"height": 286.35195533987184,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1572044905,
"version": 148,
"versionNonce": 1608855657,
"isDeleted": false,
"boundElements": null,
"updated": 1667225299443,
"link": null,
"locked": false
},
{
"id": "hJvbFbdaz9bOkf3rYsz1Y",
"type": "text",
"x": 2230.6675088056236,
"y": 460.982121002834,
"width": 676,
"height": 258,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1285379433,
"version": 205,
"versionNonce": 2120417193,
"isDeleted": false,
"boundElements": [
{
"id": "GwlYsgDcNAOp9yIXsHVIv",
"type": "arrow"
},
{
"id": "z6mLCcbd1CkCmAAuoKTku",
"type": "arrow"
}
],
"updated": 1667225292979,
"link": null,
"locked": false,
"text": "Lessons learnt,\nspecific things to point to,\nvarious positive feedback loops,\ncourse correction opportunities,\npersonal motivation,\neasier to excite others",
"fontSize": 36,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 250,
"containerId": null,
"originalText": "Lessons learnt,\nspecific things to point to,\nvarious positive feedback loops,\ncourse correction opportunities,\npersonal motivation,\neasier to excite others"
},
{
"id": "GwlYsgDcNAOp9yIXsHVIv",
"type": "arrow",
"x": 2521.5563605127095,
"y": 740.9634294630695,
"width": 359.6903849292844,
"height": 126.04320297734205,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "round",
"seed": 589945927,
"version": 218,
"versionNonce": 1292692327,
"isDeleted": false,
"boundElements": null,
"updated": 1667225292980,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
-28.19038492928439,
112.04320297734205
],
[
-359.6903849292844,
126.04320297734205
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "hJvbFbdaz9bOkf3rYsz1Y",
"focus": 0.024628586787011717,
"gap": 21.981308460235596
},
"endBinding": {
"elementId": "zwoIstgdfgu1MdzN_RD26",
"focus": 0.06363853014251168,
"gap": 3.0247881412506104
},
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"type": "arrow",
"version": 473,
"versionNonce": 696857639,
"isDeleted": false,
"id": "oDXmWG9mbfTEZiRh-D4tO",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 0,
"x": 2020.364448940089,
"y": 929.7822314192581,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 503,
"height": 331.04892385152516,
"seed": 1651043431,
"groupIds": [],
"strokeSharpness": "round",
"boundElements": [],
"updated": 1667216790125,
"link": null,
"locked": false,
"startBinding": {
"elementId": "zwoIstgdfgu1MdzN_RD26",
"focus": -0.2265230629828577,
"gap": 4.932398533671403
},
"endBinding": {
"elementId": "XzAQTmjDXOWXvZnxZbbkQ",
"focus": 0.3100069262560157,
"gap": 10.932398533671403
},
"lastCommittedPoint": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"points": [
[
0,
0
],
[
-100.03770202400597,
137.04892385152516
],
[
-333.5,
132
],
[
-467.0511133121588,
-11.612255475320922
],
[
-503,
-194
]
]
},
{
"id": "KjFY0EsVB4gtcK0TyPHCT",
"type": "rectangle",
"x": 1595.390746034829,
"y": 1727.3114052242645,
"width": 287.4154563432544,
"height": 177.2147114240788,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 806107207,
"version": 88,
"versionNonce": 1350197799,
"isDeleted": false,
"boundElements": [
{
"id": "MLoNn8uvBd1hvXQiMOV9t",
"type": "arrow"
},
{
"id": "sEwrU6XV6gh159-hWjLUH",
"type": "arrow"
}
],
"updated": 1667216974949,
"link": null,
"locked": false
},
{
"id": "y3GLDl9E97u24BkxLZAjX",
"type": "text",
"x": 1619.2179341254619,
"y": 1775.3361724735037,
"width": 234,
"height": 86,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "sharp",
"seed": 1637206473,
"version": 24,
"versionNonce": 1942015399,
"isDeleted": false,
"boundElements": null,
"updated": 1667216922662,
"link": null,
"locked": false,
"text": "Estimation\nexperiments",
"fontSize": 36,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"baseline": 78,
"containerId": null,
"originalText": "Estimation\nexperiments"
},
{
"type": "rectangle",
"version": 134,
"versionNonce": 1906481481,
"isDeleted": false,
"id": "ZnMeue0q7LLKTiWs1YIlQ",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 0,
"x": 2063.7439119413248,
"y": 1717.6316100624451,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 287.4154563432544,
"height": 177.2147114240788,
"seed": 1079521065,
"groupIds": [],
"strokeSharpness": "sharp",
"boundElements": [
{
"id": "sEwrU6XV6gh159-hWjLUH",
"type": "arrow"
}
],
"updated": 1667216978964,
"link": null,
"locked": false
},
{
"type": "text",
"version": 101,
"versionNonce": 2066463719,
"isDeleted": false,
"id": "jnt4vZY0UXwHVBUivo-vX",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 0,
"x": 2087.571100031957,
"y": 1765.6563773116839,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 212,
"height": 86,
"seed": 870609895,
"groupIds": [],
"strokeSharpness": "sharp",
"boundElements": [],
"updated": 1667216942963,
"link": null,
"locked": false,
"fontSize": 36,
"fontFamily": 3,
"text": "Estimation\ncapacity",
"baseline": 78,
"textAlign": "left",
"verticalAlign": "top",
"containerId": null,
"originalText": "Estimation\ncapacity"
},
{
"id": "MLoNn8uvBd1hvXQiMOV9t",
"type": "arrow",
"x": 1754.7350663909342,
"y": 1716.9546493967666,
"width": 442.2921789323659,
"height": 139.71645836039474,
"angle": 0,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"strokeSharpness": "round",
"seed": 104441607,
"version": 216,
"versionNonce": 2066305769,
"isDeleted": false,
"boundElements": null,
"updated": 1667216966952,
"link": null,
"locked": false,
"points": [
[
0,
0
],
[
40.2083799029424,
-134.86519244510302
],
[
406.55139679641707,
-136.80569881121988
],
[
442.2921789323659,
2.9107595491748555
]
],
"lastCommittedPoint": null,
"startBinding": {
"elementId": "KjFY0EsVB4gtcK0TyPHCT",
"focus": -0.08151849707325239,
"gap": 10.356755827497864
},
"endBinding": null,
"startArrowhead": null,
"endArrowhead": "arrow"
},
{
"type": "arrow",
"version": 411,
"versionNonce": 1284388617,
"isDeleted": false,
"id": "sEwrU6XV6gh159-hWjLUH",
"fillStyle": "solid",
"strokeWidth": 4,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"angle": 3.1285814082294436,
"x": 1766.6437336651911,
"y": 2076.4898590470993,
"strokeColor": "#000000",
"backgroundColor": "transparent",
"width": 442.0273496167363,
"height": 150.2058754407422,
"seed": 372313545,
"groupIds": [],
"strokeSharpness": "round",
"boundElements": [],
"updated": 1667216990531,
"link": null,
"locked": false,
"startBinding": {
"elementId": "ZnMeue0q7LLKTiWs1YIlQ",
"focus": -0.1611498222365172,
"gap": 11.379014759210804
},
"endBinding": {
"elementId": "KjFY0EsVB4gtcK0TyPHCT",
"focus": 0.005831788722677658,
"gap": 17.938912827562035
},
"lastCommittedPoint": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"points": [
[
0,
0
],
[
39.94355058731276,
-148.26536907462534
],
[
406.2865674807874,
-150.2058754407422
],
[
442.0273496167363,
-10.489417080347465
]
]
}
],
"appState": {
"gridSize": null,
"viewBackgroundColor": "#ffffff"
},
"files": {}
}


@ -0,0 +1,30 @@
Brief thoughts on my personal research strategy
===============================================
Here are a few estimation related things that I can be doing:
1. In-house longtermist estimation: I estimate the value of speculative projects, organizations, etc.
2. Improving marginal efficiency: I advise groups making specific decisions on how to better maximize expected value.
3. *Building up estimation capacity*: I train more people, popularize or create tooling, create templates and acquire and communicate estimation know-how, and make it so that we can "estimate all the things".
Now, within the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/) I have been kind of trying to do something like this:
<img src="https://i.imgur.com/jvSUjWX.png" class="img-frontpage-center">
That is, I have in theory been trying to aim directly for the jugular of growing evaluation capacity. I believe that this is valuable because once that capacity exists, it can be applied to estimate the many things for which estimation is currently unfeasible. However, although I buy the argument in the abstract, I have been finding that approach a bit demotivating. Instead, I would like to be doing something like:
![](https://imgur.com/IpISs4h.png)
I have the strong intuition that producing value on the way to scaling generates feedback that is valuable and that can't be accessed just by aiming for scaling. As a result, I have been trying to do things which also prove valuable in the meantime. This might have been to the slight irritation of my boss, who believes more in going for the jugular directly. Either way, I also think I will experience some tranquility from being more intentional about this piece of strategy.
In emotional terms, things that aim solely for scaling—like predicting when [a long list of open mathematical problems will be solved](https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_mathematics)—feel "dry", "unengaging", "a drag", "disconnected", or other such emotional descriptors.
I can think of various things that could change my mind and my intuitions about this topic, such as:
- Past examples of people aiming for an abstract goal and successfully delivering
- An abstract reason or intuition for why going for scaling is worth skipping feedback loops
- etc.

@ -0,0 +1,101 @@
Metaforecast late 2022 update: GraphQL API, Charts, better infrastructure behind the scenes.
==============
**tl;dr**: [Metaforecast](https://metaforecast.org/) is a search engine and an associated repository for forecasting questions. Since our [last update](https://metaforecast.substack.com/p/metaforecast-update-better-search), we have added a GraphQL API, charts, and dashboards. We have also reworked our infrastructure to make it more stable. 
## New API
Our most significant new addition is our GraphQL API. It allows other people to build on top of our efforts. It can be accessed on [metaforecast.org/api/graphql](https://metaforecast.org/api/graphql), and looks similar to the EA Forum's [own GraphQL API](https://forum.effectivealtruism.org/graphiql).<p><img src="https://i.imgur.com/xHRBMNb.png" class='img-medium-center'></p>
To get the first 1000 questions, you could use a query like: 
```
{
  questions(first: 1000) {
    edges {
      node {
        id
        title
        url
        description
        options {
          name
          probability
        }
        qualityIndicators {
          numForecasts
          stars
        }
        timestamp
      }
    }
    pageInfo {
      endCursor
      startCursor
    }
  }
}
```
You can find more examples, like code to download all questions, in our [/scripts](https://github.com/quantified-uncertainty/metaforecast/tree/master/scripts) folder, to which we welcome contributions.
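For instance, here is a minimal sketch of how one might download questions from the API in Python (this assumes the third-party `requests` library, and that the endpoint accepts standard GraphQL-over-HTTP POST requests with a JSON `{"query": ...}` body; the field names are taken from the query above):
```
# A minimal sketch of querying the Metaforecast GraphQL API from Python.
import requests

QUERY = """
{
  questions(first: 50) {
    edges {
      node {
        id
        title
        url
      }
    }
    pageInfo {
      endCursor
    }
  }
}
"""

response = requests.post(
    "https://metaforecast.org/api/graphql",
    json={"query": QUERY},
    timeout=60,
)
response.raise_for_status()
questions = response.json()["data"]["questions"]

for edge in questions["edges"][:5]:
    node = edge["node"]
    print(node["id"], "|", node["title"])

# questions["pageInfo"]["endCursor"] can presumably be fed back as an
# `after` argument (Relay-style pagination) to page through all questions.
```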
## Charts and question pages
Charts display a question's history. They look as follows:
<img src="https://i.imgur.com/MWDA1j7.png" class='.img-medium-center'>
Charts can be accessed by clicking the expand button on the front page although they are fairly slow to load at the moment.
<img src="https://i.imgur.com/JJCrUjn.png" class='.img-medium-center'>
Clicking on the expand button brings the user to a question page, which contains a chart, the full question description, and a range of quality indicators:
<img src="https://i.imgur.com/tlsVqz1.png" class='.img-medium-center'>
We are also providing an endpoint at _metaforecast.org/questions/embed/\[id\]_ to allow other pages to embed our charts. For instance, to embed a question whose id is _betfair-1.178163916_, the endpoint would be [here](https://metaforecast.org/questions/embed/betfair-1.178163916). One would use it in the following code: 
```
<iframe
  src="https://metaforecast.org/questions/embed/betfair-1.178163916"
  height="200"
  width="300"
  title="Metaforecast question"
></iframe>
```
You can find the necessary question id by clicking a toggle under "advanced options" on the frontpage, or simply by noticing the id in our URL when expanding the question.
With time, we aim to improve these pages, make them more interactive, etc. We also think it would be a good idea to embed Metaforecast questions and dashboards into the EA Forum, and we are trying to collaborate with the [Manifold team](https://github.com/ForumMagnum/ForumMagnum/pull/6015), who have [done this before](https://github.com/ForumMagnum/ForumMagnum/pull/4907), to make that happen. 
## Dashboards
Dashboards are collections of questions. For instance, [here](https://metaforecast.org/dashboards/view/561472e0d2?numCols=2) is a dashboard on global markets and inflation, as embedded in [Global Guessing](https://globalguessing.com/russia-ukraine-forecasts/).
<img src="https://i.imgur.com/Joid0LI.png" class='.img-medium-center'>
Like questions, you can either [view dashboards directly](http://metaforecast.org/dashboards/view/561472e0d2?numCols=2), or [embed](http://metaforecast.org/dashboards/embed/561472e0d2?numCols=2) them. You can also create them, at [https://metaforecast.org/dashboards](https://metaforecast.org/dashboards).
## Better infrastructure
We have also revamped our infrastructure. We moved from JavaScript to TypeScript and from MongoDB to Postgres, and simplified our backend.
## We are open to collaborations
We are very much open to collaborations. If you want to integrate Metaforecast into your project and need help do not hesitate to reach out, e.g., on our [Github](https://github.com/quantified-uncertainty/metaforecast/issues). 
Metaforecast is also open source, and we welcome contributions. You can see some to-dos [here](https://github.com/quantified-uncertainty/metaforecast#to-do). Development is going more slowly now because it's mostly driven by Nuño working in his spare time, so contributions would be counterfactual.
## Acknowledgements
<p><img src="https://i.imgur.com/7yuRrge.png" class="img-frontpage-center"></p>
Metaforecast is hosted by the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/), and has received funding from [Astral Codex Ten](https://astralcodexten.substack.com/p/acx-grants-results). It has received significant contributions from [Vyacheslav Matyuhin](https://berekuk.ru/), who was responsible for the upgrade to Typescript and GraphQL. Thanks to Clay Graubard of [Global Guessing](https://globalguessing.com/) for their comments and dashboards, to Insight Prediction for help smoothing out their API, to Nathan Young for general comments, and to others for their comments and suggestions.
---

Binary file not shown.


@ -0,0 +1,232 @@
Tracking the money flows in forecasting
=======================================
This list of forecasting organizations includes:
- A brief description of each organization
- A monetary estimate of value. This can serve as a rough but hard-to-fake proxy of value. Sometimes this is a flow (e.g., budget per year), and sometimes this is an estimate of total value (e.g., valuation).
- A more subjective, rough, and verbal estimate of how much value the organization produces. <p><figure><img src="https://i.imgur.com/gqCTHMq.png" class="img-frontpage-center"><br><figcaption>DALLE: "crystal ball surrounded by money, photorealistic"</figcaption></figure></p>
This started as a breadth-first evaluation of the forecasting system, and to some extent it still is, i.e., it might be useful to get a rough sense of the ecosystem as a whole. After some discussion on whether very rough evaluations [are worth it](https://nunosempere.com/blog/2022/10/27/are-flimsy-evaluations-worth-it/) ([a](http://web.archive.org/web/20221031125900/https://nunosempere.com/blog/2022/10/27/are-flimsy-evaluations-worth-it/)), people who prefer their evaluations to have a high threshold of quality and polish might want to either ignore this post or just pay attention to the monetary estimates.
## Larger entities
### Flutter Entertainment
***What it is***: [Flutter Entertainment](https://en.wikipedia.org/wiki/Flutter_Entertainment) ([a](http://web.archive.org/web/20221013031739/https://en.wikipedia.org/wiki/Flutter_Entertainment)) is one of the largest gambling companies. It was created from the merger of [Betfair](https://en.wikipedia.org/wiki/Betfair) ([a](http://web.archive.org/web/20220913120222/https://en.wikipedia.org/wiki/Betfair)) and [Paddy Power](https://en.wikipedia.org/wiki/Paddy_Power) ([a](http://web.archive.org/web/20220809000831/https://en.wikipedia.org/wiki/Paddy_Power)). It has a large presence in the EU and UK.
***Monetary value (market cap)***: It has a ~$20B (~£18B) market cap.
***Social value***: Betfair and Paddy Power have some markets with public utility, e.g., around elections, but focus mostly on sports. However, its social value is otherwise low or negative---e.g., by enabling or creating gambling addicts.
***Note***: There are many other betting companies, e.g., Draft Kings, which I will not review here because they tend to focus mostly on sports, though I will note that they tend to have a high market capitalization.
### Kalshi
***What it is***: Kalshi is a US company which aims to provide [prediction markets](https://en.wikipedia.org/wiki/Prediction_market) ([a](http://web.archive.org/web/20221028173838/https://en.wikipedia.org/wiki/Prediction_market)) to US consumers while complying with and shaping US regulations on this topic.
***Monetary value (VC funding, valuation)***: [Kalshi](https://kalshi.com/) ([a](http://web.archive.org/web/20221101033945/https://kalshi.com/)) has received ~$30M in VC [funding](https://www.crunchbase.com/organization/kalshi/company_financials). My sense is that it's probably worth much more than that ($100M+) due to being the first prediction market regulated by the CFTC. Previous markets---like PredictIt or Iowa Electronic Markets---were only operating with a no-action letter.
***Social value***: There has been a bit of drama around Kalshi recently. In recent times, the CFTC withdrew its letter of no action from PredictIt, and fined and banned Polymarket from operating in the US. These are two of Kalshi's competitors, and my sense is that Kalshi probably contributed to these events. As such, I'd estimate Kalshi's impact so far to probably be negative, on account of these anti-competitive practices. In the future, however, if Kalshi contributes to the adoption of prediction markets in the US, and this leads to better decision-making, it could end up having a large positive value. It is also possible that their contracts have value as hedging instruments.
### Metaculus
***What it is***: [Metaculus](https://www.metaculus.com/) is a forecasting site and a public benefit corporation that spearheads forecasting initiatives.
***Monetary value (grant flow)***: Recently, Open Philanthropy granted them [$5.5M](https://www.openphilanthropy.org/grants/?q=Metaculus) ([a](https://web.archive.org/web/20221106215746/https://www.openphilanthropy.org/grants/?q=Metaculus)). With a 5% discount rate, and assuming that the $5.5M grant covers two years and will be renewed indefinitely, this makes[^1] Metaculus' discounted cash flow worth $55M. Metaculus previously received [$308k](https://funds.effectivealtruism.org/funds/payouts/may-august-2021-ea-infrastructure-fund-grants) ([a](http://web.archive.org/web/20220804053231/https://funds.effectivealtruism.org/funds/payouts/may-august-2021-ea-infrastructure-fund-grants)) from the EA Infrastructure Fund in 2021 and [$65k](https://funds.effectivealtruism.org/funds/payouts/april-2020-long-term-future-fund-grants-and-recommendations) ([a](http://web.archive.org/web/20220806101148/https://funds.effectivealtruism.org/funds/payouts/april-2020-long-term-future-fund-grants-and-recommendations)) from the EA Long-Term Future Fund in 2020.
[^1]: This form of discounted cash flow might be useful for some very rough comparison between for-profits and not-for-profits. But they really are not the same.
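For concreteness, here is that back-of-the-envelope calculation spelled out as a minimal sketch; the 5% discount rate and the two-year grant period are the assumptions stated above:
```
# Perpetuity value of a grant stream: $5.5M every 2 years, renewed forever,
# discounted at 5%/year, approximating the stream as a constant annual flow.
grant = 5.5e6          # dollars per grant
period_years = 2       # years each grant covers
discount_rate = 0.05   # per year

annual_flow = grant / period_years           # $2.75M/year
present_value = annual_flow / discount_rate  # perpetuity formula: PV = C / r
print(f"${present_value / 1e6:.0f}M")        # => $55M
```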
***Social value***: With its $5.5M Open Philanthropy grant, Metaculus has been growing its already [12-person-strong team](https://www.metaculus.com/about/) ([a](http://web.archive.org/web/20220925082358/https://www.metaculus.com/about/)), and is hiring for [a number of positions](https://apply.workable.com/metaculus/) ([a](http://web.archive.org/web/20221105123354/https://apply.workable.com/metaculus/)). So it's one of the EA forecasting organizations most able to scale and deploy human capital in terms of sheer numbers. The community which makes up its forecasting site is also reasonably strong.
### Polymarket
***What it is:*** Polymarket is a crypto prediction market. It combined various technologies: [magic.link](https://magic.link/), [Polygon](https://polygon.technology/) ([a](http://web.archive.org/web/20221106192328/https://polygon.technology/)) and the [Gnosis conditional contract](https://blog.gnosis.pm/omen-and-the-next-generation-of-prediction-markets-2e7a2dd604e) in order to build a usable crypto prediction market at a time when the previous iteration, Augur, had become unusable in practice due to high fees on the Ethereum blockchain.
***Monetary value (VC funding)***: It has received [$4M in VC funding](https://www.crunchbase.com/organization/polymarket) ([a](http://web.archive.org/web/20220428173056/https://www.crunchbase.com/organization/polymarket)), according to Crunchbase. It's unclear to me what its valuation is.
***Social value***: Polymarket has had a range of public interest markets, on politics, COVID infection numbers, or the Russian invasion of Ukraine.
### Cultivate Labs
***What it is***: Cultivate Labs is a company which offers infrastructure for forecasting tournaments. They currently host Good Judgment Open, INFER, the Cosmic Bazaar, and a Czech Prediction platform---and probably a few more platforms that I don't know of.
***Monetary value (valuation)***: I'm pretty uncertain about how much they are worth, and I'd give a guess of $5M to $80M.
***Social value***: I used to view them as a bit clunky and outdated, but I've come to respect that they are one of the few groups willing to offer a full package, with support for multiple years.
### Good Judgment
***What it is***: By "Good Judgment" I mean the set of organizations which administer the public forecasting tournament [GJOpen](https://www.gjopen.com/) ([a](http://web.archive.org/web/20221101071604/https://www.gjopen.com/)), and provide forecasts and training through the [Good Judgment brand](https://goodjudgment.io/) ([a](http://web.archive.org/web/20221021191540/https://goodjudgment.io/)).
***Monetary value (valuation)***: I'm fairly uncertain how much they would be worth. Maybe a very wide interval would be $3M to $50M.
***Social value***: Good Judgment has helped build awareness of superforecasting practices. Their forecasts provided to their subscribers have probably influenced the decisions of their readers and of those who commissioned them. Their training at government and corporations probably marginally increased the decision quality of clients, but it's unclear to me to what extent and for how long.
### Manifold Markets
***What it is:*** Manifold Markets is a play-money prediction market site with a really nice user experience.
***Monetary value (VC funding, valuation):*** Manifold Markets has received [$2M in seed funding](https://manifold.markets/ManifoldMarkets/will-manifold-raise-2m-in-seed-fund) ([a](http://web.archive.org/web/20221031122017/https://manifold.markets/ManifoldMarkets/will-manifold-raise-2m-in-seed-fund)), of which [$1M from the FTX Future Fund's regrantor program](https://ftxfuturefund.org/our-regrants/?_search=manifold%20markets) ([a](https://web.archive.org/web/20221106215835/https://ftxfuturefund.org/our-regrants/?_search=manifold%20markets)), at a $15M valuation. They have a superb software engineering team, and one of the fastest development speeds, but on the other hand their user acquisition has been a bit [slow](https://manifold.markets/stats) ([a](http://web.archive.org/web/20221031003326/https://manifold.markets/stats)). My sense is that their valuation would be somewhere between $10M and $50M.
***Social value:*** So far, by making an eminently usable site, they allow users to signal their beliefs and challenge others to do the same. They have also hosted a few tournaments, and at least the [Clearer Thinking tournament](https://manifold.markets/tournaments) ([a](https://web.archive.org/web/20221106215934/https://manifold.markets/tournaments)) was probably decision-relevant. But their value is linked to their future growth.
### Insight Prediction
***What it is***: [Insight Prediction](https://insightprediction.com/) ([a](http://web.archive.org/web/20221101064147/https://insightprediction.com/)) is a new prediction market which accepts crypto.
***Monetary value (valuation)***: This would depend on their usage numbers, which aren't public. I'd say it's worth at least $5M, but whether the upper bound is $50M or $500M probably depends on the team's execution over the coming year.
***Social value***: It is one of the few prediction sites which hosts somewhat liquid real-money markets on the Ukraine-Russia conflict.
### Tetlock's research group
***What it is***: Tetlock's research group is a loose group of academics working with [Phil Tetlock](https://scholar.google.com/citations?user=CJjf6H0AAAAJ&hl=en&oi=ao) ([a](https://web.archive.org/web/20221106220007/https://scholar.google.com/citations?user=CJjf6H0AAAAJ&hl=en&oi=ao))---author of the ***Superforecasting*** book.
***Monetary value (valuation, grant flow)***: They have received at least $2M from Open Philanthropy ([1](https://www.openphilanthropy.org/grants/university-of-pennsylvania-philip-tetlocks-making-conversations-smarter-faster-forecasting-project/) ([a](http://web.archive.org/web/20221017005338/https://www.openphilanthropy.org/grants/university-of-pennsylvania-philip-tetlocks-making-conversations-smarter-faster-forecasting-project/)), [2](https://www.openphilanthropy.org/grants/university-of-pennsylvania-philip-tetlock-on-forecasting/) ([a](http://web.archive.org/web/20221017005338/https://www.openphilanthropy.org/grants/university-of-pennsylvania-philip-tetlock-on-forecasting/))), and probably more than that through the University of Pennsylvania. It's unclear to me how much to value a research group, but I'd give a ballpark of $1M to $20M.
***Social value***: Tetlock's work on expert political judgment and on superforecasting was seminal work that kick-started the current forecasting community. In [recent times](https://scholar.google.com/citations?hl=en&user=CJjf6H0AAAAJ&view_op=list_works&sortby=pubdate), it seems likely that work stemming from their [work on estimating existential risk](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4001628) ([a](http://web.archive.org/web/20220812234335/https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4001628)) could be action-guiding for large organizations and governmental bodies, but most of the output from that research direction hasn't been published yet.
### PredictIt
***What it is***: PredictIt is a US-based prediction market. Historically, a community of sharp prediction market players coalesced around it. But in recent times, the CFTC has withdrawn its permission for it to operate in the US.
***Monetary value (valuation)***: Insight Prediction gives it only an [11% chance of surviving](https://insightprediction.com/m/18505/will-predictit-survive) ([a](http://web.archive.org/web/20221005064000/https://insightprediction.com/m/18505/will-predictit-survive)). My guess is that it was worth $5M to $50M when alive, implying a $0.5M to $5M valuation right now.
***Social value***: PredictIt was fairly instrumental in the creation of a passionate if sometimes a bit cut-throat prediction market community. Its future value depends on its survival.
### Epoch
***What it is***: [Epoch](https://epochai.org/) ([a](http://web.archive.org/web/20221013183722/https://epochai.org/)) is a research team investigating AI progress.
***Monetary value (grant flow):*** They have received around [$2M](https://www.openphilanthropy.org/grants/epoch-general-support/) ([a](http://web.archive.org/web/20220922190346/https://www.openphilanthropy.org/grants/epoch-general-support/)) from Open Philanthropy.
***Social value***: My sense is that their research has usefully informed some Open Philanthropy decisions. Open Philanthropy donated around $80M in 2021 to the "Potential Risks from Advanced AI" cause area. A 5% improvement in that decision-making would imply a value of $4M/year.
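As a quick sketch of that arithmetic (the 5% improvement figure is the assumption above):
```
# Back-of-the-envelope: value of a 5% improvement to ~$80M/year of grantmaking.
ai_grants_per_year = 80e6  # ~$80M donated to the cause area in 2021
improvement = 0.05         # assumed improvement in decision quality
print(f"${ai_grants_per_year * improvement / 1e6:.0f}M/year")  # => $4M/year
```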
### Swift Centre For Applied Forecasting
***What it is***: The [Swift Centre](https://www.swiftcentre.org/) ([a](http://web.archive.org/web/20221103141937/https://www.swiftcentre.org/)) publishes publicly available forecasts from accurate forecasters on topics of importance.
***Monetary value (grant flow):*** They have received [$2M from the FTX Future Fund regranting program](https://ftxfuturefund.org/our-regrants/?_search=Swift) ([a](https://web.archive.org/web/20221106220034/https://ftxfuturefund.org/our-regrants/?_search=Swift)).
***Social value***: Their [forecasts](https://www.swiftcentre.org/) ([a](http://web.archive.org/web/20221103141937/https://www.swiftcentre.org/)) lead to better decisions. My sense is that the organization is still getting up to speed, and that when running at full operational capacity, they'll be producing many more predictions.
### Quantified Uncertainty Research Institute (QURI)
***What it is***: QURI is a "new initiative to advance forecasting and epistemics to improve the long-term future of humanity. We write research and make software." In particular, we are developing [Squiggle](https://squiggle-language.com/), a web-capable programming language for better estimation, and doing research around scalable forecasting and estimation.
***Monetary value (grant flow)***: Overall we have received around $780k so far ([1](https://survivalandflourishing.fund/sff-2020-h1-recommendations) ([a](http://web.archive.org/web/20220703183731/https://survivalandflourishing.fund/sff-2020-h1-recommendations)), [2](https://survivalandflourishing.fund/sff-2022-h1-recommendations) ([a](http://web.archive.org/web/20220703184038/https://survivalandflourishing.fund/sff-2022-h1-recommendations)), [3](https://survivalandflourishing.fund/sff-2022-h1-recommendations) ([a](http://web.archive.org/web/20220703184038/https://survivalandflourishing.fund/sff-2022-h1-recommendations))), maybe a bit more.
***Social value***: Most of QURI's value is coming from the value of [Squiggle](https://www.squiggle-language.com/) ([a](http://web.archive.org/web/20221012160532/https://www.squiggle-language.com/)), which people are now starting to use for cost-effectiveness estimates, and from the value of [my own research](https://forum.effectivealtruism.org/users/nunosempere) ([a](http://web.archive.org/web/20221106162329/https://forum.effectivealtruism.org/users/nunosempere)), which might move the Effective Altruism movement more in the direction of using quantified estimates even for speculative stuff.
***Note***: This is the organization that I work for.
### Samotsvety Forecasting
***What it is***: [Samotsvety Forecasting](https://samotsvety.org/) ([a](http://web.archive.org/web/20221025004219/https://samotsvety.org/)) is a somewhat ad-hoc group of highly accurate forecasters of which I am a member.
***Monetary value***: My sense is that we could find some funder to sacrifice $200k to $5M if our group's existence depended on it. But I haven't actually tried that!
***Social value:*** We've published a variety of public forecasts, and we find ourselves more and more to be a part of Effective Altruism's general epistemic infrastructure.
### Czech Priorities
***What it is***: [Czech Priorities](https://www.ceskepriority.cz/) ([a](http://web.archive.org/web/20221102103806/https://www.ceskepriority.cz/)) is a Czech group working to advance forecasting within the Czech Republic.
***Monetary value:*** I couldn't find grantmaking information from a quick online search.
***Social value***: It's possible that they could find generalizable lessons about using forecasting to influence policy and improve decision-making that could then be extended to other countries. But this is not certain.
### Hedgehog Markets
***What it is***: [Hedgehog Markets](https://hedgehog.markets/) ([a](http://web.archive.org/web/20221010202946/https://hedgehog.markets/)) is a crypto prediction market.
***Monetary value***: They raised [$3.5M in funding](https://www.crunchbase.com/organization/hedgehog-markets) back in the crypto bull days.
***Social value***: They have been particularly innovative around no-loss markets, and peer-to-peer markets. But value depends on future usage volume.
### INFER
***What it is***: INFER is a forecasting platform which seeks to influence US government policy around transformative technologies.
***Monetary value***: INFER is hosted at [ARLIS](https://www.arlis.umd.edu/home) ([a](http://web.archive.org/web/20221013111200/https://www.arlis.umd.edu/home)), at the University of Maryland, which received an ~[$8M grant from Open Philanthropy specifically for forecasting](https://www.openphilanthropy.org/grants/applied-research-laboratory-for-intelligence-and-security-forecasting-platforms/) ([a](http://web.archive.org/web/20220808081108/https://www.openphilanthropy.org/grants/applied-research-laboratory-for-intelligence-and-security-forecasting-platforms/)) (over two years, over two platforms), which I find very confusing because I don't think they have deployed much of that capital yet.
***Social value***: INFER aims to influence decisions by the US government around technology policy. To the extent that this is the case, it could be valuable indeed, but this is hard to evaluate from the outside.
## Smaller forecasting entities
### Nathan Young
[Nathan Young](https://twitter.com/NathanpmYoung) is a smallish forecasting influencer followed by 6k people on Twitter. He recently got [$182k from the FTX Future Fund regranting program](https://ftxfuturefund.org/our-grants/?_search=Nathan%20Young) ([a](https://web.archive.org/web/20221106220135/https://ftxfuturefund.org/our-grants/?_search=Nathan%20Young)) to build a platform for forecasting question generation, an alpha version of which can be seen [here](https://doubtful.app/) ([a](http://web.archive.org/web/20221031061542/https://doubtful.app/)).
### Sage
Sage is an organization dedicated to forecasting tooling and research. It got $700k from the FTX Future Fund, and has built the tools on [quantifiedintuitions.org](https://www.quantifiedintuitions.org/pastcasting) ([a](http://web.archive.org/web/20221013134558/https://www.quantifiedintuitions.org/pastcasting)). But it hasn't deployed most of its capital yet.
### Global Guessing
[Global Guessing](https://globalguessing.com/) ([a](http://web.archive.org/web/20221031010912/https://globalguessing.com/)) is a forecasting site which made and gathered forecasts early on the Russian invasion of Ukraine, but which seems pretty dormant since then. They have received [~$330k from the FTX Future Fund.](https://ftxfuturefund.org/our-regrants/?_search=Global%20Guessing) ([a](https://web.archive.org/web/20221106220157/https://ftxfuturefund.org/our-regrants/?_search=Global%20Guessing))
### Social Science Prediction Platform
The [Social Science Prediction Platform](https://socialscienceprediction.org/) ([a](http://web.archive.org/web/20221013155854/https://socialscienceprediction.org/)) collects forecasts from academics on upcoming papers. This can have a range of public benefits, such as estimates of how surprising results in the social sciences are. Personally, I'd be hopeful about the ability to construct a more legible or objective Bayesian prior. From a quick search, I could find that they have received [$346k](https://survivalandflourishing.fund/sff-2022-h1-recommendations) ([a](http://web.archive.org/web/20220703184038/https://survivalandflourishing.fund/sff-2022-h1-recommendations)) from the Survival and Flourishing Fund.
### Augur
[Augur](https://augur.net/) was a pioneering prediction market way back when, but isn't up to much these days. Somehow the implied [market cap](https://coinmarketcap.com/currencies/augur/) ([a](http://web.archive.org/web/20220619100141/https://coinmarketcap.com/currencies/augur/)) of their coin is still $74M (?!?).
### Confido
[Confido](https://confido.tools/) ([a](http://web.archive.org/web/20220711161543/https://confido.tools/)) is a three-person strong Czech organization which develops forecasting tooling. They received $190k from the [FTX Future Fund regranting program](https://ftxfuturefund.org/our-regrants/?_search=Confido) ([a](https://web.archive.org/web/20221106220218/https://ftxfuturefund.org/our-regrants/?_search=Confido)).
### Hypermind
[Hypermind](https://www.hypermind.com/en/) ([a](http://web.archive.org/web/20221015015526/https://www.hypermind.com/en/)) is a French forecasting organization which hosts some low-stakes markets with play money but real-money rewards.
### Replication Markets
[Replication Markets](https://replicationmarkets.com/) ([a](http://web.archive.org/web/20220930085636/https://replicationmarkets.com/)) was a super-interesting experiment in having forecasters predict the outcome of replications. It has now ended, and it's uncertain whether the creators will follow up with something as interesting. They had ~$150k in forecaster rewards.
### PredictionBook
[PredictionBook](https://predictionbook.com/) ([a](http://web.archive.org/web/20221027154039/https://predictionbook.com/)) is an old, old site used by people to keep track of their probabilities, with no rewards involved. Gwern still [uses it](https://predictionbook.com/users/gwern) ([a](http://web.archive.org/web/20221019042049/https://predictionbook.com/users/gwern)).
### Gnosis
Gnosis is much like Augur, but less successful. They also have a [GnosisDAO](https://gnosis.io/gnosisdao/) ([a](http://web.archive.org/web/20220825162029/https://gnosis.io/gnosisdao/)). The implied market cap of their token is [$330M](https://gnosis.io/gnosisdao/) ([a](http://web.archive.org/web/20220825162029/https://gnosis.io/gnosisdao/)), based on the [Gnosis safe](https://gnosis-safe.io/) ([a](http://web.archive.org/web/20221101042815/https://gnosis-safe.io/)). But it doesn't do much forecasting these days.
## Thoughts on this ranking
### On the money flows
Money flows are interesting, but flawed as a measure of value. The spending rate or VC money raised does not correspond to the value a project creates. The high valuations for crypto projects could be mostly illusory, and for the case of Gnosis and Augur, they probably are.
Still, a monetary ranking has some advantages:
- There may be some ambiguity to it, but it allows one to differentiate between &lt;$500k projects, $500k to $5M projects, and $5M to $50M projects.
- It has an honesty to it, in that money is a somewhat hard-to-fake signal
- It is a fast proxy to estimate
### On the estimate of social value
The estimate of social value isn't a quantified estimate, but rather a few lines of observations, and maybe a path to impact. I think I could translate this into a numerical relative value scale, but it would take a fair amount of time.
### On the different pathways to impact
I'm seeing mainly three pathways of impact for forecasting projects:
1. Providing consumer value, e.g., betting markets or hedging utilities that people want to use
2. Providing informative markets about events of interest, which can apply pressure to decision-makers inside governments and corporations to make better decisions
3. Providing tooling that decision-makers can use and incorporate as part of their decision-making
I'm mostly bullish on the third option, but this deserves more elaboration.
### On how this estimate could be improved
This estimate could be improved by having numerical estimates of impact. This would allow for comparisons around efficiency &c. It could also be presented better, e.g., in a table, or more creatively, in a map.

@ -0,0 +1,270 @@
Tracking the money flows in forecasting
==============
This list of forecasting organizations includes:
* A brief description of each organization
* A monetary estimate of value. This can serve as a rough but hard-to-fake proxy of value. Sometimes this is a flow (e.g., budget per year), and sometimes this is an estimate of total value (e.g., valuation).
* A more subjective, rough, and verbal estimate of how much value the organization produces. <p><figure><img src="https://i.imgur.com/gqCTHMq.png" class="img-frontpage-center"><br><figcaption>DALLE: "crystal ball surrounded by money, photorealistic"</figcaption></figure></p>
This started as a breadth-first evaluation of the forecasting system, and to some extent it still is, i.e., it might be useful to get a rough sense of the ecosystem as a whole. After some discussion on whether very rough evaluations [are worth it](https://nunosempere.com/blog/2022/10/27/are-flimsy-evaluations-worth-it/) ([a](http://web.archive.org/web/20221031125900/https://nunosempere.com/blog/2022/10/27/are-flimsy-evaluations-worth-it/)), people who prefer their evaluations to have a high threshold of quality and polish might want to either ignore this post or just pay attention to the monetary estimates.
## Summary table
Note that the monetary value column has different types of estimates. 
| Name | Monetary value | Purpose |
|-------------------------------------------|-----------------------------------------|----------------------------------------------------|
| Flutter Entertainment | ~$20B (~£18B) market cap | Gambling |
| Kalshi | ~$30M in VC funding | Prediction market |
| Metaculus | ~$6M in grants | Forecasting site |
| Polymarket | ~$4M in VC funding | Prediction market |
| Cultivate Labs | $5M to $80M (estimated valuation) | Forecasting platform as a service |
| Good Judgment | $3M to $50M (estimated valuation) | Forecasting consulting |
| Manifold Markets | ~$2M in early stage funding | Play money prediction market |
| Insight Prediction | — | Prediction market |
| Tetlock’s research group                  | $1M to $20M (estimated grant flow)      | Research group                                     |
| PredictIt | $0.5M to $5M (estimated value) | Prediction market |
| Epoch | $2M in grants | Research group on AI progress |
| Swift Centre | $2M in grants | Forecasts as a public good |
| Quantified Uncertainty Research Institute | $780k in grants | Software and forecasting research |
| Samotsvety Forecasting | $200k to $5M (estimated valuation) | Forecasts as a public good, forecasting consulting |
| Czech Priorities | — | Institutional decision-making using forecasting |
| Hedgehog Markets | $3.5M in VC funding | Prediction market (crypto) |
| INFER | Uncertain. Estimated $2M/year in grants | Forecasting platform |
| Nathan Young | $180k in grants | Forecasting question creation |
| Sage | $700k in grants | Forecasting research |
| Global Guessing | $330k in grants | Forecasting journalism |
| Social Science Prediction Platform | At least $838k in grants | Forecasting in an academic context |
| Augur | $60M crypto market cap | Prediction market |
| Confido | $190k in grants | Forecasting tooling |
| Hypermind | — | Play money prediction market |
| Replication Markets | $150k in forecaster rewards | Forecasting experiment |
| PredictionBook | — | Prediction database |
| Gnosis | $230M crypto market cap | Smart contracts, including for prediction markets |
## Larger entities
### Flutter Entertainment
_**What it is**_: [Flutter Entertainment](https://en.wikipedia.org/wiki/Flutter_Entertainment) ([a](http://web.archive.org/web/20221013031739/https://en.wikipedia.org/wiki/Flutter_Entertainment)) is one of the largest gambling companies. It was created from the merger of [Betfair](https://en.wikipedia.org/wiki/Betfair) ([a](http://web.archive.org/web/20220913120222/https://en.wikipedia.org/wiki/Betfair)) and [Paddy Power](https://en.wikipedia.org/wiki/Paddy_Power) ([a](http://web.archive.org/web/20220809000831/https://en.wikipedia.org/wiki/Paddy_Power)). It has a large presence in the EU and UK.
_**Monetary value (market cap)**_: It has a ~$20B (~£18B) market cap.
_**Social value**_: Betfair and Paddy Power have some markets with public utility, e.g., around elections, but focus mostly on sports. However, its social value is otherwise low or negative—e.g., by enabling or creating gambling addicts.
_**Note**_: There are many other betting companies, e.g., Draft Kings, which I will not review here because they tend to focus mostly on sports, though I will note that they tend to have a high market capitalization.
### Kalshi
_**What it is**_: Kalshi is a US company which aims to provide [prediction markets](https://en.wikipedia.org/wiki/Prediction_market) ([a](http://web.archive.org/web/20221028173838/https://en.wikipedia.org/wiki/Prediction_market)) to US consumers while complying with and shaping US regulations on this topic.
_**Monetary value (VC funding, valuation)**_: [Kalshi](https://kalshi.com/) ([a](http://web.archive.org/web/20221101033945/https://kalshi.com/)) has received ~$30M in VC [funding](https://www.crunchbase.com/organization/kalshi/company_financials). My sense is that it’s probably worth much more than that ($100M+) due to being the first prediction market regulated by the CFTC. Previous markets—like PredictIt or Iowa Electronic Markets—were only operating with a no-action letter.
_**Social value**_: There has been a bit of drama around Kalshi recently. In recent times, the CFTC withdrew its letter of no action from PredictIt, and fined and banned Polymarket from operating in the US. These are two of Kalshi’s competitors, and my sense is that Kalshi probably contributed to these events. As such, I’d estimate Kalshi’s impact so far to probably be negative, on account of these anti-competitive practices. In the future, however, if Kalshi contributes to the adoption of prediction markets in the US, and this leads to better decision-making, it could end up having a large positive value. It is also possible that their contracts have value as hedging instruments.
### Metaculus
_**What it is**_: [Metaculus](https://www.metaculus.com/) is a forecasting site and a public benefit corporation that spearheads forecasting initiatives.
_**Monetary value (grant flow)**_: Recently, Open Philanthropy granted them [$5.5M](https://www.openphilanthropy.org/grants/?q=Metaculus) ([a](https://web.archive.org/web/20221106215746/https://www.openphilanthropy.org/grants/?q=Metaculus)). With a 5% discount rate, and assuming that the $5.5M grant covers two years and will be renewed indefinitely, this makes[^1] Metaculus' discounted cash flow worth $55M. Metaculus previously received [$308k](https://funds.effectivealtruism.org/funds/payouts/may-august-2021-ea-infrastructure-fund-grants) ([a](http://web.archive.org/web/20220804053231/https://funds.effectivealtruism.org/funds/payouts/may-august-2021-ea-infrastructure-fund-grants)) from the EA Infrastructure Fund in 2021 and [$65k](https://funds.effectivealtruism.org/funds/payouts/april-2020-long-term-future-fund-grants-and-recommendations) ([a](http://web.archive.org/web/20220806101148/https://funds.effectivealtruism.org/funds/payouts/april-2020-long-term-future-fund-grants-and-recommendations)) from the EA Long-Term Future Fund in 2020.
[^1]: This form of discounted cash flow might be useful for some very rough comparison between for-profits and not-for-profits. But they really are not the same.
_**Social value**_: With its $5.5M Open Philanthropy grant, Metaculus has been growing its already [12-person-strong team](https://www.metaculus.com/about/) ([a](http://web.archive.org/web/20220925082358/https://www.metaculus.com/about/)), and is hiring for [a number of positions](https://apply.workable.com/metaculus/) ([a](http://web.archive.org/web/20221105123354/https://apply.workable.com/metaculus/)). So it’s one of the EA forecasting organizations most able to scale and deploy human capital in terms of sheer numbers. The community which makes up its forecasting site is also reasonably strong.
### Polymarket
_**What it is:**_ Polymarket is a crypto prediction market. It combined various technologies: [magic.link](https://magic.link/), [Polygon](https://polygon.technology/) ([a](http://web.archive.org/web/20221106192328/https://polygon.technology/)) and the [Gnosis conditional contract](https://blog.gnosis.pm/omen-and-the-next-generation-of-prediction-markets-2e7a2dd604e) in order to build a usable crypto prediction market at a time when the previous iteration, Augur, had become unusable in practice due to high fees on the Ethereum blockchain.
_**Monetary value (VC funding)**_: It has received [$4M in VC funding](https://www.crunchbase.com/organization/polymarket) ([a](http://web.archive.org/web/20220428173056/https://www.crunchbase.com/organization/polymarket)), according to Crunchbase. It’s unclear to me what its valuation is.
_**Social value**_: Polymarket has had a range of public interest markets, on politics, COVID infection numbers, or the Russian invasion of Ukraine.
### Cultivate Labs
_**What it is**_: Cultivate Labs is a company which offers infrastructure for forecasting tournaments. They currently host Good Judgment Open, INFER, the Cosmic Bazaar, and a Czech prediction platform—and probably a few more platforms that I don’t know of.
_**Monetary value (valuation)**_: I’m pretty uncertain about how much they are worth, and I’d give a guess of $5M to $80M.
_**Social value**_: I used to view them as a bit clunky and outdated, but I’ve come to respect that they are one of the few groups willing to offer a full package, with support for multiple years.
### Good Judgment
_**What it is**_: By “Good Judgment” I mean the set of organizations which administer the public forecasting tournament [GJOpen](https://www.gjopen.com/) ([a](http://web.archive.org/web/20221101071604/https://www.gjopen.com/)), and provide forecasts and training through the [Good Judgment brand](https://goodjudgment.io/) ([a](http://web.archive.org/web/20221021191540/https://goodjudgment.io/)).
_**Monetary value (valuation)**_: I’m fairly uncertain how much they would be worth. Maybe a very wide interval would be $3M to $50M.
_**Social value**_: Good Judgment has helped build awareness of superforecasting practices. Their forecasts provided to their subscribers have probably influenced the decisions of their readers and of those who commissioned them. Their training at government and corporations probably marginally increased the decision quality of clients, but it’s unclear to me to what extent and for how long.
### Manifold Markets
_**What it is:**_ Manifold Markets is a play-money prediction market site with a really nice user experience.
_**Monetary value (VC funding, valuation):**_ Manifold Markets has received [$2M in seed funding](https://manifold.markets/ManifoldMarkets/will-manifold-raise-2m-in-seed-fund) ([a](http://web.archive.org/web/20221031122017/https://manifold.markets/ManifoldMarkets/will-manifold-raise-2m-in-seed-fund)), of which [$1M from the FTX Future Fund’s regrantor program](https://ftxfuturefund.org/our-regrants/?_search=manifold%20markets) ([a](https://web.archive.org/web/20221106215835/https://ftxfuturefund.org/our-regrants/?_search=manifold%20markets)), at a $15M valuation. They have a superb software engineering team, and one of the fastest development speeds, but on the other hand their user acquisition has been a bit [slow](https://manifold.markets/stats) ([a](http://web.archive.org/web/20221031003326/https://manifold.markets/stats)). My sense is that their valuation would be somewhere between $10M and $50M.
_**Social value:**_ So far, by making an eminently usable site, they allow users to signal their beliefs and challenge others to do the same. They have also hosted a few tournaments, and at least the [Clearer Thinking tournament](https://manifold.markets/tournaments) ([a](https://web.archive.org/web/20221106215934/https://manifold.markets/tournaments)) was probably decision-relevant. But their value is linked to their future growth.
### Insight Prediction
_**What it is**_: [Insight Prediction](https://insightprediction.com/) ([a](http://web.archive.org/web/20221101064147/https://insightprediction.com/)) is a new prediction market which accepts crypto.
_**Monetary value (valuation)**_: This would depend on their usage numbers, which aren’t public. I’d say it’s worth at least $5M, but whether the upper bound is $50M or $500M probably depends on the team’s execution over the coming year.
_**Social value**_: It is one of the few prediction sites which hosts somewhat liquid real-money markets on the Ukraine-Russia conflict.
### Tetlock’s research group
_**What it is**_: Tetlock’s research group is a loose group of academics working with [Phil Tetlock](https://scholar.google.com/citations?user=CJjf6H0AAAAJ&hl=en&oi=ao) ([a](https://web.archive.org/web/20221106220007/https://scholar.google.com/citations?user=CJjf6H0AAAAJ&hl=en&oi=ao))—author of the _**Superforecasting**_ book.
_**Monetary value (valuation, grant flow)**_: They have received at least $2M from Open Philanthropy ([1](https://www.openphilanthropy.org/grants/university-of-pennsylvania-philip-tetlocks-making-conversations-smarter-faster-forecasting-project/) ([a](http://web.archive.org/web/20221017005338/https://www.openphilanthropy.org/grants/university-of-pennsylvania-philip-tetlocks-making-conversations-smarter-faster-forecasting-project/)), [2](https://www.openphilanthropy.org/grants/university-of-pennsylvania-philip-tetlock-on-forecasting/) ([a](http://web.archive.org/web/20221017005338/https://www.openphilanthropy.org/grants/university-of-pennsylvania-philip-tetlock-on-forecasting/))), and probably more than that through the University of Pennsylvania. It’s unclear to me how much to value a research group, but I’d give a ballpark of $1M to $20M.
_**Social value**_: Tetlock’s work on expert political judgment and on superforecasting was seminal work that kick-started the current forecasting community. In [recent times](https://scholar.google.com/citations?hl=en&user=CJjf6H0AAAAJ&view_op=list_works&sortby=pubdate), it seems likely that work stemming from their [work on estimating existential risk](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4001628) ([a](http://web.archive.org/web/20220812234335/https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4001628)) could be action-guiding for large organizations and governmental bodies, but most of the output from that research direction hasn’t been published yet.
### PredictIt
_**What it is**_: PredictIt is a US-based prediction market. Historically, a community of sharp prediction market players coalesced around it. But in recent times, the CFTC has withdrawn its permission for it to operate in the US.
_**Monetary value (valuation)**_: Insight Prediction gives it only an [11% chance of surviving](https://insightprediction.com/m/18505/will-predictit-survive) ([a](http://web.archive.org/web/20221005064000/https://insightprediction.com/m/18505/will-predictit-survive)). My guess is that it was worth $5M to $50M when alive, implying a $0.5M to $5M valuation right now.
_**Social value**_: PredictIt was fairly instrumental in the creation of a passionate if sometimes a bit cut-throat prediction market community. Its future value depends on its survival.
### Epoch
_**What it is**_: [Epoch](https://epochai.org/) ([a](http://web.archive.org/web/20221013183722/https://epochai.org/)) is a research team investigating AI progress.
_**Monetary value (grant flow):**_ They have received around [$2M](https://www.openphilanthropy.org/grants/epoch-general-support/) ([a](http://web.archive.org/web/20220922190346/https://www.openphilanthropy.org/grants/epoch-general-support/)) from Open Philanthropy.
_**Social value**_: My sense is that their research has usefully informed some Open Philanthropy decisions. Open Philanthropy donated around $80M in 2021 to the “Potential Risks from Advanced AI” cause area. A 5% improvement in that decision-making would imply a value of $4M/year.
### Swift Centre For Applied Forecasting
_**What it is**_: The [Swift Centre](https://www.swiftcentre.org/) ([a](http://web.archive.org/web/20221103141937/https://www.swiftcentre.org/)) publishes publicly available forecasts from accurate forecasters on topics of importance.
_**Monetary value (grant flow):**_ They have received [$2M from the FTX Future Fund regranting program](https://ftxfuturefund.org/our-regrants/?_search=Swift) ([a](https://web.archive.org/web/20221106220034/https://ftxfuturefund.org/our-regrants/?_search=Swift)).
_**Social value**_: Their [forecasts](https://www.swiftcentre.org/) ([a](http://web.archive.org/web/20221103141937/https://www.swiftcentre.org/)) lead to better decisions. My sense is that the organization is still getting up to speed, and that when running at full operational capacity, they’ll be producing many more predictions.
### Quantified Uncertainty Research Institute (QURI)
_**What it is**_: QURI is a “new initiative to advance forecasting and epistemics to improve the long-term future of humanity. We write research and make software.” In particular, we are developing [Squiggle](https://squiggle-language.com/), a web-capable programming language for better estimation, and doing research around scalable forecasting and estimation.
_**Monetary value (grant flow)**_: Overall we have received around $780k so far ([1](https://survivalandflourishing.fund/sff-2020-h1-recommendations) ([a](http://web.archive.org/web/20220703183731/https://survivalandflourishing.fund/sff-2020-h1-recommendations)), [2](https://survivalandflourishing.fund/sff-2022-h1-recommendations) ([a](http://web.archive.org/web/20220703184038/https://survivalandflourishing.fund/sff-2022-h1-recommendations)), [3](https://survivalandflourishing.fund/sff-2022-h1-recommendations) ([a](http://web.archive.org/web/20220703184038/https://survivalandflourishing.fund/sff-2022-h1-recommendations))), maybe a bit more.
_**Social value**_: Most of QURI’s value is coming from the value of [Squiggle](https://www.squiggle-language.com/) ([a](http://web.archive.org/web/20221012160532/https://www.squiggle-language.com/)), which people are now starting to use for cost-effectiveness estimates, and from the value of [my own research](https://forum.effectivealtruism.org/users/nunosempere) ([a](http://web.archive.org/web/20221106162329/https://forum.effectivealtruism.org/users/nunosempere)), which might move the Effective Altruism movement more in the direction of using quantified estimates even for speculative stuff.
_**Note**_: This is the organization that I work for.
### Samotsvety Forecasting
_**What it is**_: [Samotsvety Forecasting](https://samotsvety.org/) ([a](http://web.archive.org/web/20221025004219/https://samotsvety.org/)) is a somewhat ad-hoc group of highly accurate forecasters of which I am a member.
_**Monetary value**_: My sense is that we could find some funder to sacrifice $200k to $5M if our group’s existence depended on it. But I haven’t actually tried that!
_**Social value:**_ We’ve published a variety of public forecasts, and we find ourselves more and more to be a part of Effective Altruism’s general epistemic infrastructure.
### Czech Priorities
_**What it is**_: [Czech Priorities](https://www.ceskepriority.cz/) ([a](http://web.archive.org/web/20221102103806/https://www.ceskepriority.cz/)) is a Czech group working to advance forecasting within the Czech Republic.
_**Monetary value:**_ I couldn’t find grantmaking information from a quick online search.
_**Social value**_: It’s possible that they could find generalizable lessons about using forecasting to influence policy and improve decision-making that could then be extended to other countries. But this is not certain.
### Hedgehog Markets
_**What it is**_: [Hedgehog Markets](https://hedgehog.markets/) ([a](http://web.archive.org/web/20221010202946/https://hedgehog.markets/)) is a crypto prediction market.
_**Monetary value**_: They raised [$3.5M in funding](https://www.crunchbase.com/organization/hedgehog-markets) back in the crypto bull days.
_**Social value**_: They have been particularly innovative around no-loss markets, and peer-to-peer markets. But value depends on future usage volume.
### INFER
_**What it is**_: INFER is a forecasting platform which seeks to influence US government policy around transformative technologies.
_**Monetary value**_: INFER is hosted at [ARLIS](https://www.arlis.umd.edu/home) ([a](http://web.archive.org/web/20221013111200/https://www.arlis.umd.edu/home)), at the University of Maryland, which received an ~[$8M grant from Open Philanthropy specifically for forecasting](https://www.openphilanthropy.org/grants/applied-research-laboratory-for-intelligence-and-security-forecasting-platforms/) ([a](http://web.archive.org/web/20220808081108/https://www.openphilanthropy.org/grants/applied-research-laboratory-for-intelligence-and-security-forecasting-platforms/)) (over two years, over two platforms), which I find very confusing because I don’t think they have deployed much of that capital yet.
_**Social value**_: INFER aims to influence decisions by the US government around technology policy. To the extent that this is the case, it could be valuable indeed, but this is hard to evaluate from the outside.
## Smaller forecasting entities
### Nathan Young
[Nathan Young](https://twitter.com/NathanpmYoung) is a smallish forecasting influencer followed by 6k people on Twitter. He recently got [$182k from the FTX Future Fund regranting program](https://ftxfuturefund.org/our-grants/?_search=Nathan%20Young) ([a](https://web.archive.org/web/20221106220135/https://ftxfuturefund.org/our-grants/?_search=Nathan%20Young)) to build a platform for forecasting question generation, an alpha version of which can be seen [here](https://doubtful.app/) ([a](http://web.archive.org/web/20221031061542/https://doubtful.app/)).
### Sage
Sage is an organization dedicated to forecasting tooling and research. It got $700k from the FTX Future Fund, and has built the tools on [quantifiedintuitions.org](https://www.quantifiedintuitions.org/pastcasting) ([a](http://web.archive.org/web/20221013134558/https://www.quantifiedintuitions.org/pastcasting)). But it hasn’t deployed most of its capital yet.
### Global Guessing
[Global Guessing](https://globalguessing.com/) ([a](http://web.archive.org/web/20221031010912/https://globalguessing.com/)) is a forecasting site which made and gathered forecasts early on the Russian invasion of Ukraine, but which seems pretty dormant since then. They have received [~$330k from the FTX Future Fund.](https://ftxfuturefund.org/our-regrants/?_search=Global%20Guessing) ([a](https://web.archive.org/web/20221106220157/https://ftxfuturefund.org/our-regrants/?_search=Global%20Guessing))
### Social Science Prediction Platform
The [Social Science Prediction Platform](https://socialscienceprediction.org/) ([a](http://web.archive.org/web/20221013155854/https://socialscienceprediction.org/)) collects forecasts from academics on upcoming papers. This can have a range of public benefits, such as estimates of how surprising results in the social sciences are. Personally, I’d be hopeful about the ability to construct a more legible or objective Bayesian prior. From a quick search, I could find that they have received [$346k](https://survivalandflourishing.fund/sff-2022-h1-recommendations) ([a](http://web.archive.org/web/20220703184038/https://survivalandflourishing.fund/sff-2022-h1-recommendations)) from the Survival and Flourishing Fund, and an additional $492k from the FTX Future Fund's [regranting program](https://ftxfuturefund.org/our-regrants/?_area_of_interest=epistemic-institutions) ([a](https://web.archive.org/web/20221109161427/https://ftxfuturefund.org/our-regrants/?_area_of_interest=epistemic-institutions)).
### Augur
[Augur](https://augur.net/) was a pioneering prediction market way back when, but isn't up to much these days. Somehow the implied [market cap](https://coinmarketcap.com/currencies/augur/) ([a](http://web.archive.org/web/20220619100141/https://coinmarketcap.com/currencies/augur/)) of their coin is still ~$60M (?!?).
### Confido
[Confido](https://confido.tools/) ([a](http://web.archive.org/web/20220711161543/https://confido.tools/)) is a three-person strong Czech organization which develops forecasting tooling. They received $190k from the [FTX Future Fund regranting program](https://ftxfuturefund.org/our-regrants/?_search=Confido) ([a](https://web.archive.org/web/20221106220218/https://ftxfuturefund.org/our-regrants/?_search=Confido)).
### Hypermind
[Hypermind](https://www.hypermind.com/en/) ([a](http://web.archive.org/web/20221015015526/https://www.hypermind.com/en/)) is a French forecasting organization which hosts some low-stakes markets with play money but real-money rewards.
### Replication Markets
[Replication Markets](https://replicationmarkets.com/) ([a](http://web.archive.org/web/20220930085636/https://replicationmarkets.com/)) was a super-interesting experiment in having forecasters predict the outcome of replications. It has now ended, and it's uncertain whether the creators will follow up with something as interesting. They had ~$150k in forecaster rewards.
### PredictionBook
[PredictionBook](https://predictionbook.com/) ([a](http://web.archive.org/web/20221027154039/https://predictionbook.com/)) is an old, old site used by people to keep track of their probabilities, with no rewards involved. Gwern still [uses it](https://predictionbook.com/users/gwern) ([a](http://web.archive.org/web/20221019042049/https://predictionbook.com/users/gwern)).
### Gnosis
Gnosis is much like Augur, but less successful. They also have a [GnosisDAO](https://gnosis.io/gnosisdao/) ([a](http://web.archive.org/web/20220825162029/https://gnosis.io/gnosisdao/)). The implied market cap of their token is [$230M](https://gnosis.io/gnosisdao/) ([a](http://web.archive.org/web/20220825162029/https://gnosis.io/gnosisdao/)), based on the [Gnosis safe](https://gnosis-safe.io/) ([a](http://web.archive.org/web/20221101042815/https://gnosis-safe.io/)). But it doesn't do much forecasting these days.
## Thoughts on this ranking
### On the money flows
Money flows are interesting, but flawed as a measure of value. The spending rate or VC money raised does not correspond to the value a project creates. The high valuations for crypto projects could be mostly illusory, and in the case of Gnosis and Augur, they probably are.
Still, a monetary ranking has some advantages:
* There may be some ambiguity to it, but it allows one to differentiate between <$500k projects, $500k to $5M projects, and $5M to $50M projects.
* It has an honesty to it, in that money is a somewhat hard-to-fake signal.
* It is a fast proxy to estimate.
### On the estimate of social value
The estimate of social value isn't a quantified estimate, but rather a few lines of observations, and maybe a path to impact. I think I could translate this into a numerical relative value scale, but it would take a fair amount of time.
### On the different pathways to impact
I'm seeing mainly three pathways to impact for forecasting projects:
1. Providing consumer value, e.g., betting markets or hedging utilities that people want to use
2. Providing informative markets about events of interest, which can apply pressure to decision-makers inside governments and corporations to make better decisions
3. Providing tooling that decision-makers can incorporate into their decision-making processes
I'm mostly bullish on the third option, but this deserves more elaboration.
### On how this estimate could be improved
This estimate could be improved by having numerical estimates of impact. This would allow for comparisons around efficiency &c. It could also be presented better, e.g., in a table, or more creatively, in a map.
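As a sketch of what such a numerical comparison could look like, in R and with entirely made-up names and numbers:

```
# Toy version of a numerical ranking: all names and figures below are invented.
entities <- data.frame(
  name    = c("Org A", "Org B", "Org C"),
  funding = c(5e6, 7e5, 3.3e5), # $ received
  impact  = c(30, 10, 5)        # relative value, e.g., the median of elicited estimates
)
entities$impact_per_M <- entities$impact / (entities$funding / 1e6)
entities[order(-entities$impact_per_M), ] # rank by crude efficiency
```

Dividing a relative impact score by money received gives the kind of crude efficiency comparison gestured at above.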
### Significance given FTX's current troubles
Given FTX's current troubles, it's very possible that there will be less money floating around for altruistic forecasting projects. At the same time, some of the projects FTX has donated to may prove successful, and other funders may want to step in.
<p><section id="isso-thread">
<noscript>Javascript needs to be activated to view comments.</noscript>
</section></p>

@ -0,0 +1,98 @@
Forecasting Newsletter for October 2022
==============
### Highlights
* Nuclear probability estimates spiked and spooked Elon Musk.
* [Council on Strategic Risks](https://councilonstrategicrisks.org/) hiring for a [full-time Strategic Foresight Senior Fellow](https://councilonstrategicrisks.org/2022/10/11/were-hiring-strategic-foresight-fellow/) @ $78k to $114k
* [Markov Chain Monte Carlo Without all the Bullshit](https://jeremykun.com/2015/04/06/markov-chain-monte-carlo-without-all-the-bullshit/): Old blog post delivers on its title.
### Index
* Prediction Markets, Forecasting Platforms &co
* Kalshi
* Manifold
* Metaculus
* Odds and Ends
* Opportunities
* Research
* Shortform
* Longform
You can sign up for this newsletter on [substack](https://forecasting.substack.com), or browse past newsletters [here](https://forecasting.substack.com/). If you have a content suggestion or want to reach out, you can leave a comment or find me on [Twitter](https://twitter.com/NunoSempere).
### Prediction Markets and Forecasting Platforms
#### Kalshi
The Bloomberg terminal now incorporates [Kalshi markets](https://nitter.it/mansourtarek_/status/1577323820607758337) ([a](https://web.archive.org/web/20221115163815/https://nitter.it/mansourtarek_/status/1577323820607758337)).
Kalshi hosted a [competition to predict congressional races](https://kalshi.com/efc) ([a](https://web.archive.org/web/20221115163857/https://kalshi.com/efc)). If someone predicts all races correctly, they get $100k, otherwise the most accurate person will receive $25k. To be clear, this is a marketing gimmick, and participants make Yes/No rather than probabilistic predictions. But I thought I'd report on it given the high amount.
#### Metaculus
As Metaculus continues to build capacity, they have started to launch several initiatives, namely [Forecasting _Our World In Data_](https://www.metaculus.com/tournament/forecasting-Our-World-in-Data/) ([a](http://web.archive.org/web/20221113203511/https://www.metaculus.com/tournament/forecasting-Our-World-in-Data/)), an [AI forecasting team](https://ea.greaterwrong.com/posts/9dqyakpjfhuo2bmjn/metaculus-is-building-a-team-dedicated-to-ai-forecasting) ([a](http://web.archive.org/web/20221018164037/https://ea.greaterwrong.com/posts/9dqyakpjfhuo2bmjn/metaculus-is-building-a-team-dedicated-to-ai-forecasting)), a ["Red Lines in Ukraine"](https://www.metaculus.com/project/red-lines/) ([a](http://web.archive.org/web/20221022082724/https://www.metaculus.com/project/red-lines/)) project, and a ["FluSight Challenge 2022/23"](https://www.metaculus.com/tournament/flusight-challenge22-23/) ([a](http://web.archive.org/web/20221114021149/https://www.metaculus.com/tournament/flusight-challenge22-23/)). They are also [hiring](https://www.metaculus.com/tournament/flusight-challenge22-23/) ([a](http://web.archive.org/web/20221114021149/https://www.metaculus.com/tournament/flusight-challenge22-23/)).
Metaculus [erroneously resolved](https://nitter.it/daniel_eth/status/1576842503210221568) ([a](https://web.archive.org/web/20221115163921/https://nitter.it/daniel_eth/status/1576842503210221568)) a question on whether there would be a nuclear detonation in Ukraine by 2023.
#### Manifold
An edition of the [Manifold Markets newsletter](https://news.manifold.markets/p/above-the-fold-visualising-market) ([a](https://web.archive.org/web/20221115163946/https://news.manifold.markets/p/above-the-fold-visualising-market)) includes this neat visualization of a group of markets through time:
[<img src="https://i.imgur.com/36ev880.gif" class='.img-medium-center'>](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6a94b58-5e91-4645-ac9d-51465d75cd84_1440x450.gif)
Manifold's [newsletter](https://news.manifold.markets/p/above-the-fold-visualising-market) ([a](https://web.archive.org/web/20221115163946/https://news.manifold.markets/p/above-the-fold-visualising-market)) also has further updates, including on their bot for Twitch. They continue to have a high development speed.
#### Odds and ends
The US midterm elections were an eagerly awaited event in the prediction market world. Participating so as to make a profit requires a level of commitment, focus and sheer fucking will that I recognize I don't have. For coverage, interested readers might want to look to [StarSpangledGamblers](https://starspangledgamblers.com/) ([a](http://web.archive.org/web/20221109002400/https://starspangledgamblers.com/)), or check on the Twitters of various politics bettors, such as [Domah](https://twitter.com/Domahhhh) ([a](http://web.archive.org/web/20220816131937/https://twitter.com/Domahhhh)), [Peter Wildeford](https://twitter.com/peterwildeford) ([a](http://web.archive.org/web/20221109171713/https://twitter.com/peterwildeford)) or [iabvek](https://twitter.com/iabvek) ([a](http://web.archive.org/web/20210629221133/https://twitter.com/iabvek)).
My forecasting group, Samotsvety, posted an [estimate of the likelihood](https://samotsvety.org/blog/2022/10/03/samotsvety-nuclear-risk-update-october-2022/) that Russia would use a nuclear weapon, including a [calculator](https://squiggle-tweaker.vercel.app/?code=eNqVU21P20AM%2FitWPzVVaVOgvEl82AYa1SpAKh2aliGZxElPXO6yuwsQAf99zqWl0A4QXy53ju3n8WP7oWVn%2Bm5S5jmaqnXgTEldbzpOhNNmYRFKOIFy8rcUWSZp4oxQWeug1e%2FDCcmCjI0UZpmhDB210ZgADqE9Ftb1DCVl7G1dGHTh4RHjuAu3KB%2BBbx2%2BPAXQh3aO901gB3Kh%2FDUIrtqD%2FjCIVKQY6bSMJaGB8sagUBQpU1qBU0t2%2FueSsNDKjtS08WAOS1K%2Fw97mbhfCXrjtz01%2FhgP%2F2ffnnj93%2FwQMFs9QZQQ5NdDaEbgZOj4IuF6hE9ApOJETCAuSUgelsgXFIhWUrDA%2B%2FXJxFimyMUp0Qquz0lmR0Jzmd3FLC8qXyMXcULJKfTB8wTMcbLGW4VV7Yxh4gy9iy1%2BHq%2ByXqBe65jFVsVbcXDagZJgPRexECuAz3P9TPDgNav4eM7xWffae8RA5zd7XIpv%2BmE6%2BCVf5DD7r8RuQq9L4djb67O%2FvNToNX7bYf9Z0eYG5Ksn7knWgFuRTlOeKjPICYwdCgdTWwUyX9d4YytmHpRiLlI7veYQcqrgaqV%2BslmU622Et304IrwtIsLKN0yFs7QwjVRjNq%2BaYyEmdeaSOsOLwnTp6sLcSjdeSLnRNuKCvlGpT68oyvfbilIU2vqD0nHQhaaSa%2Fl3O9JFoYnZ5yDxtSsZclwfnH28K7AeqPYANWCcRzOX9ELhx%2B0A7j7TUaZl7Tafa7nt0bvQ106og07zirBxKq3kieNN53kAkQttKxYa7HEPK7dSMY8t4BmibDBsw4v%2BElrdYG0ho8XjGZUXqIUCI%2BfZ%2BjHgemVfeP1GWBPi8ByJT5Nna0twygMqakDttZAKYOjL8WmzgnVBsmKfq9Xr1ePrXkbCFrMV4qPXgrV9t68G6qfGM1FPr6R87mSmY) so that people could more easily input their own estimates. This was followed by the [Swift Centre](https://www.swiftcentre.org/will-russia-use-a-nuclear-weapon/) ([a](https://web.archive.org/web/20221115164115/https://www.swiftcentre.org/will-russia-use-a-nuclear-weapon/)), and both estimates were reported in [WIRED magazine](https://www.wired.co.uk/article/micromorts-nuclear-war). Since then the probability seems much lower, as the strategic situation has become clearer.
Some more pessimistic forecasts by [Max Tegmark](https://t.co/HTKLphcOxG) ([a](https://web.archive.org/web/20221008130254/https://www.lesswrong.com/posts/Dod9AWz8Rp4Svdpof/why-i-think-there-s-a-one-in-six-chance-of-an-imminent)) were seen by [Elon Musk](https://twitter.com/elonmusk/status/1579099787864903680?lang=en), and may have played a role in Musk's [refusal](https://www.businessinsider.com/elon-musk-blocks-starlink-in-crimea-amid-nuclear-fears-report-2022-10?r=US&IR=T) ([a](https://web.archive.org/web/20221011175640/https://www.businessinsider.com/elon-musk-blocks-starlink-in-crimea-amid-nuclear-fears-report-2022-10?r=US&IR=T)) to let Ukraine use his Starlink service over Crimea.
One of the sharpest prediction market bettors [objected](https://twitter.com/Domahhhh/status/1582128628472872960) to the above estimates, and I [followed up](https://twitter.com/NunoSempere/status/1582160854434209792) with some discussion.
Superforecaster Anneinak correctly goes with her gut, against the polls, on the [Alaskan Congressional elections](https://www.gjopen.com/comments/1514904) ([a](https://web.archive.org/web/20221115164038/https://www.gjopen.com/comments/1514904)).
An academic initiative by the name of [CRUCIAL](https://www.crucialab.net) ([a](https://web.archive.org/web/20221115164150/https://www.crucialab.net/)) is looking at predicting climate change effects using prediction markets.
### Opportunities
The [Council on Strategic Risks](https://councilonstrategicrisks.org/) ([a](http://web.archive.org/web/20221102172141/https://councilonstrategicrisks.org/)) is hiring for a [full-time Strategic Foresight Senior Fellow](https://councilonstrategicrisks.org/2022/10/11/were-hiring-strategic-foresight-fellow/) ([a](http://web.archive.org/web/20221020014121/https://councilonstrategicrisks.org/2022/10/11/were-hiring-strategic-foresight-fellow/)), and is offering $78,000 to $114,000 per year plus benefits. My impression is that this post would be impactful and policy-relevant.
The [$5k challenge to quantify the impact of 80,000 hours' top career paths](https://forum.effectivealtruism.org/posts/noDYmqoDxYk5TXoNm/usd5k-challenge-to-quantify-the-impact-of-80-000-hours-top) ([a](http://web.archive.org/web/20221030044825/https://forum.effectivealtruism.org/posts/noDYmqoDxYk5TXoNm/usd5k-challenge-to-quantify-the-impact-of-80-000-hours-top)) is still open, until the 1st of December. So far I only know of two applications, and since the pot is split between the participants, participation might have a particularly high expected monetary value.
### Research
#### Shortform
Katja Grace looks at her [calibration in 1000 predictions](https://worldspiritsockpuppet.substack.com/p/calibration-of-a-thousand-predictions) ([a](https://web.archive.org/web/20221115164233/https://worldspiritsockpuppet.substack.com/p/calibration-of-a-thousand-predictions)):
[<img src="https://i.imgur.com/FHAGMbl.png" class='.img-medium-center'>](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F65bf5450-edef-4e2d-bd1e-c8ee9fbca01a_648x630.png)
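For readers who haven't seen this kind of exercise: a calibration check buckets one's stated probabilities and compares them against the observed frequency of the predicted events. A minimal sketch in R, using simulated predictions rather than Katja's data:

```
# Simulate a well-calibrated forecaster: stated probabilities p, outcomes ~ Bernoulli(p)
set.seed(1)
p <- runif(1000)
outcome <- rbinom(1000, size = 1, prob = p)
# Bucket predictions into deciles and compare stated vs. observed frequencies
bucket <- cut(p, breaks = seq(0, 1, by = 0.1))
aggregate(
  data.frame(stated = p, observed = outcome),
  by = list(bucket = bucket),
  FUN = mean
) # for a calibrated forecaster, stated ≈ observed in every bucket
```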
Callum McDougall writes [Six (and a half) intuitions for KL divergence](https://www.perfectlynormal.co.uk/blog-kl-divergence) ([a](https://web.archive.org/web/20221112073956/https://www.perfectlynormal.co.uk/blog-kl-divergence)).
Terence Tao has [two](https://nitter.it/daniel_eth/status/1576842503210221568) ([a](https://web.archive.org/web/20221115163921/https://nitter.it/daniel_eth/status/1576842503210221568)) introductory [blogposts](https://terrytao.wordpress.com/2022/10/07/a-bayesian-probability-worksheet/) ([a](http://web.archive.org/web/20221112164011/https://terrytao.wordpress.com/2022/10/07/a-bayesian-probability-worksheet/)) on Bayesian probability theory.
I posted [Five slightly more hardcore Squiggle models](https://forum.effectivealtruism.org/posts/BDXnNdBm6jwj6o5nc/five-slightly-more-hardcore-squiggle-models) ([a](https://nunosempere.com/blog/2022/10/10/five-slightly-more-hardcore-squiggle-models/)).
#### Longform
I came across this really neat explanation of Markov Chain Monte Carlo: [Markov Chain Monte Carlo Without all the Bullshit](https://jeremykun.com/2015/04/06/markov-chain-monte-carlo-without-all-the-bullshit/) ([a](http://web.archive.org/web/20221101105429/https://jeremykun.com/2015/04/06/markov-chain-monte-carlo-without-all-the-bullshit/)). It requires some knowledge of linear algebra, but is otherwise accessible. I would encourage readers who have heard about the method but never learnt how it works to give it a read.
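As a minimal taste of the technique, here is a toy random-walk Metropolis sampler, the simplest member of the MCMC family, targeting a standard normal; the post explains why chains like this one work:

```
# Random-walk Metropolis: propose a jump, accept with probability min(1, density ratio)
set.seed(42)
target_density <- function(x) dnorm(x) # an unnormalized density would work just as well
n_samples <- 10000
samples <- numeric(n_samples)
x <- 0
for (i in 1:n_samples) {
  proposal <- x + rnorm(1, sd = 1)
  if (runif(1) < target_density(proposal) / target_density(x)) {
    x <- proposal
  }
  samples[i] <- x
}
c(mean = mean(samples), sd = sd(samples)) # should approach 0 and 1
```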
Sam Nolan &co create estimates [explicitly quantifying](https://forum.effectivealtruism.org/posts/Nb2HnrqG4nkjCqmRg/quantifying-uncertainty-in-givewell-ceas) ([a](https://archive.ph/0kY8q)) the uncertainty in GiveWell's cost-effectiveness analyses.
[<img src="https://i.imgur.com/1VifplX.png" class='.img-medium-center'>](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F93e54c3d-f5df-495a-90e8-d3dc3482e2ef_1462x567.png)
---
Note to the future: All links are added automatically to the Internet Archive, using this [tool](https://github.com/NunoSempere/longNowForMd) ([a](http://web.archive.org/web/20220711161908/https://github.com/NunoSempere/longNowForMd)). "(a)" for archived links was inspired by [Milan Griffes](https://www.flightfromperfection.com/) ([a](http://web.archive.org/web/20221109123552/https://www.flightfromperfection.com/)), [Andrew Zuckerman](https://www.andzuck.com/) ([a](http://web.archive.org/web/20220316214638/https://www.andzuck.com/)), and [Alexey Guzey](https://guzey.com/) ([a](http://web.archive.org/web/20221111053555/https://guzey.com/)).
---
> In 1646, Magnenus estimated the number of atoms contained in a piece of incense from an argument based on the sense of smell (if a fraction of the grain is burned, the number of particles can be estimated from the volume within which the scent is still perceptible). His estimate for the number of particles in a piece of incense "not larger than a pea" was of the order of 10^18. This estimate is remarkably accurate, within about three orders of magnitude of the true value (based on the number of molecules in the unburned incense) and thus only one order of magnitude off in linear dimension of the molecule. Magnenus was by far the earliest scholar to give a reasonable estimate for the size of a molecule; the first "modern" estimate was given more than 200 years later, in 1865, by Josef Loschmidt
>
> — Wikipedia, on [Johann Chrysostom Magnenus](https://en.wikipedia.org/wiki/Johann_Chrysostom_Magnenus)
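Out of curiosity, the shape of Magnenus's argument can be reproduced as a quick Fermi estimate. The inputs below are my own guesses, not his, and this toy version only bounds the count from below:

```
# Fermi sketch (made-up inputs): if burning a fraction of the grain scents a whole
# room, then each breath of that air must still contain at least one odor particle.
room_volume     <- 100  # m^3, the space the scent fills
breath_volume   <- 5e-4 # m^3, ~0.5 liters per breath
burned_fraction <- 0.1  # fraction of the grain that was burned
particles_released <- room_volume / breath_volume # one particle per breath, at minimum
particles_in_grain <- particles_released / burned_fraction
particles_in_grain # ~2e6, a weak lower bound
```

Magnenus presumably used much more aggressive dilution assumptions to get to ~10^18.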
---

@ -0,0 +1,280 @@
## Description: plots Open Philanthropy grant spending by year and cause area, optionally against Dustin Moskovitz's wealth.
## Libraries
### Install
# install.packages("ggplot2")
# install.packages("readr")
### Load
library("ggplot2")
library("readr")
library("ggthemes")
library("magrittr")
library("RColorBrewer")
library("ggsci")
## Data import
setwd("/home/loki/Documents/core/ea/fresh/misc/openphil-funding")
data <- read.csv("grants.csv", header=TRUE, stringsAsFactors = FALSE)
## Data cleaning
colnames(data)
getYear <- function(dateRow){
year = strsplit(dateRow, " ")[[1]][2]
return(year)
}
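# quick sanity checks; results are printed but not stored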
getYear(data$Date[1])
as.vector(sapply(data$Date, getYear))
df <- list()
df$year <- as.vector(sapply(data$Date, getYear))
df$amount <- as.vector(sapply(data$Amount, parse_number))
df$amount <- ifelse(is.na(df$amount), 0, df$amount)
df$area <- as.vector(data$Focus.Area)
df <- as.data.frame(df)
df$area <- as.vector(data$Focus.Area) # not sure why this line is needed, but things break otherwise
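# (my guess: as.data.frame() coerces strings to factors under R < 4.0, so re-assigning area restores a character vector)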
# View(df)
## Classify according to areas
areas <- unique(df$area)
ea_growth <- c("Effective Altruism Community Growth", "Effective Altruism Community Growth (Global Health and Wellbeing)")
global_health <- c("South Asian Air Quality", "Human Health and Wellbeing", "GiveWell-Recommended Charities", "Global Aid Policy", "Global Health & Wellbeing", "Global Health & Development","Science for Global Health")
longtermism <- c("Biosecurity & Pandemic Preparedness", "Potential Risks from Advanced AI", "Science Supporting Biosecurity and Pandemic Preparedness", "Longtermism")
animal_welfare <- c("Farm Animal Welfare", "Broiler Chicken Welfare", "Cage-Free Reforms", "Alternatives to Animal Products")
scientific_research <- c("Transformative Basic Science", "Scientific Research", "Other Scientific Research Areas", "Scientific Innovation: Tools and Techniques")
politicy_advocacy <- c("Land Use Reform","Macroeconomic Stabilization Policy", "Criminal Justice Reform", "Immigration Policy")
not_other <- c(ea_growth, global_health, longtermism, animal_welfare, scientific_research, politicy_advocacy)
other <- areas[!(areas %in% not_other)]
df$area <- ifelse(df$area %in% ea_growth, "EA Community Building", df$area)
df$area <- ifelse(df$area %in% global_health, "Global Health and Wellbeing", df$area)
df$area <- ifelse(df$area %in% longtermism, "Longtermism & GCRs", df$area)
df$area <- ifelse(df$area %in% animal_welfare, "Animal Welfare", df$area)
df$area <- ifelse(df$area %in% scientific_research, "Scientific Research", df$area)
df$area <- ifelse(df$area %in% politicy_advocacy, "Policy Advocacy", df$area)
df$area <- ifelse(df$area %in% other, "Other", df$area)
df$area
## Aggregate by year and area
years <- c(2014:2022) # as.vector(unique(df$year))
num_years <- length(years)
area_names <- as.vector(unique(df$area))
num_areas <- length(area_names)
df2 <- list()
df2$area <- sort(rep(area_names, num_years))
df2$year <- rep(years, num_areas)
df2 <- as.data.frame(df2)
getAmountForYearAreaPair <- function(a_df, target_year, target_area){
filter = dplyr::filter
# target_year = 2022
# target_area = "Longtermism"
rows = a_df %>% filter(year == target_year) %>% filter(area == target_area)
return(sum(rows$amount))
}
getAmountForYearAreaPair(df, 2022, "Longtermism & GCRs")
getAmountForArea <- function(a_df, target_area){
filter = dplyr::filter
rows = a_df %>% filter(area == target_area)
return(sum(rows$amount))
}
getAmountForArea(df, "Longtermism & GCRs")
amounts <- c()
for(i in c(1:dim(df2)[1])){
amount <- getAmountForYearAreaPair(df, df2$year[i], df2$area[i]) # aggregate from the raw grants, not from df2
amounts <- c(amounts, amount)
}
df2$amount <- amounts
## Order by cumulative amount
df2$cummulative_amount_for_its_area = sapply(df2$area, function(area) {
return(getAmountForArea(df, area))
})
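# used as the group aesthetic below, so that bars stack in order of each area's total spend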
## Plotting
title_text="Open Philanthropy allocation by year and cause area"
subtitle_text="with my own aggregation of categories"
palette = "Classic Red-Blue"
direction = -1
open_philanthropy_plot <- ggplot(data=df2, aes(x=year, y=amount, fill=area, group = cummulative_amount_for_its_area))+
geom_bar(stat="identity")+
labs(
title=title_text,
subtitle=subtitle_text,
x=element_blank(),
y=element_blank()
) +
# scale_fill_wsj() +
# scale_fill_tableau(dir =1) +
# scale_fill_tableau(palette, dir=direction) +
# scale_fill_viridis(discrete = TRUE) +
# scale_fill_brewer(palette = "Set2") +
scale_fill_d3( "category20", alpha=0.8) +
# scale_fill_uchicago("dark") +
# scale_fill_startrek() +
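# y axis displayed in millions of dollars, with breaks every $100M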
scale_y_continuous(labels = scales::dollar_format(scale = 0.000001, suffix = "M"), breaks = c(0:6)*10^8)+
scale_x_continuous(breaks = years)+
theme_tufte() +
theme(
legend.title = element_blank(),
plot.title = element_text(hjust = 0.5),
plot.subtitle = element_text(hjust = 0.5),
legend.position="bottom",
legend.box="vertical",
axis.text.x=element_text(angle=60, hjust=1),
legend.text=element_text(size=7, hjust = 0.5)
) +
guides(fill=guide_legend(nrow=3,byrow=TRUE))
open_philanthropy_plot
getwd() ## Working directory in which the file will be saved. Can be changed with setwd("/your/directory")
height = 5
width = 5
ggsave(plot=open_philanthropy_plot, "open_philanthropy_grants_stacked.png", width=width, height=height, bg = "white")
## Including Dustin Moskovitz's wealth
coeff <- 10^7*4
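# wealth below is in $B; coeff maps it onto the grant axis ($20B -> $800M), and the secondary axis undoes the scaling for its labels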
wealth <- c(6, 8, 12, 15, 18, 12, 14, 19, 14)
df2$wealth <- rep(wealth * coeff, num_areas)
make_fortune_plot <- function(show_fortune_legend = FALSE) {
open_philanthropy_plot_with_fortune <- ggplot(data=df2, aes(x=year, y=amount, fill=area, group = cummulative_amount_for_its_area))+
geom_bar(stat="identity")+
geom_point(
aes(x=year, y=wealth), size=2, color="darkblue", shape=4,
show.legend=show_fortune_legend
)+
labs(
title=title_text,
subtitle=subtitle_text,
x=element_blank(),
y=element_blank()
) +
# scale_fill_wsj() +
# scale_fill_tableau(dir =1) +
# scale_fill_tableau(palette, dir=direction) +
# scale_fill_viridis(discrete = TRUE) +
# scale_fill_brewer(palette = "Set2") +
scale_fill_d3( "category20", alpha=0.8) +
# scale_fill_uchicago("dark") +
# scale_fill_startrek() +
scale_y_continuous(
labels = scales::dollar_format(scale = 0.000001, suffix = "M"),
name="OpenPhil donations",
breaks = c(0:5)*10^8,
sec.axis = sec_axis(
~.*1,
name="Dustin Moskovitz's fortune\n(est. Bloomberg)",
breaks = seq(0,20,by=5)*coeff,
labels = c("$0B", "$5B","$10B","$15B", "$20B")
),
limits=c(0,8*10^8)
)+
scale_x_continuous(breaks = years)+
theme_tufte() +
theme(
legend.title = element_blank(),
plot.title = element_text(hjust = 0.5),
plot.subtitle = element_text(hjust = 0.5),
legend.position="bottom",
legend.box="vertical",
axis.text.x=element_text(angle=60, hjust=1),
axis.title.y = element_text(vjust=3, hjust=0.25, size=10),
axis.title.y.right = element_text(vjust=3, hjust=0.5, size=10),
legend.text=element_text(size=8)
) +
guides(fill=guide_legend(nrow=4,byrow=TRUE))
# open_philanthropy_plot_with_fortune
height = 6
width = 5
filename = ifelse(
show_fortune_legend,
"open_philanthropy_plot_with_fortune.png",
"open_philanthropy_plot_with_fortune_clean_labels.png"
)
ggsave(plot=open_philanthropy_plot_with_fortune, filename, width=width, height=height, bg = "white")
}
make_fortune_plot(TRUE)
make_fortune_plot(FALSE)
## Look at the different longtermist areas independently.
longtermism <- c("Biosecurity & Pandemic Preparedness", "Potential Risks from Advanced AI", "Science Supporting Biosecurity and Pandemic Preparedness", "Longtermism")
df3 <- list()
df3$year <- as.vector(sapply(data$Date, getYear))
df3$amount <- as.vector(sapply(data$Amount, parse_number))
df3$amount <- ifelse(is.na(df3$amount), 0, df3$amount)
df3$area <- as.vector(data$Focus.Area)
df3 <- as.data.frame(df3)
df3$area <- as.vector(data$Focus.Area)
df3 <- df3 %>% dplyr::filter(area %in% longtermism)
# View(df3)
years <- c(2014:2022) # as.vector(unique(df$year))
num_years <- length(years)
area_names <- longtermism
num_areas <- length(area_names)
df4 <- list()
df4$area <- sort(rep(area_names, num_years))
df4$year <- rep(years, num_areas)
df4 <- as.data.frame(df4)
# View(df4)
getAmountForYearAreaPair(df3, 2022, "Longtermism")
amounts <- c()
for(i in c(1:dim(df4)[1])){
amount <- getAmountForYearAreaPair(df3, df4$year[i], df4$area[i])
amounts <- c(amounts, amount)
}
df4$amount <- amounts
df4$cummulative_amount_for_its_area = sapply(df4$area, function(area) {
return(getAmountForArea(df3, area))
})
## Plotting longtermist funding
title_text="Open Philanthropy allocation by year and cause area"
subtitle_text="restricted to longtermism & GCRs"
palette = "Classic Red-Blue"
direction = -1
open_philanthropy_plot_lt <- ggplot(data=df4, aes(x=year, y=amount, fill=area, group=cummulative_amount_for_its_area))+
geom_bar(stat="identity")+
labs(
title=title_text,
subtitle=subtitle_text,
x=element_blank(),
y=element_blank()
) +
# scale_fill_wsj() +
# scale_fill_tableau(dir =1) +
# scale_fill_tableau(palette, dir=direction) +
# scale_fill_viridis(discrete = TRUE) +
# scale_fill_brewer(palette = "Set2") +
scale_fill_d3( "category20", alpha=0.8) +
# scale_fill_uchicago("dark") +
# scale_fill_startrek() +
scale_y_continuous(labels = scales::dollar_format(scale = 0.000001, suffix = "M"))+
scale_x_continuous(breaks = years)+
theme_tufte() +
theme(
legend.title = element_blank(),
plot.title = element_text(hjust = 0.5),
plot.subtitle = element_text(hjust = 0.5),
legend.position="bottom",
legend.box="vertical",
axis.text.x=element_text(angle=60, hjust=1),
legend.text=element_text(size=7)
) +
guides(fill=guide_legend(nrow=3,byrow=TRUE))
getwd() ## Working directory in which the file will be saved. Can be changed with setwd("/your/directory")
height = 5
width = 6
## open_philanthropy_plot_lt
ggsave(plot=open_philanthropy_plot_lt, "open_philanthropy_grants_lt.png", width=width, height=height, bg = "white")

@ -0,0 +1,7 @@
#!/bin/bash
cd /home/loki/Documents/core/ea/fresh/misc/openphil-funding
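# the commented lines below re-download grants.csv from Open Philanthropy; uncomment them to refresh the data before re-running the analysis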
# wget https://www.openphilanthropy.org/wp-admin/admin-ajax.php?action=generate_grants&nonce=1920c9d172 -O grants.csv
# rm admin-ajax.php?action=generate_grants
Rscript analysis.R

@ -0,0 +1,50 @@
Some data on the stock of EA™ funding
=====================================
### Overall Open Philanthropy funding
Open Philanthropy's allocation of funding through time looks as follows:
![Bar graph of OpenPhil allocation by year. Global health leads for most years. Catastrophic risks are usually second since 2017. Overall spend increases over time.](https://i.imgur.com/rqwFsiN.png)
Dustin Moskovitz's wealth looks, per [Bloomberg](https://www.bloomberg.com/billionaires/profiles/dustin-a-moskovitz), like this:
![Line chart of Dustin Moskovitz's wealth over time, with a dip in 2019 and a peak in 2021.](https://i.imgur.com/cObIgOQ.png)
If we plot the two together, we don't see that much of a correlation:
![Combination of the previous two charts. Moskovitz's fortune does not match changes in total spend or category composition.](https://i.imgur.com/kCDWe8o.png)
Holden Karnofsky, head of Open Philanthropy, [writes](https://forum.effectivealtruism.org/posts/mCCutDxCavtnhxhBR/some-comments-on-recent-ftx-related-events) that the Bloomberg estimates might not be all that accurate:
> Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna's net worth give a substantially understated picture of our available resources. That's because, among other issues, they don't include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume)
Edited to add: Moskovitz replies:
<blockquote class="twitter-tweet tw-align-center"><p lang="en" dir="ltr">Actually the Bloomberg tracker looks pretty close, though missing 3B or so of foundation assets. The Forbes one is like half the Bloomberg estimate 🤷‍♂️</p>&mdash; Dustin Moskovitz (@moskov) <a href="https://twitter.com/moskov/status/1594337871355207680?ref_src=twsrc%5Etfw">November 20, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
In mid-2022, Forbes put Sam Bankman-Fried's wealth at [$24B](https://www.forbes.com/profile/sam-bankman-fried/?sh=706b96804449). So in some sense, adding his peak fortune to Moskovitz's, the amount of money allocated to or according to Effective Altruism™ peaked somewhere close to $50B.
### Funding flow restricted to longtermism & global catastrophic risks (GCRs)
The analysis becomes a bit more interesting if we look only at longtermism and GCRs:
![Bar graph of OpenPhil allocation to catastrophic risks by year. AI leads most years, followed by biosecurity.](https://i.imgur.com/Z2dMjfN.png)
In contrast, per [Fortune](https://web.archive.org/web/20221116022228/https://fortune.com/2022/11/14/balkman-fried-ftx-collapse-threatens-effective-altruism-billions-charity-philanthropy/), the FTX Foundation had given out $160M by September 2022. My sense is that most (say, maybe 50% to 80%) of those grants went to "longtermist" cause areas, broadly defined. In addition, SBF and other FTX employees led a $580M funding round for [Anthropic](https://www.privateequitywire.co.uk/2022/05/05/314319/ftx-ceo-leads-580m-series-b-round-anthropic).
### Further analysis
It's unclear what would have to happen for Open Philanthropy to pick up the slack here. In practical terms, I'm not sure whether their team has enough evaluation capacity for an additional $100M/year, or whether they will choose to expand that.
Two somewhat informative posts from Open Philanthropy on this are [here](https://forum.effectivealtruism.org/posts/HPdWWetJbv4z8eJEe/open-phil-is-seeking-applications-from-grantees-impacted-by) and [here](https://forum.effectivealtruism.org/posts/mCCutDxCavtnhxhBR/some-comments-on-recent-ftx-related-events).
I'd be curious about both interpretative analysis and forecasting on these numbers. I am up for supporting the latter by e.g., committing to rerunning this analysis in a year.
### Appendix: Code
The code to produce these plots can be found [here](./.source/analysis.R); lines 42 to 48 make the division into categories fairly apparent. To execute this code you will need a working R installation and a document named [grants.csv](./.source/grants.csv), which can be downloaded from [Open Philanthropy's website](https://www.openphilanthropy.org/grants/).
<p><section id="isso-thread">
<noscript>Javascript needs to be activated to view comments.</noscript>
</section></p>

@ -0,0 +1,222 @@
List of past fraudsters similar to SBF
==============
To inform my forecasting around FTX events, I looked at the [Wikipedia list of fraudsters](https://en.wikipedia.org/wiki/List_of_fraudsters) and selected those I subjectively found similar—you can see a spreadsheet with my selection [here](https://docs.google.com/spreadsheets/d/1N_n3HDzrZtVw8XrToZw9U0Uz61Fyd3sF8m_FiZqlCPE/edit#gid=0). For each of the similar fraudsters, I present some common basic details below together with some notes.
My main takeaway is that many salient aspects of FTX have precedents: the incestuous relationship between an exchange and a trading house (Bernie Madoff, Richard Whitney), a philosophical or philanthropic component (Enric Duran, Tom Petters, etc.), embroiling friends and families in the scheme (Charles Ponzi), or multi-billion fraud not getting found out for years (Elizabeth Holmes, many others).
### Fraud with a philosophical, philanthropic or religious component
#### [Bernard Ebbers](https://en.wikipedia.org/wiki/Bernard_Ebbers)
* Prison: Yes
* Jurisdiction: US 
* Amount: $18B (all amounts are approximate and inflation-adjusted using [in2013dollars.com](https://www.in2013dollars.com/))
I find the section on his faith most informative:
> While CEO of WorldCom, he was a member of the Easthaven Baptist Church in Brookhaven, Mississippi. As a high-profile member of the congregation, Ebbers regularly taught Sunday school and attended the morning church service with his family. His faith was overt, and he often started corporate meetings with prayer. When the allegations of conspiracy and fraud were first brought to light in 2002, Ebbers addressed the congregation and insisted on his innocence. "I just want you to know you aren't going to church with a crook," he said. "No one will find me to have knowingly committed fraud."
Also note that eventually, $8B was restored to investors.
#### [Enric Duran](https://en.wikipedia.org/wiki/Enric_Duran)
* Prison: No, life in hiding
* Jurisdiction: Spain
* Amount: ~$700k
During the 2008 crisis, he robbed Spanish banks by taking out spurious loans, and donated the amounts to anticapitalist causes. He wrote a guide about how to do this, and widely distributed it: an online version in Spanish and English can be found [here](https://www.bibliotecapleyades.net/sociopolitica/sociopol_globalbanking25.htm).
Personal takeaway: Stealing money for altruistic causes is not unprecedented. And if you _are_ going to cross that line, it can be done with much more style. It will also be viewed much more sympathetically if you steal from organizations perceived to be corrupt.
#### [Tom Petters](https://en.wikipedia.org/wiki/Tom_Petters)
* Prison: Yes
* Jurisdiction: US 
* Amount: ~$5B
Some of his donations were later returned (bold emphasis my own):
> Petters was appointed to the board of trustees for the College of St. Benedict in 2002; his mother had attended the school. In 2006 he gave $2 million for improvements to St. John's Abbey on the campus of adjacent Saint John's University. In light of the criminal prosecution, St. John's Abbey arranged to return the $2 million gift to the court-appointed receiver for the Petters bankruptcy. In October 2007, Petters made a $5.3 million gift to the College of St. Benedict to create the Thomas J. Petters Center for Global Education. In 2006, he served as a co-chairman of a capital campaign at his high school, Cathedral High School, and offered to match donations up to $750,000.
> Petters formed the John T. Petters Foundation to provide gifts and endowments at select universities to benefit future college students. The foundation was formed to honor his son, John Thomas Petters, who was killed on a visit in 2004 to Florence, Italy. The college student inadvertently wandered onto private property where the owner, Alfio Raugei, mistook him for an intruder and stabbed him to death. In response, in September 2004, Tom Petters pledged $10 million to his late son's college, Miami University. **He later promised an additional $4 million, with the total to support two professorships and the John T. Petters Center for Leadership, Ethics and Skills Development** within the Farmer School of Business. **Miami University has since returned Petters' donation following his conviction**. Petters also donated $12 million to Rollins College in Winter Park, Florida, where he was a member of the Board of Trustees, to create two new faculty chairs in International Business.
#### [Allen Stanford](https://en.wikipedia.org/wiki/Allen_Stanford)
* Prison: Yes
* Jurisdiction: British Overseas Territory 
* Amount: $20B
This case is interesting because it also occurred overseas. Uninterestingly, Allen Stanford was caught in Virginia while visiting his girlfriend.
> A leaked cable message from the U.S. Embassy in the Bahamas reported as early as 2006 that companies under Stanford's control were "rumoured to engage in bribery, money laundering, and political manipulation". The U.S. Ambassador to the Bahamas at the time was reported to have "managed to stay out of any one-on-one photos with Stanford" during a charity breakfast event.
> A February 2009 Houston Chronicle article described Stanford as "the leading benefactor, promoter, employer and public persona" of Antigua and Barbuda. On November 1, 2006, Stanford was appointed Knight Commander of the Order of the Nation (KCN) of Antigua and Barbuda by the Antiguan government. Prince Edward, Earl of Wessex, joined the then Governor-General of Antigua and Barbuda, Sir James Carlisle, to make this announcement during the Silver Jubilee Independence Day Celebration. After being knighted, Stanford used the awarded title "Sir Allen" often; he was generally referred to as such both by Antiguans and internationally.
> In October 2009, the National Honours Committee of Antigua and Barbuda voted unanimously to strip Stanford of his knighthood.
### Fraud on top of a legitimate business
#### [Marc Dreier](https://en.wikipedia.org/wiki/Marc_Dreier)
* Prison: Yes
* Jurisdiction: US  
* Amount: $1B
Dreier seems to have been a very smart guy who initially didn't attain success. "I recall only that I was desperate for some measure of the success that I felt had eluded me. I lost my perspective and my moral grounding, and really, in a sense, I just lost my mind."
> Marc Dreier's only television interview aired in 2009 on 60 Minutes titled, “The Swindler,” which was hosted by Steve Kroft. Dreier notes that he thought he would be featured on 60 minutes for something good that he had done, not for something bad. Kroft asks Dreier a question that was asked of Bernie Madoff, who many people find similarities with, about how someone could have kept up a scam for so long. Dreier noted that he had multiple stressors simultaneously that kept up his focus: the scam, a legitimate law business (funded by the scam), and his work as a practicing attorney.
#### [Richard Whitney](https://en.wikipedia.org/wiki/Richard_Whitney_(financier))
* Prison: Yes
* Jurisdiction: US 
* Amount: $50M
This case is remarkable because he was previously the president of the New York Stock Exchange, borrowed from his brother, and ultimately went bankrupt:
> Richard Whitney (August 1, 1888 – December 5, 1974) was an American financier, president of the New York Stock Exchange from 1930 to 1935. He was later convicted of embezzlement and imprisoned.
> At the same time that Richard Whitney was achieving great success, his brother George had also prospered at Morgan bank and by 1930 had been anointed as the likely successor to bank president, Thomas W. Lamont. While Richard Whitney was assumed to be a brilliant financier, he in fact had personally been involved with speculative investments in a variety of businesses and had sustained considerable losses. To stay afloat, he began borrowing heavily from his brother George as well as other wealthy friends, and after obtaining loans from as many people as he could, turned to embezzlement to cover his mounting business losses and maintain his extravagant lifestyle. He stole funds from the New York Stock Exchange Gratuity Fund, the New York Yacht Club (where he served as the Treasurer), and $800,000 worth of bonds from his father-in-law's estate.
#### [Barry Minkow](https://en.wikipedia.org/wiki/Barry_Minkow)
* Prison: Yes
* Jurisdiction: US 
* Amount: $500M
> " While most Ponzi schemes are based on non-existent businesses, ZZZZ Best's carpet-cleaning division was very real and won high marks for its quality"
The Wikipedia page just _keeps on going_:
> " … the Los Angeles Police Department raided ZZZZ Best's headquarters and Minkow's home, and found evidence that the company was being used to launder drug profits for organized crime"
> After being released from jail, Minkow became a pastor and fraud investigator in San Diego, and spoke at churches and schools about ethics. This came to an end in 2011, when he admitted to helping deliberately drive down the stock price of homebuilder Lennar and was ordered back to prison for five years. Three years later, Minkow admitted to defrauding his own church and was sentenced to an additional five years in prison. He is subject to restitution requirements totaling $612 million.
#### [Bernie Madoff](https://en.wikipedia.org/wiki/Bernie_Madoff)
* Prison: Yes
* Jurisdiction: US 
* Amount: $24B
There was a similar structure of having a trading house and a hedge fund. However, industry insiders suspected the scheme. Most of the money was eventually returned.
#### [Lou Pearlman](https://en.wikipedia.org/wiki/Lou_Pearlman)
* Prison: Yes
* Jurisdiction: US 
* Amount: $500M
Pearlman used income from the [Backstreet Boys](https://en.wikipedia.org/wiki/Backstreet_Boys) and other bands he either created or managed to prop up an almost nonexistent aviation company.
#### [Crazy Eddie](https://en.wikipedia.org/wiki/Crazy_Eddie)
* Prison: Yes
* Jurisdiction: US
* Amount: ~$100M
Shares a similar pattern of having fraud on top of a legitimate business.
#### [Samuel D. Waksal](https://en.wikipedia.org/wiki/Samuel_D._Waksal)
* Prison: Yes
* Jurisdiction: US 
* Amount: $10M to $10B
The Wikipedia page is unclear on whether the drugs he pioneered were fraudulent, or whether the fraud was adjacent but unrelated.
### Other somehow subjectively similar cases
#### [Charles Ponzi](https://en.wikipedia.org/wiki/Charles_Ponzi)
* Prison: Yes 
* Jurisdiction: US 
* Amount: $200M
> Ponzi's investors even included those closest to him, like his chauffeur John Collins and his own brother-in-law. Ponzi was indiscriminate about whom he allowed to invest, from young newspaper boys investing a few dollars to high-net-worth individuals, like a banker from Lawrence, Kansas, who invested $10,000
#### [Tino De Angelis](https://en.wikipedia.org/wiki/Tino_De_Angelis)
* Prison: Yes
* Jurisdiction: US
* Amount: $2B
Also borrowed based on faulty collateral. After his prison term, he went on to scam further.
#### [Alves dos Reis](https://en.wikipedia.org/wiki/Alves_dos_Reis)
* Prison: Yes
* Jurisdiction: Portugal, Angola 
* Amount: Absurd. “0.88% of Portugal's nominal GDP at the time”, which would be ~$2B today.
A complicated plot to forge Portuguese banknotes in Angola, which was brought down when rivals accused it of being a front for the Germans, foiling a plan to become legitimate by acquiring the Bank of Portugal and hushing up the fraudulent origins of the con.
#### [Jordan Belfort](https://en.wikipedia.org/wiki/Jordan_Belfort)
* Prison: Yes
* Jurisdiction: US 
* Amount: $400M
Fictionalized in the _Wolf of Wall Street_ film. Belfort ended up paying 50% of his future income towards restitution, and giving somewhat scammy motivational and training seminars.
EDIT h/t Greg Colbourn: The above is incorrect. Belfort was initially supposed to pay 50% of his salary, but sneaked out of it. See [here](https://forum.effectivealtruism.org/posts/jqJLcsqEqdnd35kTB/list-of-past-fraudsters-similar-to-sbf?commentId=4XbvuyyL5pLvGrBZQ#comments) for more details.
#### [Elizabeth Holmes](https://en.wikipedia.org/wiki/Elizabeth_Holmes)
* Prison: Yes
* Jurisdiction: US 
* Amount: $7B
Holmes seems to have been a similarly charismatic founder. Though in this case, it seems like the enterprise was a fraud from the beginning.
A takeaway for me: Fraud is surprisingly common, and investors probably price it in. 
#### [Kenneth Lay](https://en.wikipedia.org/wiki/Kenneth_Lay) and [Jeffrey Skilling](https://en.wikipedia.org/wiki/Jeffrey_Skilling) ([Enron scandal](https://en.wikipedia.org/wiki/Enron_scandal))
* Prison: No (Lay died before sentencing); yes (Skilling)
* Jurisdiction: US
* Amount: $40B
#### [James Paul Lewis Jr.](https://en.wikipedia.org/wiki/James_Paul_Lewis_Jr.)
* Prison: Yes
* Jurisdiction: US 
* Amount: $500M
Around half of his fortune was mandated to be returned, but only $11M ultimately was.
#### [Harshad Mehta](https://en.wikipedia.org/wiki/Harshad_Mehta)
* Prison: No (died before)
* Jurisdiction: India 
* Amount: $2B 
Also financed market speculation with worthless securities which he himself created.
#### [John Rigas](https://en.wikipedia.org/wiki/John_Rigas)
* Prison: Yes
* Jurisdiction: US 
* Amount: $5B
> The executives were accused of looting the corporation by concealing $2.3 billion in liabilities from corporate investors and of using corporation funds as their personal funds
#### [Ferdinand Ward](https://en.wikipedia.org/wiki/Ferdinand_Ward)
* Prison: Yes
* Jurisdiction: US  
* Amount: $200M
> Ward ran the company as a Ponzi scheme, claiming that he had inside access to government contracts, a claim which gained further credence when the president later joined the firm as a full partner.
#### [Whitaker Wright](https://en.wikipedia.org/wiki/Whitaker_Wright)
* Prison: No (suicide)
* Jurisdiction: UK
* Amount: Unknown
> At this point Wright made his criminal error. To maintain an image of solvency and success, Wright kept pushing thousands of pounds from one of his companies to another in a series of "loans". This led to some misrepresentations on balance sheets. But when he announced that, despite the apparent prosperity of his group, there would be no dividends, people became suspicious. In December 1900, the companies collapsed. Wright fled, but was brought back to stand trial. The shock waves led to a panic in London's exchange. There were other losses. The humiliated Marquess of Dufferin and Ava died in 1902 in the midst of the investigation.
<p><section id="isso-thread">
<noscript>Javascript needs to be activated to view comments.</noscript>
</section></p>

@ -0,0 +1,40 @@
# Political ideologies in tension
Currently, the various dominant ideological strains (neoliberalism, progressivism, etc.) are each able to point out each other's flaws, but not convince their opponents of their merits. I think that Goodhart's law and analogies to presocratic Greek philosophy are two fruitful lenses through which to view this situation. I conclude that better political technology is needed.
### The situation
[Goodhart's law](https://arxiv.org/abs/1803.04585)[^1] points out that when you chase a metric, you will end up getting high scores by breaking the relationship between the metric and what initially made you care about it. For example, a few years ago the Spanish government wanted to reduce deaths from traffic accidents—a laudable goal—and ended up achieving this by changing the calculation methodology for "traffic accident deaths" so as to not include people who died from traffic accidents in the hospital a few days later.
An enlightened progressive might correctly point out that a similar dynamic applies to letting laissez-faire market capitalism run amok. It will similarly lead to situations which maximize corporations' profits at the cost of human flourishing. On the less harmful side, this will result in addictive games or social media platforms. On the perverse side, this will lead to deals like the [Sumangali form of child labor](https://en.wikipedia.org/wiki/Sumangali_%28child_labour%29), where one side of the trade can't evaluate it and is thus reliably exploited. Therefore, labor protection laws and the paternal hand of the state are needed. To be clear, the argument isn't that laissez-faire market capitalism sometimes produces bad outcomes, but rather that in the absence of regulations, profit will be optimized for *at the expense* of human flourishing, and directly traded-off against it.
Conversely, neoliberals correctly see that relying on the paternal hand of the state has its own problems, like tremendous inefficiency compared to the private sector, [pork-barrel spending](https://en.wikipedia.org/wiki/Pork_barrel), myopic policies resulting from short election cycles, or corruption of the political process through [revolving doors](https://en.wikipedia.org/wiki/Revolving_door_%28politics%29) and ultimately [regulatory capture](https://en.wikipedia.org/wiki/Regulatory_capture). And market forces can indeed be a potent force to bring prosperity.
So we are in a situation where each side can make a convincing case to its own followers that the other is wrong, but isn't able to address the objections from the other side. This reminds me of presocratic philosophy, where Thales proposed that everything was made out of water and Anaximenes proposed that everything was made out of air. Like politics today, each could see that the other was wrong, but they couldn't convince the other side.
What is causing this situation is that both sides are wrong. More specifically, both sides have an [inner rhetorical contradiction](https://en.wikipedia.org/wiki/Aporia#Definitions) in that they are strongly pushing for an imperfect mechanism that doesn't always aim for human flourishing, while pretending that the mechanism they are pushing for will always result in human flourishing, no matter how strongly they push.
![Three arrows pointing in roughly the same direction, but with small differences. Two arrows represent two ideologies, another represents human flourishing. Although the human flourishing arrows and the ideology arrows seem close, they eventually start to diverge.](.images/diagram-1.png)
### Proposed solutions
At the individual level, a solution to this might involve something like following a Kantian categorical imperative: don't take actions—like profiting at the expense of other people's flourishing—that would make society shittier if other people did them as well. This might apply to personally working for a tobacco company, or personally making annoying advertisements.
At the societal level, the solution is less clear, and I think requires some ideation. Some solutions which I brainstormed were:
- Rely on charismatic political figures who are able to live with the tension between ideologies, and recognize the value in both.
- Come up with better mechanisms to align governments and human flourishing
- Come up with better mechanisms to align markets and human flourishing
- Periodically refound states and institutions to rid them of ideological cruft and realign them with human flourishing
- Have more innovation around forms of government, perhaps within smaller states or with entities such as [Próspera](https://prospera.hn)
- Read the literature on [regulatory economics](https://en.wikipedia.org/wiki/Regulatory_economics) and related areas.
Oddly enough, Effective Altruism (EA), a social movement that I am very sympathetic to, doesn't have great answers here. What it recommends is, greatly simplified, "rigorously rank problems in the world and start working on them in order of importance". But this isn't enough to run a state. In fact, it's even worse, because one of the main EA organizations, Open Philanthropy, doesn't trust itself enough to be able to specify "the good", and so employs a kludge called "worldview diversification" in the meantime.
Other ideologies also have their own ideas here. I like the vision sketched in [State Capacity Libertarianism](https://marginalrevolution.com/marginalrevolution/2020/01/what-libertarianism-has-become-and-will-become-state-capacity-libertarianism.html). My sense is that some recent feminist/progressive anthropology and social science also has the aim of highlighting or conceiving new types of societies in whose image we would improve our own society.
[^1]: In fact, [Goodhart's law](https://en.wikipedia.org/wiki/Goodhart%27s_law), also known as the [Lucas critique](https://en.wikipedia.org/wiki/Lucas_critique) says something narrower, and it's the
---
OP doesn't optimize too far because it doesn't believe it has good measures of the good, and so fears falling prey to Goodhart's law. But it could develop better measures which could allow it to deploy more optimization power.
This explains their current attachment to worldview diversification

@ -0,0 +1,44 @@
Goodhart's law and aligning politics with human flourishing
===========================================================
*Note: Written for someone I've been having political discussions with. For similarly introductory content, see [A quick note on the value of donations](https://nunosempere.com/blog/2022/04/06/note-donations/).*
The world's major ideologies, like neoliberalism and progressivism, are stuck in a stalemate. They're great at pointing out each other's flaws, but neither side can make a compelling case for itself that the other can't poke holes into. To understand why, I want to look to Goodhart's law and presocratic Greek philosophy for insights. Ultimately, though, I think we need better political tools to align governments and institutions with human flourishing.
### The situation
[Goodhart's law](https://arxiv.org/abs/1803.04585) cautions against chasing a metric. When you do, you will end up achieving high scores in a way which undermines the metric's original purpose. For example, a few years ago the Spanish government aimed to reduce deaths from traffic accidents—a worthy goal. But instead of actually saving lives, they simply changed the definition of "traffic accident deaths" to exclude people who died in the hospital days after the accident.
Progressives see this happening with unregulated capitalism. Without regulation, corporations will prioritize profits, even if it means sacrificing human well-being. For example, companies may create addictive games or social media platforms that monopolize our attention. In more extreme cases, they may exploit workers in ways that are outright abusive, such as the [Sumangali](https://en.wikipedia.org/wiki/Sumangali_%28child_labour%29) form of child labor. This is why we need labor protection laws and state intervention—to prevent such exploitation. The key point is that, without regulation, capitalism will inevitably prioritize profit over people.
Neoliberals, on the other hand, rightly point out that relying on the state has its own set of problems. Government inefficiency compared to the private sector, [pork-barrel spending](https://en.wikipedia.org/wiki/Pork_barrel), myopic policies resulting from short election cycles, or corruption of the political process through [revolving doors](https://en.wikipedia.org/wiki/Revolving_door_%28politics%29) and ultimately [regulatory capture](https://en.wikipedia.org/wiki/Regulatory_capture) can all have harmful effects. Neoliberals argue that market forces can be a powerful tool for generating prosperity, and that these issues should not be overlooked.[^1]
[^1]: To be honest, my personal experience has been that neoliberals are able to acknowledge that capitalism is an imperfect system, but still hold that it's better than the other systems humanity has tried. Progressives can likewise be more or less nuanced in their anti-market positions. So the above two paragraphs are just a quick sketch.
So each side is skilled at articulating its opponent's weaknesses, but is not as able to put forward an impregnable case for its own position. This stalemate is reminiscent of early presocratic philosophy. Thales and Anaximenes, for example, both put forth theories about the elemental composition of the universe—one claiming everything was made of water, the other asserting everything was made of air. Like politics today, each could see that the other was wrong, but they couldn't convince the other side.
The root of the problem is that both sides of this debate are in the wrong. Each contains an [inherent rhetorical contradiction](https://en.wikipedia.org/wiki/Aporia#Definitions): it advocates for an imperfect mechanism that doesn't always prioritize human flourishing, without realizing the degree to which it won't. This is my diagnosis of the current political stalemate.
![Three arrows pointing in roughly the same direction, but with small differences. Two arrows represent two ideologies, another represents human flourishing. Although the human flourishing arrow and the ideology arrows seem close, they eventually start to diverge.](https://i.imgur.com/tlCqq4q.png)
### Proposed solutions
At the individual level, you could consciously avoid actions that make society shittier, like turning down job offers from tobacco companies or declining to create irritating advertisements. You could also advocate for stronger social norms that discourage others from engaging in similar behaviors.
At the societal level, the solution is less clear, and I think it requires some ideation. Some solutions I brainstormed were to:
- Rely on political figures who can bridge ideological divides and understand the importance of multiple perspectives.
- Come up with better mechanisms to align governments with human flourishing.
- Come up with better mechanisms to align markets with human flourishing.
- Periodically re-found states and institutions to rid them of ideological cruft and realign them with human flourishing.
- Experiment more with forms of government, perhaps within smaller states or with entities such as [Próspera](https://prospera.hn).
- Read the literature on [regulatory economics](https://en.wikipedia.org/wiki/Regulatory_economics) and related areas.
The Effective Altruism (EA) movement doesn't have great answers here. It suggests rigorously ranking the world's problems and working on them in order of importance, but that isn't a powerful enough answer to run a state on. Other ideologies offer different embryonic approaches. The vision sketched in [State Capacity Libertarianism](https://marginalrevolution.com/marginalrevolution/2020/01/what-libertarianism-has-become-and-will-become-state-capacity-libertarianism.html) presents a libertarianism tweaked to deal with problems requiring large amounts of coordination, but it is lacking in detail. My sense is also that some recent feminist/progressive anthropology and social science aims to highlight or conceive new types of societies in whose image we could improve our own.
All in all, the above analysis is fairly rough. For one, it at times assumes that people share the same goals and differ only in the mechanisms they think best achieve them. Still, perhaps it is clarifying to some readers.
<p><section id="isso-thread">
<noscript>JavaScript needs to be activated to view comments.</noscript>
</section></p>

@ -0,0 +1,11 @@
Example post with comments
==========================
Bla bla
---
<section id="isso-thread">
<noscript>JavaScript needs to be activated to view comments.</noscript>
</section>

@ -0,0 +1,15 @@
<form method="post" action="https://listmonk.nunosempere.com/subscription/form" class="listmonk-form">
<div>
<h3>Subscribe</h3>
<input type="hidden" name="nonce" />
<p><input type="email" name="email" required placeholder="E-mail" /></p>
<p><input type="text" name="name" placeholder="Name (optional)" /></p>
<p>
<input id="82ff8" type="checkbox" name="l" checked value="82ff889c-f9d9-4a45-bf9a-7e2696813021" />
<label for="82ff8">nunosempere.com</label>
</p>
<p><input type="submit" value="Subscribe" /></p>
</div>
</form>

@ -1,3 +1,13 @@
## Gossip
_2022/11/20_: I've updated my tally of EA funding @ [Some data on the stock of EA™ funding](https://nunosempere.com/blog/2022/11/20/brief-update-ea-funding/). The comparison with the now-absent FTX funding might be enlightening to readers.
![](https://i.imgur.com/RwD1pP9.png)
![](https://i.imgur.com/OZwHMtV.png)
_2022/10/31_: I've added an [email subscription option](https://nunosempere.com/.subscribe/) for this blog, as well as for [samotsvety.org](https://samotsvety.org/.subscribe/).
_2022/08/30_: Per [this blogpost](https://forum.effectivealtruism.org/posts/tekdQKdfFe3YJTwML/end-to-end-encryption-for-ea#comments), I'm making my public key available [here](https://nunosempere.com/gossip/.nuno-sempere-public-key.txt). I can be contacted at nuno dot sempere dot lh at protonmail dot com.
_2022/08/14_: Updated [blog rss](https://nunosempere.com/blog/index.rss) to use the correct date. Thanks, kind stranger who left a tip on my [Admonymous](https://www.admonymous.co/loki).

@ -1,4 +1,4 @@
I'm Nu&#xF1;o Sempere. I do research, write software and predict the future. This webpage is currently a work in progress.
I'm Nu&#xF1;o Sempere. I do research, write software, and predict the future.
<img src="https://i.imgur.com/rvwA0Wr.jpg" alt="image of myself" class="img-frontpage-center">
@ -6,10 +6,8 @@ I'm Nu&#xF1;o Sempere. I do research, write software and predict the future. Thi
1. [Estimating value](https://forum.effectivealtruism.org/s/AbrRsXM2PrCrPShuZ): How do we get to expected value calculations?
2. [Forecasting Newsletter](https://forecasting.substack.com/): What is up in the world of forecasting?
3. [Vantage points](https://forum.effectivealtruism.org/s/XbCaYR3QfDaeuJ4By): What do I notice when I take a step back?
4. [Metaforecast](https://metaforecast.org/): A search engine for probabilities
3. [Metaforecast](https://metaforecast.org/): A search engine for probabilities
### Readers might also enjoy...
...reading the [gossip](/gossip) page or seeing the latest on my [blog](https://nunosempere.com/blog/) to see what's new.
### Readers might also wish to...
...read the [gossip](/gossip) page, visit my [blog](/blog), or [sign up for my newsletter](/.subscribe).

Binary file not shown.

@ -1,212 +0,0 @@
https://nunosempere.com/blog/
https://nunosempere.com/blog/2019/
https://nunosempere.com/blog/2019/06/
https://nunosempere.com/blog/2019/06/13/
https://nunosempere.com/blog/2019/06/13/EA_Mental_Health/
https://nunosempere.com/blog/2019/10/
https://nunosempere.com/blog/2019/10/04/
https://nunosempere.com/blog/2019/10/04/Social_Movements/
https://nunosempere.com/blog/2019/10/10/
https://nunosempere.com/blog/2019/10/10/Shapley_Values_I/
https://nunosempere.com/blog/2019/12/
https://nunosempere.com/blog/2019/12/19/
https://nunosempere.com/blog/2019/12/19/Amplification_I/
https://nunosempere.com/blog/2019/12/20/
https://nunosempere.com/blog/2019/12/20/Amplification_II/
https://nunosempere.com/blog/2020/
https://nunosempere.com/blog/2020/01/
https://nunosempere.com/blog/2020/01/15/
https://nunosempere.com/blog/2020/01/15/MIT_EDX_Review/
https://nunosempere.com/blog/2020/03/
https://nunosempere.com/blog/2020/03/01/
https://nunosempere.com/blog/2020/03/01/Survey_Making/
https://nunosempere.com/blog/2020/03/10/
https://nunosempere.com/blog/2020/03/10/Shapley_Values_II/
https://nunosempere.com/blog/2020/04/
https://nunosempere.com/blog/2020/04/01/
https://nunosempere.com/blog/2020/04/01/International_Supply_Chain/
https://nunosempere.com/blog/2020/04/30/
https://nunosempere.com/blog/2020/04/30/Forecasting_Newsletter_2020_04/
https://nunosempere.com/blog/2020/05/
https://nunosempere.com/blog/2020/05/31/
https://nunosempere.com/blog/2020/05/31/forecasting-newsletter-may-2020/
https://nunosempere.com/blog/2020/07/
https://nunosempere.com/blog/2020/07/01/
https://nunosempere.com/blog/2020/07/01/forecasting-newsletter-june-2020/
https://nunosempere.com/blog/2020/08/
https://nunosempere.com/blog/2020/08/01/
https://nunosempere.com/blog/2020/08/01/forecasting-newsletter-july-2020/
https://nunosempere.com/blog/2020/09/
https://nunosempere.com/blog/2020/09/01/
https://nunosempere.com/blog/2020/09/01/forecasting-newsletter-august-2020/
https://nunosempere.com/blog/2020/10/
https://nunosempere.com/blog/2020/10/01/
https://nunosempere.com/blog/2020/10/01/forecasting-newsletter-september-2020/
https://nunosempere.com/blog/2020/11/
https://nunosempere.com/blog/2020/11/01/
https://nunosempere.com/blog/2020/11/01/forecasting-newsletter-october-2020/
https://nunosempere.com/blog/2020/11/10/
https://nunosempere.com/blog/2020/11/10/incentive-problems-with-current-forecasting-competitions/
https://nunosempere.com/blog/2020/11/15/
https://nunosempere.com/blog/2020/11/15/announcing-the-forecasting-innovation-prize/
https://nunosempere.com/blog/2020/11/22/
https://nunosempere.com/blog/2020/11/22/predicting-the-value-of-small-altruistic-projects-a-proof-of/
https://nunosempere.com/blog/2020/12/
https://nunosempere.com/blog/2020/12/01/
https://nunosempere.com/blog/2020/12/01/an-experiment-to-evaluate-the-value-of-one-researcher-s-work/
https://nunosempere.com/blog/2020/12/01/forecasting-newsletter-november-2020/
https://nunosempere.com/blog/2020/12/03/
https://nunosempere.com/blog/2020/12/03/what-are-good-rubrics-or-rubric-elements-to-evaluate-and/
https://nunosempere.com/blog/2020/12/25/
https://nunosempere.com/blog/2020/12/25/big-list-of-cause-candidates/
https://nunosempere.com/blog/2021/
https://nunosempere.com/blog/2021/01/
https://nunosempere.com/blog/2021/01/01/
https://nunosempere.com/blog/2021/01/01/forecasting-newsletter-december-2020/
https://nunosempere.com/blog/2021/01/10/
https://nunosempere.com/blog/2021/01/10/2020-forecasting-in-review/
https://nunosempere.com/blog/2021/01/13/
https://nunosempere.com/blog/2021/01/13/a-funnel-for-cause-candidates/
https://nunosempere.com/blog/2021/02/
https://nunosempere.com/blog/2021/02/01/
https://nunosempere.com/blog/2021/02/01/forecasting-newsletter-january-2021/
https://nunosempere.com/blog/2021/02/19/
https://nunosempere.com/blog/2021/02/19/forecasting-prize-results/
https://nunosempere.com/blog/2021/03/
https://nunosempere.com/blog/2021/03/01/
https://nunosempere.com/blog/2021/03/01/forecasting-newsletter-february-2021/
https://nunosempere.com/blog/2021/03/07/
https://nunosempere.com/blog/2021/03/07/introducing-metaforecast-a-forecast-aggregator-and-search/
https://nunosempere.com/blog/2021/03/16/
https://nunosempere.com/blog/2021/03/16/relative-impact-of-the-first-10-ea-forum-prize-winners/
https://nunosempere.com/blog/2021/04/
https://nunosempere.com/blog/2021/04/01/
https://nunosempere.com/blog/2021/04/01/forecasting-newsletter-march-2021/
https://nunosempere.com/blog/2021/05/
https://nunosempere.com/blog/2021/05/01/
https://nunosempere.com/blog/2021/05/01/forecasting-newsletter-april-2021/
https://nunosempere.com/blog/2021/06/
https://nunosempere.com/blog/2021/06/01/
https://nunosempere.com/blog/2021/06/01/forecasting-newsletter-may-2021/
https://nunosempere.com/blog/2021/06/16/
https://nunosempere.com/blog/2021/06/16/2018-2019-long-term-future-fund-grantees-how-did-they-do/
https://nunosempere.com/blog/2021/06/16/what-should-the-norms-around-privacy-and-evaluation-in-the/
https://nunosempere.com/blog/2021/06/24/
https://nunosempere.com/blog/2021/06/24/shallow-evaluations-of-longtermist-organizations/
https://nunosempere.com/blog/2021/07/
https://nunosempere.com/blog/2021/07/01/
https://nunosempere.com/blog/2021/07/01/forecasting-newsletter-june-2021/
https://nunosempere.com/blog/2021/08/
https://nunosempere.com/blog/2021/08/01/
https://nunosempere.com/blog/2021/08/01/forecasting-newsletter-july-2021/
https://nunosempere.com/blog/2021/09/
https://nunosempere.com/blog/2021/09/01/
https://nunosempere.com/blog/2021/09/01/forecasting-newsletter-august-2021/
https://nunosempere.com/blog/2021/09/01/frank-feedback-given-to-very-junior-researchers/
https://nunosempere.com/blog/2021/09/20/
https://nunosempere.com/blog/2021/09/20/building-blocks-of-utility-maximization/
https://nunosempere.com/blog/2021/10/
https://nunosempere.com/blog/2021/10/01/
https://nunosempere.com/blog/2021/10/01/forecasting-newsletter-september-2021/
https://nunosempere.com/blog/2021/10/22/
https://nunosempere.com/blog/2021/10/22/an-estimate-of-the-value-of-metaculus-questions/
https://nunosempere.com/blog/2021/11/
https://nunosempere.com/blog/2021/11/02/
https://nunosempere.com/blog/2021/11/02/forecasting-newsletter-october-2021/
https://nunosempere.com/blog/2021/11/08/
https://nunosempere.com/blog/2021/11/08/a-model-of-patient-spending-and-movement-building/
https://nunosempere.com/blog/2021/11/15/
https://nunosempere.com/blog/2021/11/15/simple-comparison-polling-to-create-utility-functions/
https://nunosempere.com/blog/2021/11/25/
https://nunosempere.com/blog/2021/11/25/pathways-to-impact-for-forecasting-and-evaluation/
https://nunosempere.com/blog/2021/12/
https://nunosempere.com/blog/2021/12/02/
https://nunosempere.com/blog/2021/12/02/forecasting-newsletter-november-2021/
https://nunosempere.com/blog/2021/12/13/
https://nunosempere.com/blog/2021/12/13/external-evaluation-of-the-ea-wiki/
https://nunosempere.com/blog/2021/12/31/
https://nunosempere.com/blog/2021/12/31/prediction-markets-in-the-corporate-setting/
https://nunosempere.com/blog/2022/
https://nunosempere.com/blog/2022/01/
https://nunosempere.com/blog/2022/01/10/
https://nunosempere.com/blog/2022/01/10/forecasting-newsletter-december-2021/
https://nunosempere.com/blog/2022/01/27/
https://nunosempere.com/blog/2022/01/27/forecasting-newsletter-looking-back-at-2021/
https://nunosempere.com/blog/2022/02/
https://nunosempere.com/blog/2022/02/03/
https://nunosempere.com/blog/2022/02/03/forecasting-newsletter-january-2022/
https://nunosempere.com/blog/2022/02/06/
https://nunosempere.com/blog/2022/02/06/splitting-the-timeline-as-an-extinction-risk-intervention/
https://nunosempere.com/blog/2022/02/08/
https://nunosempere.com/blog/2022/02/08/we-are-giving-usd10k-as-forecasting-micro-grants/
https://nunosempere.com/blog/2022/02/18/
https://nunosempere.com/blog/2022/02/18/five-steps-for-quantifying-speculative-interventions/
https://nunosempere.com/blog/2022/03/
https://nunosempere.com/blog/2022/03/05/
https://nunosempere.com/blog/2022/03/05/forecasting-newsletter-february-2022/
https://nunosempere.com/blog/2022/03/10/
https://nunosempere.com/blog/2022/03/10/samotsvety-nuclear-risk-forecasts-march-2022/
https://nunosempere.com/blog/2022/03/17/
https://nunosempere.com/blog/2022/03/17/valuing-research-works-by-eliciting-comparisons-from-ea/
https://nunosempere.com/blog/2022/04/
https://nunosempere.com/blog/2022/04/01/
https://nunosempere.com/blog/2022/04/01/forecasting-newsletter-april-2222/
https://nunosempere.com/blog/2022/04/05/
https://nunosempere.com/blog/2022/04/05/forecasting-newsletter-march-2022/
https://nunosempere.com/blog/2022/04/06/
https://nunosempere.com/blog/2022/04/06/note-donations/
https://nunosempere.com/blog/2022/04/07/
https://nunosempere.com/blog/2022/04/07/openphil-allocation/
https://nunosempere.com/blog/2022/04/16/
https://nunosempere.com/blog/2022/04/16/optimal-scoring/
https://nunosempere.com/blog/2022/04/17/
https://nunosempere.com/blog/2022/04/17/simple-squiggle/
https://nunosempere.com/blog/2022/05/
https://nunosempere.com/blog/2022/05/01/
https://nunosempere.com/blog/2022/05/01/ea-forum-lowdown-april-2022/
https://nunosempere.com/blog/2022/05/10/
https://nunosempere.com/blog/2022/05/10/forecasting-newsletter-april-2022/
https://nunosempere.com/blog/2022/05/20/
https://nunosempere.com/blog/2022/05/20/infinite-ethics-101/
https://nunosempere.com/blog/2022/06/
https://nunosempere.com/blog/2022/06/03/
https://nunosempere.com/blog/2022/06/03/forecasting-newsletter-may-2022/
https://nunosempere.com/blog/2022/06/14/
https://nunosempere.com/blog/2022/06/14/the-tragedy-of-calisto-and-melibea/
https://nunosempere.com/blog/2022/06/16/
https://nunosempere.com/blog/2022/06/16/criminal-justice/
https://nunosempere.com/blog/2022/07/
https://nunosempere.com/blog/2022/07/04/
https://nunosempere.com/blog/2022/07/04/cancellation-insurance/
https://nunosempere.com/blog/2022/07/05/
https://nunosempere.com/blog/2022/07/05/i-will-bet-on-your-success-or-failure/
https://nunosempere.com/blog/2022/07/09/
https://nunosempere.com/blog/2022/07/09/maximum-vindictiveness-strategy/
https://nunosempere.com/blog/2022/07/12/
https://nunosempere.com/blog/2022/07/12/forecasting-newsletter-june-2022/
https://nunosempere.com/blog/2022/07/23/
https://nunosempere.com/blog/2022/07/23/thoughts-on-turing-julia/
https://nunosempere.com/blog/2022/07/27/
https://nunosempere.com/blog/2022/07/27/how-much-to-run-to-lose-20-kilograms/
https://nunosempere.com/blog/2022/08/
https://nunosempere.com/blog/2022/08/04/
https://nunosempere.com/blog/2022/08/04/usd1-000-squiggle-experimentation-challenge/
https://nunosempere.com/blog/2022/08/08/
https://nunosempere.com/blog/2022/08/08/forecasting-newsletter-july-2022/
https://nunosempere.com/blog/2022/08/10/
https://nunosempere.com/blog/2022/08/10/evolutionary-anchor/
https://nunosempere.com/blog/2022/08/18/
https://nunosempere.com/blog/2022/08/18/what-do-americans-mean-by-cutlery/
https://nunosempere.com/blog/2022/08/20/
https://nunosempere.com/blog/2022/08/20/fermi-introduction/
https://nunosempere.com/blog/2022/08/31/
https://nunosempere.com/blog/2022/08/31/on-cox-s-theorem-and-probabilistic-induction/
https://nunosempere.com/forecasting/
https://nunosempere.com/gossip/
https://nunosempere.com/misc/
https://nunosempere.com/misc/poetry/
https://nunosempere.com/misc/test/
https://nunosempere.com/misc/test/test-git/
https://nunosempere.com/misc/test/test-git/README
https://nunosempere.com/research/
https://nunosempere.com/software/