master
Nuno Sempere 10 months ago
parent b02b9346b4
commit 5b2439711c

@@ -23,5 +23,5 @@ There previously was a form here, but I think someone was inputting random email
</form>
<p>
The reason why I am asking for subscribers' names is explained <a href="https://nunosempere.com/.subscribe/why-name">here</a>.
The reason why I am asking for subscribers' names is explained <a href="https://nunosempere.com/.subscribe/why-name">here</a>. Frequency is roughly once a week.
</p>

@@ -7,7 +7,7 @@ Following up on [Simple estimation examples in Squiggle](https://forum.effectiv
Besides the [playground](https://www.squiggle-language.com/playground), Squiggle can also be used inside [VS Code](https://code.visualstudio.com/), after one installs [this extension](https://github.com/quantified-uncertainty/squiggle/tree/develop/packages/vscode-ext), following the instructions [here](https://github.com/quantified-uncertainty/squiggle/blob/develop/packages/vscode-ext/README.md). This is more convenient when working with more advanced models, because models can be saved more quickly and the overall experience is nicer.
<img src='https://i.imgur.com/ldmrmmX.png' class='.img-medium-center'>
<img src='https://images.nunosempere.com/blog/2022/10/10/five-slightly-more-hardcore-squiggle-models/squiggle-vscode.png' class='.img-medium-center'>
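For instance, here is a minimal sketch of a model one might save as a `.squiggle` file and preview with the extension (the values are illustrative, not from the post):

```
// Illustrative toy model: hours spent writing per month
hoursPerPost = 2 to 10   // lognormal with a 90% interval of [2, 10]
postsPerMonth = 4 to 8
hoursPerMonth = hoursPerPost * postsPerMonth
```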
## Models
@@ -49,11 +49,11 @@ xriskThroughAps(t) = advancedPowerSeekingAIBy(t) * xriskIfAPS(t)
This produces the cumulative and instantaneous probability of “advanced power-seeking AI” by/at each point in time:
<img src='https://i.imgur.com/7wlIKtp.png' class='.img-medium-center'>
<img src='https://images.nunosempere.com/blog/2022/10/10/five-slightly-more-hardcore-squiggle-models/squiggle-charts-agi.png' class='.img-medium-center'>
And then, assuming a constant 95% probability of x-risk given advanced power-seeking AGI, we can get the probability of such risk by every point in time:
<img src='https://i.imgur.com/hPuGhUE.png' class='.img-medium-center'>
<img src='https://images.nunosempere.com/blog/2022/10/10/five-slightly-more-hardcore-squiggle-models/probability-ai-xrisk-at-each-point-in-time.png' class='.img-medium-center'>
Now, the fun part is that the x-risk is in fact not constant. If AGI happened tomorrow, we'd be much less prepared than if it happened in 70 years, and a better model would incorporate that.
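As a sketch of what such an improvement might look like (the 95% starting point is from the text above, but the decay rate is an assumption for illustration, not the post's actual code):

```
// Hypothetical refinement: risk given advanced power-seeking AI decays
// over time as preparedness improves. The 2%/year decay rate is made up.
xriskIfAPS(t) = 0.95 * 0.98 ^ t
xriskThroughAps(t) = advancedPowerSeekingAIBy(t) * xriskIfAPS(t)
```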
@@ -64,7 +64,7 @@ For individual forecasts, rather than for models which combine different forecas
In the [preceding post](https://forum.effectivealtruism.org/posts/vh3YvCKnCBp6jDDFd/simple-estimation-examples-in-squiggle#Expected_value_for_a_list_of_things__complexity___2_10_), I presented some quick relative estimates for possible career pathways. Shortly after that, Benjamin Todd reached out about estimating the value of various career pathways he was considering. As a result, I created [this more complicated spreadsheet](https://docs.google.com/spreadsheets/d/1QATMTzLUdmxBqD2snhiAkH-_KvwbhGdlYaU8Ho7kjDY/edit?usp=sharing):
 
<img src='https://i.imgur.com/MT1aVtk.png' class='.img-medium-center'>
<img src='https://images.nunosempere.com/blog/2022/10/10/five-slightly-more-hardcore-squiggle-models/benjamin-todd.png' class='.img-medium-center'>
You can see a higher quality version of this image here: <https://i.imgur.com/hvq0SeM.png>
@@ -103,13 +103,12 @@ valueOfInterventionInPopulation(num_beneficiaries, population_age_distribution,
Then we are saying that we are reaching 1000 people, whose age distribution looks like this:
<img src='https://i.imgur.com/GvyMyqW.png' class='.img-medium-center'>
This could use a bit more work to resemble an actual population pyramid.
<img src='https://images.nunosempere.com/blog/2022/10/10/five-slightly-more-hardcore-squiggle-models/population-pyramid.png' class='.img-medium-center'>
<figcaption>This could use a bit more work to resemble an actual population pyramid.</figcaption>
and that the benefit is just the remaining life expectancy. This produces the following estimate, in person-years:
<img src='https://i.imgur.com/BSbneRi.png' class='.img-medium-center'>
<img src='https://images.nunosempere.com/blog/2022/10/10/five-slightly-more-hardcore-squiggle-models/amf-person-years.png' class='.img-medium-center'>
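For concreteness, here is a rough sketch of the calculation just described; the names mirror the signature above, but the distribution shapes are assumptions:

```
// Sketch: 1000 beneficiaries whose benefit is their remaining life expectancy.
num_beneficiaries = 1000
population_age_distribution = mixture(uniform(0, 20), uniform(20, 70)) // assumed shape
life_expectancy = 40 to 50
// person-years gained per beneficiary, truncated so it can't go negative
benefitPerPerson = truncateLeft(life_expectancy - population_age_distribution, 0)
totalBenefit = num_beneficiaries * benefitPerPerson
```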
But the assumptions we have used aren't very realistic. We are essentially assuming that we are creating clones of people at different ages, and that they wouldn't die until the end of their natural 40 to 50-year lifespan.
@@ -203,7 +202,7 @@ That is, we are modelling this example intervention of halving child mortality,
But for reference, the distribution's impact looks as follows:
<img src="https://i.imgur.com/bejosHk.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2022/10/10/five-slightly-more-hardcore-squiggle-models/probability-ai-xrisk-at-each-point-in-time.png" class='.img-medium-center'>
### Calculate optimal allocation given diminishing marginal values
@@ -267,7 +266,7 @@ The code is a bit too large to simply paste into an EA Forum post, but it can be
We can also look at the impact that various interventions have on our toy world, with further details [here](https://docs.google.com/spreadsheets/d/1WnplTYJJMeh0zXVUTPBaihE7n1kneW5LDidLvJcGcv4/edit?usp=sharing):
<img src='https://i.imgur.com/HCg2g5r.png' class='.img-medium-center'>
<img src='https://images.nunosempere.com/blog/2022/10/10/five-slightly-more-hardcore-squiggle-models/toy-world.png' class='.img-medium-center'>
We see that of the sample interventions, increasing population growth by 0.5% has the highest impact. But 0.5%/year is a pretty large amount, and it would be pretty difficult to engineer. So further work could look at the relative difficulty of each of those interventions. Still, that table may serve to make a qualitative argument that interventions such as increasing population growth, speeding up economic growth, or reducing existential risk are probably more valuable than directly increasing consumption.

@@ -5,7 +5,7 @@ Metaforecast late 2022 update: GraphQL API, Charts, better infrastructure behind
## New API
Our most significant new addition is our GraphQL API. It allows other people to build on top of our efforts. It can be accessed on [metaforecast.org/api/graphql](https://metaforecast.org/api/graphql), and looks similar to the EA Forum's [own graphql api](https://forum.effectivealtruism.org/graphiql).<p><img src="https://i.imgur.com/xHRBMNb.png" class='.img-medium-center'></p>
Our most significant new addition is our GraphQL API. It allows other people to build on top of our efforts. It can be accessed on [metaforecast.org/api/graphql](https://metaforecast.org/api/graphql), and looks similar to the EA Forum's [own graphql api](https://forum.effectivealtruism.org/graphiql).<p><img src="https://images.nunosempere.com/blog/2022/11/04/metaforecast-late-2022-update/graphql.png" class='.img-medium-center'></p>
To get the first 1000 questions, you could use a query like: 
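Here is a rough sketch of such a query; the field names are assumptions based on the API's Relay-style schema, and the GraphQL explorer linked above shows the authoritative version:

```graphql
{
  questions(first: 1000) {
    edges {
      node {
        id
        title
        url
      }
    }
  }
}
```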
@@ -43,15 +43,15 @@ You can find more examples, like code to download all questions, in our [/scrip
Charts display a question's history. They look as follows:
<img src="https://i.imgur.com/MWDA1j7.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2022/11/04/metaforecast-late-2022-update/question-graph.png" class='.img-medium-center'>
Charts can be accessed by clicking the expand button on the front page, although they are fairly slow to load at the moment.
<img src="https://i.imgur.com/JJCrUjn.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2022/11/04/metaforecast-late-2022-update/expand-button.png" class='.img-medium-center'>
Clicking on the expand button brings the user to a question page, which contains a chart, the full question description, and a range of quality indicators:
<img src="https://i.imgur.com/tlsVqz1.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2022/11/04/metaforecast-late-2022-update/question-page.png" class='.img-medium-center'>
We are also providing an endpoint at _metaforecast.org/questions/embed/\[id\]_ to allow other pages to embed our charts. For instance, to embed a question whose id is _betfair-1.178163916_, the endpoint would be [here](https://metaforecast.org/questions/embed/betfair-1.178163916). One would use it in the following code: 
@@ -72,7 +72,7 @@ With time, we aim to improve these pages, make them more interactive, etc. We al
Dashboards are collections of questions. For instance, [here](https://metaforecast.org/dashboards/view/561472e0d2?numCols=2) is a dashboard on global markets and inflation, as embedded in [Global Guessing](https://globalguessing.com/russia-ukraine-forecasts/).
<img src="https://i.imgur.com/Joid0LI.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2022/11/04/metaforecast-late-2022-update/dashboard.png" class='.img-medium-center'>
As with questions, you can either [view dashboards directly](http://metaforecast.org/dashboards/view/561472e0d2?numCols=2) or [embed](http://metaforecast.org/dashboards/embed/561472e0d2?numCols=2) them. You can also create your own at [https://metaforecast.org/dashboards](https://metaforecast.org/dashboards).
@@ -88,7 +88,7 @@ Metaforecast is also open source, and we welcome contributions. You can see some
## Acknowledgements
<p><img src="https://i.imgur.com/7yuRrge.png" class="img-frontpage-center"></p>
<p><img src="https://images.nunosempere.com/quri/logo.png" class="img-frontpage-center" style="width: 20%"></p>
Metaforecast is hosted by the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/), and has received funding from [Astral Codex Ten](https://astralcodexten.substack.com/p/acx-grants-results). It has received significant contributions from [Vyacheslav Matyuhin](https://berekuk.ru/), who was responsible for the upgrade to Typescript and GraphQL. Thanks to Clay Graubard of [Global Guessing](https://globalguessing.com/) for their comments and dashboards, to Insight Prediction for help smoothing out their API, to Nathan Young for general comments, and to others for their comments and suggestions.

@@ -5,7 +5,7 @@ This list of forecasting organizations includes:
* A brief description of each organization
* A monetary estimate of value. This can serve as a rough but hard-to-fake proxy of value. Sometimes this is a flow (e.g., budget per year), and sometimes this is an estimate of total value (e.g., valuation).
* A more subjective, rough, and verbal estimate of how much value the organization produces. <p><figure><img src="https://i.imgur.com/gqCTHMq.png" class="img-frontpage-center"><br><figcaption>DALLE: "crystal ball surrounded by money, photorealistic"</figcaption></figure></p>
* A more subjective, rough, and verbal estimate of how much value the organization produces. <p><figure><img src="https://images.nunosempere.com/blog/2022/11/06/forecasting-money-flows/crystall-ball.png" class="img-frontpage-center"><br><figcaption>DALLE: "crystal ball surrounded by money, photorealistic"</figcaption></figure></p>
This started as a breadth-first evaluation of the forecasting system, and to some extent it still is, i.e., it might be useful for getting a rough sense of the ecosystem as a whole. After some discussion on whether very rough evaluations [are worth it](https://nunosempere.com/blog/2022/10/27/are-flimsy-evaluations-worth-it/) ([a](http://web.archive.org/web/20221031125900/https://nunosempere.com/blog/2022/10/27/are-flimsy-evaluations-worth-it/)), people who prefer their evaluations to have a high threshold of quality and polish might want to either ignore this post or just pay attention to the monetary estimates.

@@ -39,7 +39,7 @@ Metaculus [erroneously resolved](https://nitter.it/daniel_eth/status/15768425032
An edition of the [Manifold Markets newsletter](https://news.manifold.markets/p/above-the-fold-visualising-market) ([a](https://web.archive.org/web/20221115163946/https://news.manifold.markets/p/above-the-fold-visualising-market)) includes this neat visualization of a group of markets through time:
[<img src="https://i.imgur.com/36ev880.gif" class='.img-medium-center'>](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6a94b58-5e91-4645-ac9d-51465d75cd84_1440x450.gif)
<img src="https://images.nunosempere.com/blog/2022/11/15/forecasting-newsletter-for-october-2022/manifold.gif" class='.img-medium-center'>
Manifold's [newsletter](https://news.manifold.markets/p/above-the-fold-visualising-market) ([a](https://web.archive.org/web/20221115163946/https://news.manifold.markets/p/above-the-fold-visualising-market)) also has further updates, including on their bot for Twitch. They continue to have a high development speed.
@@ -69,7 +69,7 @@ The [$5k challenge to quantify the impact of 80,000 hours' top career paths](htt
Katja Grace looks at her [calibration in 1000 predictions](https://worldspiritsockpuppet.substack.com/p/calibration-of-a-thousand-predictions) ([a](https://web.archive.org/web/20221115164233/https://worldspiritsockpuppet.substack.com/p/calibration-of-a-thousand-predictions)):
[<img src="https://i.imgur.com/FHAGMbl.png" class='.img-medium-center'>](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F65bf5450-edef-4e2d-bd1e-c8ee9fbca01a_648x630.png)
<img src="https://images.nunosempere.com/blog/2022/11/15/forecasting-newsletter-for-october-2022/calibration-katja-grace.png" class='.img-medium-center'>
Callum McDougall writes [Six (and a half) intuitions for KL divergence](https://www.perfectlynormal.co.uk/blog-kl-divergence) ([a](https://web.archive.org/web/20221112073956/https://www.perfectlynormal.co.uk/blog-kl-divergence)).
@@ -83,7 +83,7 @@ I came across this really neat explanation of Markov Chain Monte Carlo: [Markov
Sam Nolan & co. create estimates [explicitly quantifying](https://forum.effectivealtruism.org/posts/Nb2HnrqG4nkjCqmRg/quantifying-uncertainty-in-givewell-ceas) ([a](https://archive.ph/0kY8q)) the uncertainty in GiveWell's cost-effectiveness analyses.
[<img src="https://i.imgur.com/1VifplX.png" class='.img-medium-center'>](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F93e54c3d-f5df-495a-90e8-d3dc3482e2ef_1462x567.png)
<img src="https://images.nunosempere.com/blog/2022/11/15/forecasting-newsletter-for-october-2022/probability-distributions-givewell.png" class='.img-medium-center'>
---

@@ -1,11 +1,19 @@
People's choices determine a partial ordering over people's desirability
========================================================================
Consider:
Consider the following relationship:
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
$$ \delta_{i}(a, b) = \begin{cases} -1 & \text{if person } i \text{ has chosen person } b \text{ over person } a \\ 0 & \text{if person } i \text{ has not made a choice between person } a \text{ and person } b \\ 1 & \text{if person } i \text{ has chosen person } a \text{ over person } b \end{cases} $$
$$ \delta_{i}(a, b) = \begin{cases} -1 & \text{if person } i \text{ has chosen person } b \text{ over person } a \\ 0 & \text{if person } i \text{ has not made a choice between person } a \text{ and person } b \\ 1 & \text{if person } i \text{ has chosen person } a \text{ over person } b \end{cases} $$</div>
$$ a \le b \text{ iff } \sum_{i} \delta_{i}(a, b) < 0$$
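For example, with three choosers, if persons 1 and 2 chose b over a while person 3 chose a over b, then:
$$ \sum_{i} \delta_{i}(a, b) = (-1) + (-1) + 1 = -1 < 0, \text{ so } a \le b $$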
### Is this a partial ordering?

@@ -0,0 +1,35 @@
Betting and consent
===================
There is an interesting thing around consent and betting:
- Clueless people can't give fully informed consent around taking some bets I offer,
- because if they were fully informed, they wouldn't make the bet, because they would know that I'm a few levels above them in terms of calibration and forecast accuracy.
- But on the other hand, they can only acquire the "fully informed" state about their own bullshitting after losing the bet,
- because once you lose money it is much harder to spin up rationalizations.
This post was prompted by a few situations in the past:
1. Offering a bet to a fellow forecaster at my forecasting group, [Samotsvety](https://samotsvety.org/) (I feel totally fine with this)
2. Offering a bet to [Daniel Filan](https://danielfilan.com/bets)—someone I respect—where I think that he is overestimating something, and he feels that I am underestimating it (I feel totally fine with this)
3. Offering a bet to someone who I know is a much worse forecaster than I am, and who claims to be 99% sure of something (I do this occasionally)
4. Offering a bet to someone who writes that ["I would bet all the money I have (literally not figuratively)"](https://forum.effectivealtruism.org/posts/xBfp3HGapGycuSa6H/i-m-less-approving-of-the-ea-community-now-than-before-the?commentId=Gafn8GSkTge6q6ezf#Gafn8GSkTge6q6ezf) (I offered a specific bet, and would have gone through with it, even though losing it would have been a major inconvenience to the person).
- Incidentally, they later backed down, which I still can't get over.
5. Offering a bet to a believer in the QAnon conspiracy theory, whom I know personally, that Trump would not be reinstated after his 2020 loss (I feel fine with this, and have won a bit over 500 EUR)
6. Taking a bet that Trump would not be reinstated as president in 2021 after Biden won, against nameless conspiracy theorists on Polymarket (have done this, feel fine)
7. Offering a bet to someone who exhibited some symptoms of a manic episode (I offered the bet, they accepted it, and I then dissolved the bet once they lost)
8. Offering to bet money with someone who had some symptoms of schizophrenia that they would not come up with a revolutionary math idea (I considered the bet but did not make it; I plausibly should have, since they still believe in a similar idea three years later.)
So what I am saying to someone when I offer them a bet is:
> I like you. However, I think what you just said is bullshit, or at least uncalibrated. And my reaction to that is to attempt to extract money from you in a way which I think will leave you with less money but with better models about the world and about your own fallibility. Now, it may be that I am wrong, in which case you can take advantage of me, but I think that this is the less likely outcome.
Ultimately, I think I do feel generally fine making bets with people, even if I'm a bit conflicted. In particular, I think that being known to offer accurate probabilities, as elicited by bets, is a useful thing to offer even to people in altered mental states. But in the past I've been hesitant to do so (see points 5 and 6 in the list above). In the future, I'll experiment with sending this post to people who I think are in a position like points 3 to 8 in the list before I make a bet with them.
I'm curious to get people's impressions on this. [Here](https://viewpoints.xyz/polls/betting-and-consent) is a small poll, and comments below are open.
<p>
<section id='isso-thread'>
<noscript>Javascript needs to be activated to view comments.</noscript>
</section>
</p>

@@ -10,4 +10,4 @@ I'm Nu&#xF1;o Sempere. I [do research](https://quantifieduncertainty.org/), [wri
### Readers might also wish to...
...read the [gossip](/gossip) page, visit my [blog](/blog), or [sign up for my newsletter](/.subscribe).
...read the [gossip](/gossip) page, visit my [blog](/blog), or [sign up for my newsletter](/.newsletter).
