Compare commits


2 Commits

Author SHA1 Message Date
83cd0c438e feat: add recent changes 2023-02-04 16:25:42 +00:00
29ad7c3156 feat: add poem 2023-02-03 19:59:10 +00:00
39 changed files with 8554 additions and 13 deletions

View File

@ -6,7 +6,7 @@ There previously was a form here, but I think someone was inputting random email
-->
- <form method="post" action="https://listmonk.nunosempere.com/subscription/form" class="listmonk-form">
+ <form method="post" action="https://list.nunosempere.com/subscription/form" class="listmonk-form">
<div>
<h3>Subscribe</h3>
<input type="hidden" name="nonce" />

View File

@ -264,6 +264,8 @@ This estimate could be improved by having numerical estiamates of impact. This w
Given FTX's current troubles, it's very possible that there will be less money floating around for altruistic forecasting projects. At the same time, some of the projects FTX has donated to may prove successful, and other funders may want to step in.
PS: You can subscribe to posts from this blog [here](https://nunosempere.com/.subscribe/).
<p><section id="isso-thread">
<noscript>Javascript needs to be activated to view comments.</noscript>
</section></p>

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,104 @@
## Libraries
library(ggplot2)
## Read data
setwd("/home/loki/Documents/core/ea/fresh/misc/ea-hbd") ## change to the folder in your computer
data <- read.csv("2020ssc_public.csv", header=TRUE, stringsAsFactors = FALSE)
## Restrict analysis to EAs
data_EAs <- data[data["EAID"] == "Yes",]
View(data_EAs)
n=dim(data_EAs)[1]
n
## Find biodiversity question
colnames(data_EAs)
colnames(data_EAs)[47]
## Process biodiversity question for EAs
tally <- list()
tally$options = c(1:5, "NA")
tally$count = sapply(tally$options, function(x){ sum(data_EAs[47] == x, na.rm = TRUE) })
tally$count[6] = sum(is.na(data_EAs[47]))
tally$count
tally = as.data.frame(tally)
tally
## Plot prevalence of belief within EA
titulo='Prevalence of attitudes towards "human biodiversity"\n amongst EA SlateStarCodex survey respondents in 2020'
subtitulo='"How would you describe your opinion of the the idea of "human biodiversity",\n eg the belief that races differ genetically in socially relevant ways?"\n (1 = very unfavorable, 5 = very favorable), n=993'
ggplot(data = tally, aes(x = options, y = count)) +
  ## geom_bar(stat = "identity") draws the precomputed counts directly
  geom_bar(
    stat = "identity",
    position = position_stack(reverse = TRUE),
    fill = "navyblue"
  ) +
  scale_y_continuous(limits = c(0, 300)) +
  labs(
    title = titulo,
    subtitle = subtitulo,
    x = "answers",
    y = "answer count"
  ) +
  theme(
    legend.title = element_blank(),
    plot.subtitle = element_text(hjust = 0.5),
    plot.title = element_text(hjust = 0.5),
    legend.position = "bottom"
  ) +
  geom_text(aes(label = count), colour = "#000000", size = 2.5, vjust = -0.5)
height=5
width=height*(1+sqrt(5))/2
ggsave("q_hbd_EAs.png" , units="in", width=width, height=height, dpi=800)
## Process biodiversity question for all SSC respondents
tally2 <- list()
tally2$options = c(1:5, "NA")
tally2$count = sapply(tally2$options, function(x){ sum(data[47] == x, na.rm = TRUE) })
tally2$count[6] = sum(is.na(data[47]))
tally2$count
n=dim(data)[1]
n
tally2 = as.data.frame(tally2)
tally2
tally
## Plot
titulo='Prevalence of attitudes towards "human biodiversity"\n amongst all SlateStarCodex survey respondents in 2020'
subtitulo='"How would you describe your opinion of the the idea of "human biodiversity",\n eg the belief that races differ genetically in socially relevant ways?"\n (1 = very unfavorable, 5 = very favorable), n=7339'
ggplot(data = tally2, aes(x = options, y = count)) +
  ## geom_bar(stat = "identity") draws the precomputed counts directly
  geom_bar(
    stat = "identity",
    position = position_stack(reverse = TRUE),
    fill = "navyblue"
  ) +
  scale_y_continuous(limits = c(0, 2000)) +
  labs(
    title = titulo,
    subtitle = subtitulo,
    x = "answers",
    y = "answer count"
  ) +
  theme(
    legend.title = element_blank(),
    plot.subtitle = element_text(hjust = 0.5),
    plot.title = element_text(hjust = 0.5),
    legend.position = "bottom"
  ) +
  geom_text(aes(label = count), colour = "#000000", size = 2.5, vjust = -0.5)
height=5
width=height*(1+sqrt(5))/2
ggsave("q_hbd_all.png" , units="in", width=width, height=height, dpi=800)

Binary file not shown.

After: 361 KiB

Binary file not shown.

After: 349 KiB

View File

@ -0,0 +1,163 @@
Prevalence of belief in "human biodiversity" amongst self-reported EA respondents in the 2020 SlateStarCodex Survey
=====================================================================================================================
Note: This post presents some data which might inform downstream questions, rather than providing a fully cooked perspective on its own. For this reason, I have tried not to express many opinions here. Readers might instead be interested in more fleshed-out perspectives on the Bostrom affair, e.g., [here](https://rychappell.substack.com/p/text-subtext-and-miscommunication) in favor or [here](https://www.pasteurscube.com/why-im-personally-upset-with-nick-bostrom-right-now/) against.
## Graph
![](https://i.imgur.com/xYy9frR.png)
## Discussion
### Selection effects
I am not sure whether EAs who answered the SSC survey are a representative sample of all EAs. They might not be, if SSC readers have shared biases and assumptions distinct from those of the EA population as a whole. That said, the raw counts will still be accurate, e.g., we can say that "at least 57 people who identified as EAs in 2020 strongly agree with the human biodiversity hypothesis".
### Question framing effects
I think the question as phrased is likely to *overestimate* belief in human biodiversity, because the phrasing seems somewhat innocuous, and in particular because "biodiversity" has positive mood affiliation. I think that fewer people would answer positively to a less innocuous-sounding version, e.g., "How would you describe your opinion of the idea of 'human biodiversity', e.g., the belief that some races are genetically stupider than others? (1 = very unfavorable, 5 = very favorable)".
For a review of survey effects, see [A review of two books on survey-making](https://forum.effectivealtruism.org/posts/DCcciuLxRveSkBng2/a-review-of-two-books-on-survey-making).
### Interpreting as a probability
This isn't really all that meaningful, but we can assign percentages to each answer as follows:
- 1: 5%
- 2: 20%
- 3: 50%
- 4: 80%
- 5: 95%
- NA: 50%
The above requires a judgment call to assign probabilities to numbers in a Likert scale. In particular, I am making the judgment call that 1 and 5 correspond to 5% and 95%, rather than e.g., 0% and 100%, or 1% and 99%, based on my forecasting experience.
And then we can calculate an implicit probability as follows:
```
( 174 * 0.05 + 227 * 0.2 + 288 * 0.5 + 175 * 0.8 + 57 * 0.95 + 22 * 0.5) / 993
```
The above calculation outputs 0.406..., which, in a sense, means that SSC survey respondents who self-identified as EA assigned, as a whole, a ~40% credence to the human biodiversity hypothesis.
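The same arithmetic can be written in R with the judgment-call mapping made explicit; this is just a restatement of the calculation above, not new data:

```
## Implicit credence as a weighted mean of the answer tallies
## counts are the per-answer tallies (answers 1-5, then NA) used above
counts <- c(174, 227, 288, 175, 57, 22)
probs <- c(0.05, 0.20, 0.50, 0.80, 0.95, 0.50) ## judgment-call mapping above
sum(counts * probs) / 993
```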
### Comparison with all SSC respondents
![](https://i.imgur.com/h7vllAm.png)
## Code to replicate this
In an R runtime, run:
```
## Libraries
library(ggplot2)
## Read data
setwd("/home/loki/Documents/core/ea/fresh/misc/ea-hbd") ## change to the folder in your computer
data <- read.csv("2020ssc_public.csv", header=TRUE, stringsAsFactors = FALSE)
## Restrict analysis to EAs
data_EAs <- data[data["EAID"] == "Yes",]
View(data_EAs)
n=dim(data_EAs)[1]
n
## Find biodiversity question
colnames(data_EAs)
colnames(data_EAs)[47]
## Process biodiversity question for EAs
tally <- list()
tally$options = c(1:5, "NA")
tally$count = sapply(tally$options, function(x){ sum(data_EAs[47] == x, na.rm = TRUE) })
tally$count[6] = sum(is.na(data_EAs[47]))
tally$count
tally = as.data.frame(tally)
tally
## Plot prevalence of belief within EA
titulo='Prevalence of attitudes towards "human biodiversity"\n amongst EA SlateStarCodex survey respondents in 2020'
subtitulo='"How would you describe your opinion of the the idea of "human biodiversity",\n eg the belief that races differ genetically in socially relevant ways?"\n (1 = very unfavorable, 5 = very favorable), n=993'
ggplot(data = tally, aes(x = options, y = count)) +
  ## geom_bar(stat = "identity") draws the precomputed counts directly
  geom_bar(
    stat = "identity",
    position = position_stack(reverse = TRUE),
    fill = "navyblue"
  ) +
  scale_y_continuous(limits = c(0, 300)) +
  labs(
    title = titulo,
    subtitle = subtitulo,
    x = "answers",
    y = "answer count"
  ) +
  theme(
    legend.title = element_blank(),
    plot.subtitle = element_text(hjust = 0.5),
    plot.title = element_text(hjust = 0.5),
    legend.position = "bottom"
  ) +
  geom_text(aes(label = count), colour = "#000000", size = 2.5, vjust = -0.5)
height=5
width=height*(1+sqrt(5))/2
ggsave("q_hbd_EAs.png" , units="in", width=width, height=height, dpi=800)
## Process biodiversity question for all SSC respondents
tally_all_ssc <- list()
tally_all_ssc$options = c(1:5, "NA")
tally_all_ssc$count = sapply(tally_all_ssc$options, function(x){ sum(data[47] == x, na.rm = TRUE) })
tally_all_ssc$count[6] = sum(is.na(data[47]))
tally_all_ssc$count
tally_all_ssc = as.data.frame(tally_all_ssc)
tally_all_ssc
tally
## Plot
titulo='Prevalence of attitudes towards "human biodiversity"\n amongst all SlateStarCodex survey respondents in 2020'
subtitulo='"How would you describe your opinion of the the idea of "human biodiversity",\n eg the belief that races differ genetically in socially relevant ways?"\n (1 = very unfavorable, 5 = very favorable), n=7339'
ggplot(data = tally_all_ssc, aes(x = options, y = count)) +
  ## geom_bar(stat = "identity") draws the precomputed counts directly
  geom_bar(
    stat = "identity",
    position = position_stack(reverse = TRUE),
    fill = "navyblue"
  ) +
  scale_y_continuous(limits = c(0, 2000)) +
  labs(
    title = titulo,
    subtitle = subtitulo,
    x = "answers",
    y = "answer count"
  ) +
  theme(
    legend.title = element_blank(),
    plot.subtitle = element_text(hjust = 0.5),
    plot.title = element_text(hjust = 0.5),
    legend.position = "bottom"
  ) +
  geom_text(aes(label = count), colour = "#000000", size = 2.5, vjust = -0.5)
height=5
width=height*(1+sqrt(5))/2
ggsave("q_hbd_all.png" , units="in", width=width, height=height, dpi=800)
```
The file 2020ssc_public.csv is no longer available in the [SSC blogpost](https://slatestarcodex.com/2020/01/20/ssc-survey-results-2020/), but it can easily be created from the .xlsx file, or I can make it available for a small donation to the AMF.
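A minimal sketch of that conversion, assuming the `readxl` package is installed and that the downloaded file is saved as `2020ssc_public.xlsx` (the filename is my assumption):

```
## Hypothetical conversion sketch: .xlsx -> .csv
## Assumes the readxl package and the filename 2020ssc_public.xlsx
library(readxl)
survey <- read_xlsx("2020ssc_public.xlsx")
write.csv(survey, "2020ssc_public.csv", row.names = FALSE)
```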
<p><section id='isso-thread'>
<noscript>Javascript needs to be activated to view comments.</noscript>
</section></p>

View File

@ -0,0 +1,50 @@
Interim Update on QURI's Work on EA Cause Area Candidates
======================================================
Originally published here: [https://quri.substack.com/p/interim-update-on-our-work-on-ea](https://quri.substack.com/p/interim-update-on-our-work-on-ea)
## The story so far:
- I constructed the original [Big List of Cause Candidates](https://forum.effectivealtruism.org/posts/SCqRu6shoa8ySvRAa/big-list-of-cause-candidates) in December 2020.
- I spent some time thinking about the [pipeline for new cause area ideas](https://forum.effectivealtruism.org/posts/iRA4Dd2bfX9nukSo3/a-funnel-for-cause-candidates), not all of which is posted.
- I tried to use a bounty system to update the list for next year but didn't succeed.
- I found a researcher, Leo, to update the list in [March 2022](https://forum.effectivealtruism.org/posts/DBhuERvKRgGpLiK6T/big-list-of-cause-candidates-january-2021-march-2022-update).
- In addition to the EA forum post, there is also an [Airtable](https://airtable.com/shrndjfwgDrv9eiYK) sheet, with filters for promisingness and other characteristics. This hasn't been updated since the original iteration in 2020. Initially, it was an experiment that could have been built upon, but it ended up being messier than hoped and was thus abandoned.
![](https://i.imgur.com/ne6D4Ps.png)
## As of now:
- I think the list is a fine resource as it is, and I intend to make sure that it continues to be updated.
- It is now more apparent to me that integrating this list into structures that could use it is probably as important as producing and updating the list in the first place.
- Some stakeholders known to me are:
    - Charity Entrepreneurship. Finding out whether they are in fact using the list in their deliberations about which charities to start, and whether it can be modified to be more useful to them, is probably a valuable step.
    - Various "longtermist incubators". These have generally failed to get off the ground, with the notable exception of Nonlinear, which, e.g., has [this well-populated page with bounties](https://www.super-linear.org/).
    - The Center for Effective Altruism, which uses the original post as part of its [Effective Altruism Handbook](https://forum.effectivealtruism.org/handbook#32FKXByGNgHLPaHnj). As the list becomes longer, it's possible that a lighter version might be more suitable as introductory material.
    - Possibly Open Philanthropy, though I doubt it.
- The Quantified Uncertainty Research Institute might not be an ideal institution for this, because the connection to forecasting and epistemics is a bit tenuous. But I think it will suffice for now.
- There is also a [Cause Prioritization Wiki](https://causeprioritization.org/), with partially overlapping content and aims. It might be a good move to either:
    1. Move the Big List of Cause Candidates to that wiki, or
    2. Combine the contents of both sources into a different system, e.g., an Airtable.

  The first step in deciding this would be to create a list of options, with pros and cons for each one.
- [Super linear](https://www.super-linear.org/) is a new bounty portal that could be used to update this list and do related work in the upcoming year.
This concludes my thoughts for now.

Binary file not shown.

After: 1.5 MiB

Binary file not shown.

After: 1.4 MiB

View File

@ -0,0 +1,42 @@
There will always be a Voight-Kampff test
=========================================
In the film *Blade Runner*, the Voight-Kampff test is a fictional procedure used to distinguish androids from humans. In the normal course of events, humans and androids are pretty much indistinguishable, except when talking about very specific kinds of emotions and memories.
Similarly, as language models or image-producing neural networks continue to increase in size and rise in capabilities, it seems plausible that there will still be ways of identifying them as such.
<figure>
<img src="https://i.imgur.com/a6JjlQT.jpg" alt="Image produced by DALLE-2" class="img-frontpage-center">
<br><figcaption>Image produced by DALLE-2 with the prompt "Voight-Kampff test". Note the watermark at the bottom, as well as the inability to produce text.</figcaption>
</figure>
For example, for image models:
- They may have watermarks or steganographic messages which could be used to detect them
- They may have a bias towards particular types of prettiness or perfection
- They may not render certain complicated details, like hands, teeth, letters, etc.
- They may struggle with compositionality, light, consistency, etc.
And for language models:
- They may not have good models of things that humans don't often talk about, like intimate fears, shame, or the specific details of sexual attraction.
- They may not be up to the latest news, if they are only trained on events up to a certain point in the past.
- They may have a distinctly bland speech.
- They may have catchphrases or favour certain ways of expressing themselves
- They may struggle to produce original thoughts and ideas
- They may have idiosyncratic challenges, like not being able to decode ASCII art, not getting certain jokes, etc.
![](https://i.imgur.com/mSkUDyQ.png)
*From left to right: original historical image, image of myself, combination of the two produced using DALLE-2 to modify the jacket to also have a white shirt. This is a small-scale example of how the idiosyncrasies that allow us to unmask DALLE-2 do not matter to its ability to produce value: I like the third image a lot and I am using it on my social media profiles.*
But much like in the original *Blade Runner* movie, these details may not really matter for their economic impact, and the fact that a way of identifying them exists at all will be even less relevant. Similarly, the fact that DALLE-2 and other image models have difficulty correctly rendering teeth or objects in relation to each other doesn't really reflect their current ability or future potential to replace many thousands of artists, and to generally shape the demand curves of art.
I was thinking about this because I was recently forecasting on a question about "AGI", where "AGI" was defined as a system that "is capable of passing adversarial Turing test against a top-5% human, who has access to experts." But such a system might take a really long time to be developed, even if the economic impact of AI systems is already pretty great, because such a system might still have its own idiosyncrasies.
Ultimately, this makes me think that nitpicks and gotchas about ways to differentiate humans and machines just aren't all that relevant to predicting their future impact. What I care about is closer to the real-world impact of these machines.
That's all for now.
<p><section id='isso-thread'>
<noscript>Javascript needs to be activated to view comments.</noscript>
</section></p>

View File

@ -0,0 +1,281 @@
My highly personal skepticism braindump on existential risk from artificial intelligence.
=========================================================================================
## Summary
This document seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like 
* selection effects at the level of which arguments are discovered and distributed
* community epistemic problems, and 
* increased uncertainty due to chains of reasoning with imperfect concepts 
as real and important. 
I still think that existential risk from AGI is important. But I don't view it as certain or close to certain, and I think that something is going wrong when people see it as all but assured.
## Discussion of weaknesses
I think that this document was important for me personally to write up. However, I also think that it has some significant weaknesses:
1. There is some danger in verbalization leading to rationalization.
2. It alternates controversial points with points that are dead obvious.
3. It is to a large extent a reaction to my imperfectly digested understanding of a worldview pushed around the [ESPR](https://espr-camp.org/)/CFAR/MIRI/LessWrong cluster from 2016-2019, which perhaps nobody holds now.
In response to these weaknesses:
1. I want to keep in mind that I do want to give weight to my gut feeling, and that I might want to update on a feeling of uneasiness rather than on its accompanying reasonings or rationalizations.
2. Readers might want to keep in mind that parts of this post may look like a [bravery debate](https://slatestarcodex.com/2013/05/18/against-bravery-debates/). But on the other hand, I've seen that the points which people consider obvious and uncontroversial vary from person to person, so I don't get the impression that there is that much I can do on my end for the effort that I'm willing to spend.
3. Readers might want to keep in mind that actual AI safety people and AI safety proponents may hold more nuanced views, and that to a large extent I am arguing against a “Nuño of the past” view.
Despite these flaws, I think that this text was personally important for me to write up, and it might also have some utility to readers.
## Uneasiness about chains of reasoning with imperfect concepts
### Uneasiness about conjunctiveness
It's not clear to me how conjunctive AI doom is. Proponents will argue that it is very disjunctive, that there are a lot of ways that things could go wrong. I'm not so sure.
In particular, when you see that a parsimonious decomposition (like Carlsmith's) tends to generate lower estimates, you can conclude:
1. That the method is producing a biased result, and try to account for that.
2. That the topic under discussion is, in itself, conjunctive: that there are several steps that need to be satisfied. For example, "AI causing a big catastrophe" and "AI causing human extinction given that it has caused a large catastrophe" seem like two distinct steps that would need to be modelled separately.
I feel uneasy about only doing 1 and not 2. I think that the principled answer might be to split some probability into each case. Overall, though, I'd tend to think that AI risk is more conjunctive than it is disjunctive.
I also feel uneasy about the social pressure in my particular social bubble. I think that the social pressure is for me to just accept [Nate Soares' argument](https://www.lesswrong.com/posts/cCMihiwtZx7kdcKgt/comments-on-carlsmith-s-is-power-seeking-ai-an-existential) here that Carlsmith's method is biased, rather than to probabilistically incorporate it into my calculations. As in: "oh, yes, people know that conjunctive chains of reasoning have been debunked; Nate Soares addressed that in a blogpost saying that they are biased".
### I don't trust the concepts
My understanding is that MIRI's and others' work started in the 2000s. As such, their understanding of the shape that an AI would take doesn't particularly resemble current deep learning approaches.
In particular, I think that many of the initial arguments that I most absorbed were motivated by something like [AIXI](https://en.wikipedia.org/wiki/AIXI#:~:text=AIXI%20%5B'ai%CC%AFk%CD%A1si%CB%90%5D%20is%20a,2005%20book%20Universal%20Artificial%20Intelligence.) ([Solomonoff induction](https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference) + some decision theory). Or, alternatively, by imagining what a very buffed-up [Eurisko](https://en.wikipedia.org/wiki/Eurisko) would look like. This seems like a fruitful generative approach which can _generate_ things that could go wrong, rather than _demonstrate_ that something will go wrong, or point to failures that we know will happen.
As deep learning attains more and more success, I think that some of the old concerns port over. But I am not sure which ones, to what extent, and in which context. This leads me to reduce some of my probability. Some concerns that apply to a more formidable Eurisko but which may not apply by default to near-term AI systems:
* Alien values
* Maximalist desire for world domination
* Convergence to a utility function
* Very competent strategizing, of the “treacherous turn” variety
* Self-improvement
* etc.
### Uneasiness about in-the-limit reasoning
One particular form of argument, or chain of reasoning, goes like:
1. An arbitrarily intelligent/capable/powerful process would be of great danger to humanity. This [implies](https://en.wikipedia.org/wiki/Intermediate_value_theorem) that there is some point, either at arbitrary intelligence or before it, at which a very intelligent process would start to be, and past which it would definitely be, a great danger to humanity.
2. If the field of artificial intelligence continues improving, eventually we will get processes that are first as intelligent/capable/powerful as a single human mind, and then greatly exceed it.
3. This would be dangerous.
The thing is, I agree with that chain of reasoning. But I see it as applying in the limit, and I am much more doubtful about it being used to justify specific dangers in the near future. In particular, I think that dangers that may appear in the long run may manifest in limited and less dangerous forms earlier on.
I see various attempts to give models of AI timelines as approximate. In particular:
* Even if an approach is accurate at predicting when above-human level intelligence/power/capabilities would arise
* This doesn't mean that the dangers of in-the-limit superintelligence would manifest at the same time
### AGI, so what?
For a given operationalization of AGI, e.g., one good enough to be forecasted on, I think that there is some possibility that we will reach such a level of capabilities and yet that this will not be very impressive or world-changing, even if it would have looked like magic to previous generations. More specifically, it seems plausible that AI will continue to improve without soon reaching high [shock levels](http://sl4.org/shocklevels.html) which exceed humanity's ability to adapt.
This would be similar to how the industrial revolution was transformative but not _that_ transformative. One possible scenario for this might be a world where we have pretty advanced AI systems, but we have adapted to that, in the same way that we have adapted to electricity, the internet, recommender systems, or social media. Or, in other words, once I concede that AGI could be as transformative as the industrial revolution, I don't have to concede that it would be maximally transformative.
### I don't trust chains of reasoning with imperfect concepts
The concerns in this section, when combined, make me uneasy about chains of reasoning that rely on imperfect concepts. Those chains may be very conjunctive, and they may apply to the behaviour of an in-the-limit-superintelligent system, but they may not be as action-guiding for systems in our near to medium term future.
For an example of the type of problem that I am worried about, but in a different domain, consider Georgism, the idea of deriving all government revenues from a land value tax. From a recent [blogpost](https://daviddfriedman.blogspot.com/2023/01/a-problem-with-georgism.html) by David Friedman: “since it is taxing something in perfectly inelastic supply, taxing it does not lead to any inefficient economic decisions. The site value does not depend on any decisions made by its owner, so a tax on it does not distort his decisions, unlike a tax on income or produced goods.”
Now, this reasoning appears to be sound. Many people have been persuaded by it. However, because the concepts are imperfect, there can still be flaws. One possible flaw might be that the land value would have to be measured, and that inefficiency might come from there. Another possible flaw was recently pointed out by David Friedman in the [blogpost](https://daviddfriedman.blogspot.com/2023/01/a-problem-with-georgism.html) linked above, which I understand as follows: the land value tax rewards counterfactual improvement, and this leads to predictable inefficiencies because you want to be rewarding Shapley value instead, which is much more difficult to estimate.
I think that these issues become fairly severe when attempting to make predictions about events further out, e.g., ten or thirty years ahead. The concepts shift like sand under your feet.
## Uneasiness about selection effects at the level of arguments
I am uneasy about what I see as selection effects at the level of arguments. I think that there is a small but intelligent community of people who have spent significant time producing some convincing arguments about AGI, but no community which has spent the same _amount of effort_ looking for arguments against.
[Here](https://philiptrammell.com/blog/46) is a neat blogpost by Phil Trammel on this topic.
Here are some excerpts from a casual discussion among [Samotsvety Forecasting](https://samotsvety.org/) team members:
> The selection effect story seems pretty broadly applicable to me. I'd guess most Christian apologists, Libertarians, Marxists, etc. etc. etc. have a genuine sense of dialectical superiority: "All of these common objections are rebutted in our FAQ, yet our opponents aren't even aware of these devastating objections to their position", etc. etc.
>
> You could throw in bias in evaluation too, but straightforward selection would give this impression even to the fair-minded who happen to end up in this corner of idea space. There are many more 'full time' (e.g.) Christian apologists than anti-apologists, so the balance of argumentative resources (and so apparent balance of reason) will often look slanted.
>
> This doesn't mean the view in question is wrong: back in my misspent youth there were similar resources re, arguing for evolution vs. creationists/ID ([https://www.talkorigins.org](https://www.talkorigins.org)/). But it does strongly undercut "but actually looking at the merits clearly favours my team" alone as this isn't truth tracking (more relevant would be 'cognitive epidemiology' steers: more informed people tend to gravitate to one side or another, proponents/opponents appear more epistemically able, etc.)
>
> ---
>
> An example for me is Christian theology. In particular, consider Aquinas' five proofs of God (summarized in [Wikipedia](<https://en.wikipedia.org/wiki/Five_Ways_(Aquinas)#The_Five_Ways>)), or the various [ontological arguments](https://en.wikipedia.org/wiki/Ontological_argument). Back in the day, it took me a bit to a) understand what exactly they are saying, and b) understand why they don't go through. The five ways in particular were written to reassure Dominican priests who might be doubting, and in their time they did work for that purpose, because the topic is complex and hard to grasp.
>
> ---
>
> You should be worried about the 'Christian apologist' (or philosophy of religion, etc.) selection effect when those likely to discuss the view are selected for sympathy for it. Concretely, if on acquaintance with the case for AI risk your reflex is 'that's BS, there's no way this is more than 1/million', you probably aren't going to spend lots of time being a dissident in this 'field' versus going off to do something else.
>
> This gets more worrying the more generally epistemically virtuous folks are 'bouncing off': e.g. neuroscientists who think relevant capabilities are beyond the ken of 'just add moar layers', ML engineers who think progress in the field is more plodding than extraordinary, policy folks who think it will be basically safe by default etc. The point is this distorts the apparent balance of reason - maybe this is like Marxism, or NGDP targeting, or Georgism, or general semantics, perhaps many of which we will recognise were off on the wrong track.
>
> (Or, if you prefer being strictly object-level, it means the strongest case for scepticism is unlikely to be promulgated. If you could pin folks bouncing off down to explain their scepticism, their arguments probably won't be that strong/have good rebuttals from the AI risk crowd. But if you could force them to spend years working on their arguments, maybe their case would be much more competitive with proponent SOTA).
>
> ---
>
> It is general in the sense there is a spectrum from (e.g.) evolutionary biology to (e.g.) Timecube theory, but AI risk is somewhere in the range where it is a significant consideration.
>
> It obviously isn't an infallible one: it would apply to early stage contrarian scientific theories and doesn't track whether or not they are ultimately vindicated. You rightly anticipated the base-rate-y reply I would make.
>
> Garfinkel and Shah still think AI is a very big deal, and identifying them as at the sceptical end indicates how far afield from 'elite common sense' (or similar) AI risk discussion is. Likewise, I doubt that the existence of some incentives to be a dissident from this consensus means there isn't a general trend of selection for those more intuitively predisposed to AI concern.
There are some possible counterpoints to this, and other Samotsvety Forecasting team members made them, and that's fine. But my individual impression is that the selection effects argument packs a whole lot of punch behind it.
One particular dynamic that I've seen some gung-ho AI risk people mention is that (paraphrasing): "New people each have their own unique snowflake reasons for rejecting their particular theory of how AI doom will develop. So I can convince each particular person, but only by talking to them individually about their objections."
So, in illustration, the overall balance could look something like:
<img src="https://i.imgur.com/ziJqSn9.png" class='img-medium-center'>
Whereas the individual matchup could look something like:
<img src="https://i.imgur.com/thdBH3n.png" class='img-medium-center'>
And so you would expect the natural belief dynamics stemming from that type of matchup. 
What you would want to do is to have all the evidence for and against, and then weigh it. 
I also think that there are selection effects around which evidence surfaces on each side, rather than only around which arguments people start out with.
It is interesting that when people move to the Bay Area, this is often very “helpful” for them in terms of updating towards higher AI risk. I think that this is a sign that a bunch of social fuckery is going on. In particular, I think it might be the case that Bay Area movement leaders identify arguments for shorter timelines and higher probability of x-risk with “the rational”, which produces strong social incentives to be persuaded and to come up with arguments in one direction.
More specifically, I think that “if I isolate people from their normal context, they are more likely to agree with my idiosyncratic beliefs” is a mechanism that works for many types of beliefs, not just true ones. And more generally, I think that “AI doom is near” and associated beliefs are a memeplex, and I am inclined to discount their specifics.
## Miscellanea
### Difference between in-argument reasoning and all-things-considered reasoning
I'd also tend to differentiate between the probability that an argument or a model gives and the all-things-considered probability. For example, I might look at Ajeya's timelines model and generate a probability by inputting my own curves into it. But then I would probably add additional uncertainty on top of that model.
My weak impression is that some of the most gung-ho people do not do this.
### Methodological uncertainty
It's unclear whether we can predict with good accuracy dynamics that may play out across decades, and I might be inclined to discount further based on that. One particular worry is that we might correctly predict that “AI will be a big deal and be dangerous”, but that the danger takes a different shape than we expected.
For this reason, I am more sympathetic to tools other than forecasting for long-term decision-making, e.g., as outlined [here](https://forum.effectivealtruism.org/posts/wyHjpcCxuqFzzRgtX/a-practical-guide-to-long-term-planning-and-suggestions-for). 
### Uncertainty about unknown unknowns
I think that unknown unknowns mostly delay AGI. E.g., covid, nuclear war, and many other things could lead to supply chain disruptions. There are unknown unknowns in the other direction, but the higher one's probability goes, the more unknown unknowns should shift one towards 50%.
### Updating on virtue
I think that _updating on virtue_ is a legitimate move. By this I mean noticing how morally or epistemically virtuous someone is, updating on that about whether their arguments are made in good faith or from a desire to control, and assigning them more or less weight accordingly.
I think that a bunch of people around the CFAR cluster that I was exposed to weren't particularly virtuous, and were willing to go to great lengths to convince people that AI is important. In particular, I think that isolating people from the normal flow of their lives for extended periods is unreasonably effective at making them more pliable and receptive to new and weird ideas, whether those ideas are right or wrong. I am a bit freaked out about the extent to which [ESPR](https://espr-camp.org/), a rationality camp for kids in which I participated, did that.
(Brief aside: An ESPR instructor points out that ESPR separated itself from CFAR after 2019 and has been trying to mitigate these factors. I do think that the difference is important, but this post isn't about ESPR in particular but about AI doom skepticism, so I will not take particular care here.)
Here is a comment from a CFAR cofounder, who has since left the organization, taken from [this](https://www.facebook.com/morphenius/videos/10158119720662635/?comment_id=10158120282677635&reply_comment_id=10158120681527635) Facebook comment thread (paragraph divisions added by me):
> **Question by bystander**: Around 3 minutes, you mention that looking back, you don't think CFAR's real drive was \_actually\_ making people think better. Would be curious to hear you elaborate on what you think the real drive was.
>
> **Answer**: I'm not going to go into it a ton here. It'll take a bit for me to articulate it in a way that really lands as true to me. But a clear-to-me piece is, CFAR always fetishized the end of the world. It had more to do with injecting people with that narrative and propping itself up as important. 
>
> We did a lot of moral worrying about what "better thinking" even means and whether we're helping our participants do that, and we tried to fulfill our moral duty by collecting information that was kind of related to that, but that information and worrying could never meaningfully touch questions like "Are these workshops worth doing at all?" We would ASK those questions periodically, but they had zero impact on CFAR's overall strategy. 
>
> The actual drive in the background was a lot more like "Keep running workshops that wow people" with an additional (usually consciously (!) hidden) thread about luring people into being scared about AI risk in a very particular way and possibly recruiting them to MIRI-type projects. 
>
> Even from the very beginning CFAR simply COULD NOT be honest about what it was doing or bring anything like a collaborative tone to its participants. We would infantilize them by deciding what they needed to hear and practice basically without talking to them about it or knowing hardly anything about their lives or inner struggles, and we'd organize the workshop and lectures to suppress their inclination to notice this and object. 
>
> That has nothing to do with grounding people in their inner knowing; it's exactly the opposite. But it's a great tool for feeling important and getting validation and coercing manipulable people into donating time and money to a Worthy Cause™ we'd specified ahead of time. Because we're the rational ones, right? 😛 
>
> The switch Anna pushed back in 2016 to CFAR being explicitly about xrisk was in fact a shift to more honesty; it just abysmally failed the is/ought distinction in my opinion. And, CFAR still couldn't quite make the leap to full honest transparency even then. ("Rationality for its own sake for the sake of existential risk" is doublespeak gibberish. Philosophical somersaults won't save the fact that the energy behind a statement like that is more about controlling others' impressions than it is about being goddamned honest about what the desire and intention really is.)
The dynamics at ESPR, a rationality camp I was involved with, were at times a bit more dysfunctional than that, particularly before 2019. For that reason, I am inclined to update downwards. I think that this is a personal update, and I don't necessarily expect it to generalize.
I think that some of the same considerations that I have about ESPR might also hold for those who have interacted with other groups seeking to persuade, e.g., mainline CFAR workshops, 80,000 hours career advising calls, ATLAS, or similar. But to be clear, I haven't interacted much with those other groups myself, and my sense is that CFAR—which organized iterations of ESPR up to 2019—went off the rails but that these other organizations haven't.
### Industry vs AI safety community
It's unclear to me what the views of industry people are. In particular, the question seems a bit confused: I want to get at the independent impression that people acquire from working with state-of-the-art AI models, but industry people may already be influenced by AI safety community concerns, so it's unclear how to isolate that independent impression. It doesn't seem undoable, though.
## Suggested decompositions
The above reasons for skepticism lead me to suggest the following decompositions for my forecasting group, Samotsvety, to use when forecasting AGI and its risks:
### Very broad decomposition
I: 
* Will AGI be a big deal?
* Conditional on it being “a big deal”, will it lead to problems?
* Will those problems be existential?
II: 
1. AI capabilities will continue advancing
2. The advancement of AI capabilities will lead to social problems
3. … and eventually to a great catastrophe
4. … and eventually to human extinction
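To make the arithmetic of such a decomposition concrete, here is a minimal sketch; the probabilities below are arbitrary placeholders for illustration, not my actual estimates:

```python
# Chaining decomposition II into a product of conditional probabilities.
# All numbers are hypothetical placeholders, not my estimates.
p_capabilities_advance = 0.9   # 1. AI capabilities keep advancing
p_social_problems = 0.5        # 2. ...conditional on 1
p_great_catastrophe = 0.2      # 3. ...conditional on 1 and 2
p_extinction = 0.3             # 4. ...conditional on 1, 2, and 3

p_doom = (p_capabilities_advance * p_social_problems
          * p_great_catastrophe * p_extinction)
print(round(p_doom, 3))  # 0.027
```

Note how each fuzzy conditional compounds multiplicatively, which is part of why I distrust long chains of reasoning built on fuzzy concepts.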
### Are we right about this stuff decomposition
1. We are right about this AGI stuff 
2. This AGI stuff implies that AGI will be dangerous
3. … and it will lead to human extinction
### Inside the model/outside the model decomposition
I:
* Model/Decomposition X gives a probability
* Are the concepts in the decomposition robust enough to support chains of inference?
* What is the probability of existential risk if they aren't?
II:
* Model/Decomposition X gives a probability
* Is model X roughly correct?
* Are the concepts in the decomposition robust enough to support chains of inference?
* Will the implicit assumptions that it is making pan out?
* What is the probability of existential risk if model X is not correct?
## Implications of skepticism
I view the above as moving me away from certainty that we will get AGI in the short term. For instance, I think that having 70 or 80%+ probabilities on AI catastrophe within our lifetimes is probably just incorrect, insofar as a probability can be incorrect. 
Anecdotally, I recently met someone at an EA social event who a) was uncalibrated, e.g., on Open Philanthropy's [calibration tool](https://www.openphilanthropy.org/calibration), but b) assigned 96% to AGI doom by 2070. Pretty wild stuff.
Ultimately, I'm personally somewhere around 40% for "By 2070, will it become possible and financially feasible to build [advanced-power-seeking](https://arxiv.org/abs/2206.13353) AI systems?", and somewhere around 10% for doom. I don't think that the difference matters all that much for practical purposes, but:
1. I am marginally more concerned about unknown unknowns and other non-AI risks
2. I would view interventions that increase civilizational robustness (e.g., bunkers) more favourably, because these are a bit more robust to unknown risks and could protect against a wider range of risks
3. I don't view AGI soon as particularly likely
4. I view a stance which “aims to safeguard humanity through the 21st century” as more appealing than “Oh fuck AGI risk”
## Conclusion
I've tried to outline some factors about why I feel uneasy with high existential risk estimates. I view the most important points as:
1. Distrust of reasoning chains using fuzzy concepts
2. Distrust of selection effects at the level of arguments
3. Distrust of community dynamics
It's not clear to me whether I have bound myself into a situation in which I can't update from other people's object-level arguments. I might well have, which would mean I am playing on a perhaps-unnecessary hard mode.
If so, I could still update from e.g.:
* Trying to make predictions, and seeing which generators are more predictive of AI progress
* Investigations that I do myself, that lead me to acquire independent impressions, like playing with state-of-the-art models
* Deferring to people that I trust independently, e.g., Gwern
Lastly, I would loathe for the same selection effects to apply to this document: I spent a few days putting it together, but the AI safety community could easily put a few cumulative weeks into arguing against it, just by virtue of being a community.
This is all.
# Acknowledgements
<img src="https://i.imgur.com/hv8GEDS.jpg" class='img-frontpage-center'>
I am grateful to the [Samotsvety](https://samotsvety.org/) forecasters that have discussed this topic with me, and to Ozzie Gooen for comments and review. The above post doesn't necessarily represent the views of other people at the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/), which nonetheless supports my research.
<p><section id='isso-thread'>
<noscript>Javascript needs to be activated to view comments.</noscript>
</section></p>

View File

@ -0,0 +1,104 @@
An in-progress experiment to test how Laplace's rule of succession performs in practice.
==============
_Note_: Of reduced interest to generalist audiences.
## Summary
I compiled a dataset of 206 mathematical conjectures together with the years in which they were posited. Then, in a few years, I intend to check whether the probabilities implied by [Laplace's rule](https://en.wikipedia.org/wiki/Rule_of_succession)—which only depend on the number of years passed since a conjecture was created—are about right.
In a few years, I think this will shed some light on whether Laplace's rule of succession is useful in practice. For people wanting answers more quickly, I also outline some further work which could be done to obtain results now.
The dataset I'm using can be seen [here](https://docs.google.com/spreadsheets/d/1qT01YdTgdUzvOJU6apCBNsUCQjGN6tvwFpHyehfEkc8/edit?usp=sharing) ([a](https://archive.org/details/math-laplace)).
## Probability that a conjecture will be resolved by a given year according to Laplace's law
I estimate the probability that a randomly chosen conjecture will be solved as follows:
<img src="https://i.imgur.com/M9jfZva.jpg" class='img-medium-center'>
That is, the probability that the conjecture will first be solved in the year _n_ is the probability given by Laplace conditional on it not having been solved any year before.
For reference, a “pseudo-count” corresponds to either changing the numerator to an integer higher than one, or to making _n_ higher. This can be used to capture some of the structure that a problem manifests. E.g., if we don't think that the prior probability of a theorem being solved in the first year is around 50%, this can be addressed by adding pseudo-counts.
Code to do these operations in the programming language R can be found [here](https://gist.github.com/NunoSempere/45b16dcb6c9e240a698beb001cb1f266). A dataset that includes these probabilities can be seen [here](https://gist.github.com/NunoSempere/45b16dcb6c9e240a698beb001cb1f266).
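As a sketch of the same calculation in Python rather than the linked R code (`virtual_successes` is my own name for the pseudo-count parameter; the survival product over years telescopes into a closed form):

```python
# Laplace's rule: after n years without a resolution, the probability
# that a conjecture is solved in the next year is 1 / (n + 2).
# Pseudo-counts ("virtual_successes", an assumed name) raise both the
# numerator and denominator, encoding a different prior.

def p_solved_next_year(years_unsolved: int, virtual_successes: int = 0) -> float:
    return (1 + virtual_successes) / (years_unsolved + 2 + virtual_successes)

def p_solved_within(years_since_conjectured: int, horizon: int) -> float:
    """Probability of resolution within `horizon` further years,
    i.e. 1 minus the probability of surviving every year unsolved."""
    p_unsolved = 1.0
    for n in range(years_since_conjectured, years_since_conjectured + horizon):
        p_unsolved *= 1 - p_solved_next_year(n)
    return 1 - p_unsolved

# A conjecture posed 100 years ago: probability of resolution in 3 years.
print(round(p_solved_within(100, 3), 4))  # 0.0288, i.e. 3/104
```

Without pseudo-counts the survival product telescopes to (n+1)/(n+1+horizon), which is why old conjectures get very low per-period probabilities.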
## Expected distribution of the number of resolved conjectures according to Laplace's rule of succession
Using the above probabilities, we can, through sampling, estimate the number of conjectures in our database that will be solved in the next 3, 5, or 10 years. The code to do this is in the same R file linked a paragraph ago.
### For three years
<img src="https://i.imgur.com/0kK1I9Y.png" class='img-medium-center'>
If we calculate the 90% and the 98% confidence intervals, these are respectively (6 to 16) and (4 to 18) problems solved in the next three years.
### For five years
<img src="https://i.imgur.com/K4ES5A9.png" class='img-medium-center'>
If we calculate the 90% and the 98% confidence intervals, these are respectively (11 to 24) and (9 to 27) problems solved in the next five years.
### For ten years
<img src="https://i.imgur.com/ZiFtIP5.png" class='img-medium-center'>
If we calculate the 90% and the 98% confidence intervals, these are respectively (23 to 40) and (20 to 43) problems solved in the next ten years.
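The sampling step can be sketched as follows, in Python rather than the linked R code; the ages here are hypothetical stand-ins for the dataset's actual years-since-conjectured:

```python
import random

# Each conjecture is solved within the horizon with its Laplace-implied
# probability; we tally how many are solved per simulation. The ages are
# made up for illustration; the real ones would come from the dataset.
random.seed(1)
ages = [random.randint(10, 150) for _ in range(206)]  # 206 conjectures

def p_within(age, horizon):
    # telescoping survival product under Laplace's rule
    return 1 - (age + 1) / (age + 1 + horizon)

def simulate(horizon, n_sims=10_000):
    counts = []
    for _ in range(n_sims):
        counts.append(sum(random.random() < p_within(a, horizon) for a in ages))
    return counts

counts = sorted(simulate(horizon=3))
# approximate 90% interval: 5th and 95th percentiles of the samples
print(counts[len(counts) // 20], counts[-len(counts) // 20])
```

Confidence intervals then fall out directly from the percentiles of the sampled counts.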
## Ideas for further work
### Do this experiment for other topics besides mathematical conjectures and for other methods besides Laplace's law
Although I expect that this experiment restricted to mathematical conjectures will already be decently informative, it would also be interesting to look at the performance of Laplace's law for a range of topics.
It might also be worth it to look at other approaches. In particular, I'd be interested in seeing the same experiment but for “[semi-informative priors](https://www.openphilanthropy.org/research/semi-informative-priors-over-ai-timelines/)”—there is no particular reason why that approach only has to apply to super-speculative areas like AI. So an experiment could look at experts trying to come up with semi-informative priors for events that are testable in the next few years, and this might shed some light on the general method.
### Checking whether the predictions from Laplace's law come true
In three, five, and ten years I'll check the number of conjectures which have been resolved. If that falls outside the 99% confidence interval, I will become more skeptical of using Laplace's law for arbitrary domains. I'll then investigate whether Laplace's law could be rescued in some way, e.g., by using its time-invariant [version](https://www.lesswrong.com/posts/wE7SK8w8AixqknArs/a-time-invariant-version-of-laplace-s-rule), by adding some pseudo-counts, or through some other method.
With pseudo-counts, the idea would be that there is some number of pseudo-counts which makes Laplace's law output the correct probability for the three-year period. The question would then be whether that number of pseudo-counts is enough to make good predictions about the five- and ten-year periods.
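A minimal sketch of that fitting procedure, with made-up ages and an invented observed count; the real inputs would come from the dataset:

```python
# Find the number of pseudo-counts k for which the expected number of
# conjectures solved in three years best matches the observed number.
# Ages and the observed count are hypothetical placeholders.
ages = [50, 80, 120, 30, 100]   # years since each conjecture was posed
observed_solved_3y = 1          # hypothetical observation

def p_within(age, horizon, k=0):
    # with k pseudo-counts, the yearly hazard is (1 + k) / (n + 2 + k)
    p_unsolved = 1.0
    for n in range(age, age + horizon):
        p_unsolved *= 1 - (1 + k) / (n + 2 + k)
    return 1 - p_unsolved

def expected_solved(horizon, k):
    return sum(p_within(a, horizon, k) for a in ages)

# pick the k whose expectation is closest to the observation
best_k = min(range(0, 200),
             key=lambda k: abs(expected_solved(3, k) - observed_solved_3y))
print(best_k, round(expected_solved(3, best_k), 2))
```

The fitted `best_k` could then be used, unchanged, to generate the five- and ten-year predictions and see whether they also come true.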
### Comparison against prediction markets
I'd also be curious about posting these conjectures to [Manifold Markets](https://manifold.markets/) or [Metaculus](https://www.metaculus.com/) and seeing if these platforms can outperform Laplace's law.
### Using an older version of the Wikipedia entry to come up with answers now
If someone was interested in resolving the question sooner without having to wait, one could redo this investigation but:
1. look at the [List of unsolved problems in mathematics from 2015](https://en.wikipedia.org/w/index.php?title=List_of_unsolved_problems_in_mathematics&oldid=647366405),
2. check how many of those problems have been resolved since then, and
3. check whether adding pseudo-counts to fit the number of theorems solved by 2018 is predictive of how many problems have been solved by 2022.
The reason I didn't do this myself is that step 2 would be fairly time-intensive, and I was pretty fed up after creating the dataset as outlined in the appendix.
## Acknowledgements
**<img src="https://i.imgur.com/WUqgilk.png" class='img-frontpage-center'>**
This is a project of the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/) (QURI). Thanks to Ozzie Gooen and Nics Olayres for giving comments and suggestions. 
PS: You can subscribe to QURI's posts [here](https://quri.substack.com/), or to my own blog posts [here](https://nunosempere.com/.subscribe/).
## Appendix: Notes on the creation of the dataset
I went through the list of conjectures on this Wikipedia page: [List of unsolved problems in mathematics](https://en.wikipedia.org/w/index.php?title=List_of_unsolved_problems_in_mathematics&oldid=1119489813) ([a](https://web.archive.org/web/20221107124932/https://en.wikipedia.org/w/index.php?title=List_of_unsolved_problems_in_mathematics&oldid=1119489813)) and filtered them as follows:
1. I ignored additional lists.
2. I ignored conjectures which had already been solved.
3. In cases where the date at which the conjecture was made was unclear, I approximated it.
4. I ignored theorems without a full Wikipedia page or a large enough section in another page, as this would have made tracking down the date or the resolution too difficult.
5. I ignored the “Combinatorial games” section, as well as the many prime conjectures, as most entries were too short.
6. For problems that are equivalent, I took the oldest formulation.
7. I sometimes ignored cases where a conjecture had been proven and its proof had led to even more conjectures.
Note that if it turns out that a given conjecture had already been solved by November 2022, it should be excluded from the dataset rather than counted as a positive.
<p>
<section id='isso-thread'>
<noscript>Javascript needs to be activated to view commen
ts.</noscript>
</section>
</p>

View File

@ -0,0 +1,26 @@
<h1>EA no longer an expanding empire.</h1>
<p>PS: Please don&rsquo;t share this newsletter.</p>
<p>In early 2022, the Effective Altruism movement was triumphant. Sam Bankman-Fried was very utilitarian and very cool, and there was such a wealth of funding that the bottleneck was capable people to implement projects. If you had been in the effective altruism community for a while, it was relatively easy to acquire funding. New organizations popped up like mushrooms.</p>
<p>Now the situation looks different. Samo Burja has this interesting book on <a href="https://samoburja.com/wp-content/uploads/2020/11/Great_Founder_Theory_by_Samo_Burja_2020_Manuscript.pdf">Great Founder Theory</a>, from which I&rsquo;ve gotten the notion of an &ldquo;expanding empire&rdquo;. In an expanding empire, like a startup, there are new opportunities and land to conquer, and members can be rewarded with parts of the newly conquered land. The optimal strategy here is <em>unity</em> in the face of adversity. EA in 2022 was just that, a united social movement playing together against the cruelty of nature and history.</p>
<p>My sense is that the tendency for EA in 2023 and going forward will be less like that. With funding drying up, EA will now have to economize and prioritize between different causes. Funding is now more limited, not only because the SBF empire collapsed, but also because the stock market collapsed, which means that Open Philanthropy—now the main funder once again—also has less money. And with economizing in the background, internecine fights become more worth it, because the EA movement isn&rsquo;t trying to grow the pie together, but rather each part will be trying to defend its share of the pie. Fewer offices all over the place, fewer regrantors to fund moonshots. More frugality. You get the idea.</p>
<p>But note that the EA community is mostly made out of very nice people trying their best to do good, so I expect that the above paragraphs will just describe a directional difference, rather than an absolute level.</p>
<p>Personally, some steps to consider might be:</p>
<ul>
<li>Looking for other communities.
<ul>
<li>I&rsquo;ve personally retreated a bit into forecasting and linux programming.</li>
<li>And I&rsquo;ve added comments and a <a href="https://nunosempere.com/.subscribe/">subscription option</a> to my <a href="https://nunosempere.com/blog/">blog</a>, to be a bit less dependent on the EA forum.</li>
</ul>
</li>
<li>Explicitly expecting less funding, fewer EA™ jobs.</li>
<li>Re-evaluate earning to give.</li>
</ul>

View File

@ -0,0 +1,21 @@
EA no longer an expanding empire.
=================================
PS: Please don't share this newsletter.
In early 2022, the Effective Altruism movement was triumphant. Sam Bankman-Fried was very utilitarian and very cool, and there was such a wealth of funding that the bottleneck was capable people to implement projects. If you had been in the effective altruism community for a while, it was relatively easy to acquire funding. New organizations popped up like mushrooms.
Now the situation looks different. Samo Burja has this interesting book on [Great Founder Theory](https://samoburja.com/wp-content/uploads/2020/11/Great_Founder_Theory_by_Samo_Burja_2020_Manuscript.pdf), from which I've gotten the notion of an "expanding empire". In an expanding empire, like a startup, there are new opportunities and land to conquer, and members can be rewarded with parts of the newly conquered land. The optimal strategy here is *unity* in the face of adversity. EA in 2022 was just that, a united social movement playing together against the cruelty of nature and history.
My sense is that the tendency for EA in 2023 and going forward will be less like that. With funding drying up, EA will now have to economize and prioritize between different causes. Funding is now more limited, not only because the SBF empire collapsed, but also because the stock market collapsed, which means that Open Philanthropy—now the main funder once again—also has less money. And with economizing in the background, internecine fights become more worth it, because the EA movement isn't trying to grow the pie together, but rather each part will be trying to defend its share of the pie. Fewer offices all over the place, fewer regrantors to fund moonshots. More frugality. You get the idea.
But note that the EA community is mostly made out of very nice people trying their best to do good, so I expect that the above paragraphs will just describe a directional difference, rather than an absolute level.
Personally, some steps to consider might be:
- Looking for other communities.
- I've personally retreated a bit into forecasting and linux programming.
- And I've added comments and a [subscription option](https://nunosempere.com/.subscribe/) to my [blog](https://nunosempere.com/blog/), to be a bit less dependent on the EA forum.
- Explicitly expecting less funding, fewer EA™ jobs.
- Re-evaluate earning to give.

View File

@ -0,0 +1,41 @@
Effective Altruism No Longer an Expanding Empire.
=================================================
In early 2022, the Effective Altruism movement was triumphant. Sam Bankman-Fried was very utilitarian and very cool, and there was such a wealth of funding that the bottleneck was capable people to implement projects. If you had been in the effective altruism community for a while, it was easier to acquire funding. Around me, I saw new organizations pop up like mushrooms.
Now the situation looks different. Samo Burja has this interesting book on [Great Founder Theory][0], from which I've gotten the notion of an “expanding empire”. In an expanding empire, like a startup, there are new opportunities and land to conquer, and members can be rewarded with parts of the newly conquered land. The optimal strategy here is _unity_. EA in 2022 was just that, a united social movement playing together against the cruelty of nature and history.
![](https://i.imgur.com/3SfMuU1.jpg)
*<br>Imagine the Spanish empire, without the empire.*
My sense is that the tendency for EA in 2023 and going forward will be less like that. Funding is now more limited, not only because the FTX empire collapsed, but also because the stock market collapsed, which means that Open Philanthropy—now the main funder once again—also has less money. With funding drying up, EA will now have to economize and prioritize between different causes. And with economizing in the background, internecine fights become more worth it, because the EA movement isn't trying to grow the pie together, but rather each part will be trying to defend its share of the pie. Fewer shared offices will exist all over the place, fewer regrantors to fund moonshots. More frugality. So EA will become more like a bureaucracy and less like a startup. You get the idea.
But note that the EA community is mostly made out of very nice people trying their best to do good, so I expect that the above paragraphs will just describe a directional difference, rather than an absolute level.
Some steps to consider might be:
* Looking for other communities.
* I've personally retreated a bit into forecasting and linux programming.
* And I've added comments and a [subscription option][2] to my [blog][3], to be a bit less dependent on the EA forum, where I was previously posting all my research. I'm also [tweeting][4] more.
* Explicitly expecting less funding, fewer EA™ jobs.
* Not depending on the EA community, generally
* Re-evaluate earning to give.
I've been meaning to write something like the above for a bit. At 1AM, although I think it's an a-ok attempt, I've noticed that I do have a negativity bias, so do take everything I say with a grain of salt.
That's all I have for now,
Nuño.
[0]: https://samoburja.com/wp-content/uploads/2020/11/Great_Founder_Theory_by_Samo_Burja_2020_Manuscript.pdf
[1]: https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F1cd9e8d6-1f3c-4c29-8105-d089ab17baed_1070x719.png
[2]: https://nunosempere.com/.subscribe/
[3]: https://nunosempere.com/blog/
[4]: https://twitter.com/NunoSempere
<p>
<section id='isso-thread'>
<noscript>Javascript needs to be activated to view comments.</noscript>
</section>
</p>

View File

@ -0,0 +1,3 @@
no matter where you stand
=========================
<div style="margin-left: auto; margin-right: auto;">When in a dark night the chamber of guf whispers<br>that I have failed, that I am failing, that I'll fail<br>I become mute, lethargic, frightful and afraid<br>of the pain I'll cause and the pain I'll endure.<br><br>Many were the times that I started but stopped<br>Many were the balls that I juggled and dropped<br>Many the people I discouraged and spooked<br>And the times I did good, I did less than I'd hoped<br><br>And then I remember that measure is unceasing,<br>that if you are a good man, why not a better man?<br>that if a better man, why not a great man?<br><br>and if you are a great man, why not yet a god?<br>And if a god, why not yet a better god?<br>measure is unceasing, no matter where you stand<br></div>

View File

@ -0,0 +1,2 @@
/usr/bin/markdown -f fencedcode -f ext -f footnote -f latex $1

View File

@ -0,0 +1,163 @@
<h1>Just-in-time Bayesianism</h1>
<h2>Summary</h2>
<p>I propose a variant of subjective Bayesianism that I think captures some important aspects of how humans<sup id="fnref:1"><a href="#fn:1" rel="footnote">1</a></sup> reason in practice, given that Bayesian inference is normally too computationally expensive. I compare it to some theories in the philosophy of science and briefly mention possible alternatives. In conjunction with Laplace&rsquo;s law, I claim that it might be able to explain some aspects of <a href="https://astralcodexten.substack.com/p/trapped-priors-as-a-basic-problem">trapped priors</a>.</p>
<h2>A motivating problem in subjective Bayesianism</h2>
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<!-- Note: to correctly render this math, compile this markdown with
/usr/bin/markdown -f fencedcode -f ext -f footnote -f latex $1
where /usr/bin/markdown is the discount markdown binary
https://github.com/Orc/discount
http://www.pell.portland.or.us/~orc/Code/discount/
-->
<p>Bayesianism as an epistemology has elegance and parsimony, stemming from its inevitability as formalized by <a href="https://en.wikipedia.org/wiki/Cox's_theorem">Cox&rsquo;s</a> <a href="https://nunosempere.com/blog/2022/08/31/on-cox-s-theorem-and-probabilistic-induction/">theorem</a>. For this reason, it has a certain magnetism as an epistemology.</p>
<p>However, consider the following example: a subjective Bayesian who has only two hypotheses about a coin:</p>
<p>\[
\begin{cases}
\text{it has bias } 2/3\text{ tails }1/3 \text{ heads }\\
\text{it has bias } 1/3\text{ tails }2/3 \text{ heads }
\end{cases}
\]</p>
<p>Now, as that subjective Bayesian observes a sequence of coin tosses, he might end up very confused. For instance, if he only observes tails, he will end up assigning almost all of his probability to the first hypothesis. Or if he observes 50% tails and 50% heads, he will end up assigning equal probability to both hypotheses. But in neither case are his hypotheses a good representation of reality.</p>
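<p>A minimal sketch of this dynamic (the function name and the Python framing are mine, not from the original formulation):</p>

```python
# A Bayesian who only entertains bias 1/3 and bias 2/3 for heads,
# updating on a stream of tosses. On all-tails data he becomes nearly
# certain of the tails-heavy hypothesis, even though neither hypothesis
# need describe the actual coin.
def posterior_h1(tosses):
    """P(hypothesis 1: bias 1/3 heads, 2/3 tails | tosses), 50/50 prior."""
    p_h1, p_h2 = 0.5, 0.5
    for t in tosses:
        l1 = 1/3 if t == "H" else 2/3   # likelihood under hypothesis 1
        l2 = 2/3 if t == "H" else 1/3   # likelihood under hypothesis 2
        p_h1, p_h2 = p_h1 * l1, p_h2 * l2
        total = p_h1 + p_h2
        p_h1, p_h2 = p_h1 / total, p_h2 / total
    return p_h1

print(round(posterior_h1("T" * 20), 3))   # near 1: almost certain of hypothesis 1
print(round(posterior_h1("TH" * 10), 3))  # 0.5: equal probability, as in the text
```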
<p>Now, this could be fixed by adding more hypotheses, for instance by assigning some probability density to each possible bias. This would work for the example of a coin toss, but might not work for more complex real-life examples: representing many hypotheses about the war in Ukraine or about technological progress in their fullness would be too much for humans.<sup id="fnref:2"><a href="#fn:2" rel="footnote">2</a></sup></p>
<p><img src="https://i.imgur.com/vqc48uT.png" alt="" />
<strong>Original subjective Bayesianism</strong></p>
<p>So on the one hand, if our set of hypotheses is too narrow, we risk not incorporating a hypothesis that reflects the real world. But on the other hand, if we try to incorporate too many hypotheses, our mind explodes because it is too tiny. Whatever shall we do?</p>
<h2>Just-in-time Bayesianism by analogy to just-in-time compilation</h2>
<p><a href="https://en.wikipedia.org/wiki/Just-in-time_compilation">Just-in-time compilation</a> refers to a method of executing programs such that their instructions are translated to machine code not at the beginning, but rather as the program is executed.</p>
<p>By analogy, I define just-in-time Bayesianism as a variant of subjective Bayesianism in which inference is initially performed over a limited number of hypotheses, but if and when these hypotheses fail to be sufficiently predictive of the world, more are searched for and past Bayesian inference is recomputed.</p>
<p><img src="https://i.imgur.com/bptVgcS.png" alt="" />
<strong>Just-in-time Bayesianism</strong></p>
<p>I intuit that this method could be used to run a version of Solomonoff induction that converges to the correct hypothesis that describes a computable phenomenon in a finite (but still enormous) amount of time. More generally, I intuit that just-in-time Bayesianism will have some nice convergence guarantees.</p>
<h2>As this relates to the philosophy of science</h2>
<p>The <a href="https://en.wikipedia.org/wiki/Strong_programme">strong programme</a> in the sociology of science aims to explain science only with reference to the sociological conditions that bring it about. There are also various accounts of science which aim to faithfully describe how science is actually practiced.</p>
<p>Well, I&rsquo;m more attracted to trying to explain the workings of science with reference to the ideal mechanism from which they fall short. And I think that just-in-time Bayesianism parsimoniously explains some aspects with reference to:</p>
<ol>
<li>Bayesianism as the optimal/rational procedure for assigning degrees of belief to statements.</li>
<li>necessary patches which result from the lack of infinite computational power.</li>
</ol>
<p>As a result, just-in-time Bayesianism not only does well in the domains in which normal Bayesianism does well:</p>
<ul>
<li>It smoothly processes the distinction between background knowledge and new revelatory evidence</li>
<li>It grasps that both confirmatory and falsificatory evidence are important&mdash;which inductionism/confirmationism and naïve forms of falsificationism both fail at</li>
<li>It parsimoniously dissolves the problem of induction: one never reaches certainty, and instead accumulates Bayesian evidence.</li>
</ul>
<p>But it is also able to shed some light in some phenomena where alternative theories of science have traditionally fared better:</p>
<ul>
<li>It interprets the difference between scientific revolutions (where the paradigm changes) and normal science (where the implications of the paradigm are fleshed out) as a result of finite computational power</li>
<li>It does a bit better at explaining the problem of priors, where the priors are just the hypothesis that humanity has had enough computing power to generate.</li>
</ul>
<p>Though it is still not perfect:</p>
<ul>
<li>the &ldquo;problem of priors&rdquo; is still not really dissolved to a nice degree of satisfaction.</li>
<li>the step of acquiring more hypotheses is not really explained, and it is also a feature of other philosophies of science, so it&rsquo;s unclear that this is that much of a win for just-in-time Bayesianism.</li>
</ul>
<p>So anyways, in philosophy of science the main advantage of just-in-time Bayesianism is that it keeps some of the more compelling features of Bayesianism, while at the same time also explaining some features that other philosophy of science theories capture.</p>
<h2>As it relates to ignoring small probabilities</h2>
<p><a href="https://philpapers.org/archive/KOSTPO-18.pdf">Kosonen 2022</a> explores a setup in which an agent ignores small probabilities of vast value, in the context of trying to deal with the &ldquo;fanaticism&rdquo; of various ethical theories.</p>
<p>Here is my perspective on this dilemma:</p>
<ul>
<li>On the one hand, neglecting small probabilities has the benefit of making expected value calculations computationally tractable: if we didn&rsquo;t ignore at least some probabilities, we would never finish these calculations.</li>
<li>But on the other hand, the various available methods for ignoring small probabilities are not robust. For example, they are not going to be robust to situations in which these probabilities shift (see p. 181, &ldquo;The Independence Money Pump&rdquo;, <a href="https://philpapers.org/archive/KOSTPO-18.pdf">Kosonen 2022</a>).
<ul>
<li>For example, one could have been very sure that the Sun orbits the Earth, which could have some theological and moral implications. In fact, one could be so sure that one could assign some very small&mdash;if not infinitesimal&mdash;probability to the Earth orbiting the Sun instead. But if one ignores very small probabilities ex-ante, one might not be able to update in the face of new evidence.</li>
</ul>
</li>
</ul>
<p>Just-in-time Bayesianism might solve this problem by indeed ignoring small probabilities at the beginning, but expanding the search for hypotheses if current hypotheses aren&rsquo;t very predictive of the world we observe.</p>
<h2>Some other related theories and alternatives.</h2>
<ul>
<li>Non-Bayesian epistemology: e.g., falsificationism, positivism, etc.</li>
<li><a href="https://www.alignmentforum.org/posts/Zi7nmuSmBFbQWgFBa/infra-bayesianism-unwrapped">Infra-Bayesianism</a>, a theory of Bayesianism which, amongst other things, is robust to adversaries filtering evidence</li>
<li><a href="https://intelligence.org/files/LogicalInduction.pdf">Logical induction</a>, which also seems uncomputable on account of considering all hypotheses, but which refines itself in finite time</li>
<li>Predictive processing, in which an agent changes the world so that it conforms to its internal model.</li>
<li>etc.</li>
</ul>
<h2>As this relates to the problem of trapped priors</h2>
<p>In <a href="https://astralcodexten.substack.com/p/trapped-priors-as-a-basic-problem">Trapped Priors As A Basic Problem Of Rationality</a>, Scott Alexander considers the case of a man who was previously unafraid of dogs, and then had a scary experience related to a dog&mdash;for our purposes imagine that they were bitten by a dog.</p>
<p>Just-in-time Bayesianism would explain this as follows.</p>
<ul>
<li>At the beginning, the man had just one hypothesis, which is &ldquo;dogs are fine&rdquo;</li>
<li>The man is bitten by a dog. Society claims that this was a freak accident, but this doesn&rsquo;t explain the man&rsquo;s experiences. So the man starts a search for new hypotheses</li>
<li>After the search, the new hypotheses and their probabilities might be something like:</li>
</ul>
<p>\[
\begin{cases}
\text{Dogs are fine, this was just a freak accident }\\
\text{Society is lying. Dogs are not fine, but rather they bite with a frequency of } \frac{2}{n+2}\text{, where n is the number of total encounters the man has had}
\end{cases}
\]</p>
<p>The second estimate is the estimate produced by <a href="https://en.wikipedia.org/wiki/Rule_of_succession">Laplace&rsquo;s law</a>&mdash;an instance of Bayesian reasoning given an ignorance prior&mdash;given one &ldquo;success&rdquo; (a dog biting a human) and \(n\) &ldquo;failures&rdquo; (a dog not biting a human).</p>
<p>Now, because the first hypothesis assigns very low probability to what the man has experienced, most of the probability goes to the second hypothesis.</p>
<p>But now, with more and more encounters, the probability assigned by the second hypothesis will decay as \(\frac{2}{n+2}\), where \(n\) is the number of times the man interacts with a dog. But this goes down very slowly:</p>
<p><img src="https://imgur.com/nIbnexh.png" alt="" /></p>
<p>In particular, you need to experience around as many interactions as you previously have without a dog for \(p(n) =\frac{2}{n+2}\) to halve. But note that this in expectation produces another dog bite! Hence the trapped priors.</p>
<h2>Conclusion</h2>
<p>In conclusion, I sketched a simple variation of subjective Bayesianism that is able to deal with limited computing power. I find that it sheds some light on various fields, and I considered cases in the philosophy of science, in the discounting of small probabilities in moral philosophy, and in the applied rationality community.</p>
<div class="footnotes">
<hr/>
<ol>
<li id="fn:1">
I think that the model has more explanatory power when applied to groups of humans that can collectively reason.<a href="#fnref:1" rev="footnote">&#8617;</a></li>
<li id="fn:2">
In the limit, we would arrive at Solomonoff induction, a model of perfect inductive inference that assigns a probability to all computable hypotheses. <a href="http://www.vetta.org/documents/legg-1996-solomonoff-induction.pdf">Here</a> is an explanation of Solomonoff induction<sup id="fnref:3"><a href="#fn:3" rel="footnote">3</a></sup>.<a href="#fnref:2" rev="footnote">&#8617;</a></li>
<li id="fn:3">
The author appears to be the <a href="https://en.wikipedia.org/wiki/Shane_Legg">cofounder of DeepMind</a>.<a href="#fnref:3" rev="footnote">&#8617;</a></li>
</ol>
</div>

Binary file not shown.


View File

@ -0,0 +1,31 @@
## Plot the probability given by Laplace's rule of succession,
## p(n) = (successes + 1) / (n + 2) with one success, as n grows.
library(ggplot2)
library(ggthemes) # for theme_tufte

l <- list()
l$n <- 1:100
l$p <- 2 / (l$n + 2)
l <- as.data.frame(l)

title_text <- "Probability assigned by Laplace's rule of succession\nas the number of trials increases"
label_x_axis <- "number of trials"
label_y_axis <- "probability"
ggplot(data=l, aes(x=n, y=p))+
geom_point(size = 0.5, color="navyblue")+
labs(
title=title_text,
subtitle=element_blank(),
x=label_x_axis,
y=label_y_axis
) +
theme_tufte() +
theme(
legend.title = element_blank(),
plot.title = element_text(hjust = 0.5),
plot.subtitle = element_text(hjust = 0.5),
legend.position="bottom",
legend.box="vertical",
axis.text.x=element_text(angle=60, hjust=1),
plot.background=element_rect(fill = "white",colour = NA)
)

View File

@ -0,0 +1,137 @@
Just-in-time Bayesianism
========================
I propose a simple variant of subjective Bayesianism that I think captures some important aspects of how humans[^1] reason in practice given that Bayesian inference is normally too computationally expensive. I apply it to the problem of [trapped priors](https://astralcodexten.substack.com/p/trapped-priors-as-a-basic-problem), to discounting small probabilities, and mention how it relates to other theories in the philosophy of science.
### A motivating problem in subjective Bayesianism
Bayesianism as an epistemology has elegance and parsimony, stemming from its inevitability as formalized by [Cox's](https://en.wikipedia.org/wiki/Cox's_theorem) [theorem](https://nunosempere.com/blog/2022/08/31/on-cox-s-theorem-and-probabilistic-induction/). For this reason, it has a certain magnetism as an epistemology.
However, consider the following example: a subjective Bayesian who has only two hypotheses about a coin:
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<!-- Note: to correctly render this math, compile this markdown with
/usr/bin/markdown -f fencedcode -f ext -f footnote -f latex $1
where /usr/bin/markdown is the discount markdown binary
https://github.com/Orc/discount
http://www.pell.portland.or.us/~orc/Code/discount/
-->
\[
\begin{cases}
\text{it has bias } 2/3\text{ tails }1/3 \text{ heads }\\
\text{it has bias } 1/3\text{ tails }2/3 \text{ heads }
\end{cases}
\]
Now, as that subjective Bayesian observes a sequence of coin tosses, he might end up very confused. For instance, if he only observes tails, he will end up assigning almost all of his probability to the first hypothesis. Or if he observes 50% tails and 50% heads, he will end up assigning equal probability to both hypotheses. But in neither case are his hypotheses a good representation of reality.
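This confusion is easy to check numerically. The sketch below is mine, not from the post: it applies Bayes' rule over only the two hypotheses above and feeds the result data from a fair coin, and then from an all-tails sequence.

```python
# Minimal check of the two-hypothesis example (code mine, not from the
# post): Bayesian updating restricted to P(heads) = 1/3 and 2/3.

def posterior(tosses, hypotheses, prior):
    """Posterior over hypotheses, each given as a value of P(heads)."""
    unnorm = []
    for p_heads, p_prior in zip(hypotheses, prior):
        like = 1.0
        for t in tosses:
            like *= p_heads if t == "H" else 1 - p_heads
        unnorm.append(like * p_prior)
    z = sum(unnorm)
    return [u / z for u in unnorm]

# A fair coin, 50 heads and 50 tails: the posterior stays at 50/50,
# even though neither hypothesis describes the coin.
print(posterior(["H", "T"] * 50, [1 / 3, 2 / 3], [0.5, 0.5]))

# Only tails: nearly all probability goes to the first hypothesis,
# which is also not a good description of the coin.
print(posterior(["T"] * 20, [1 / 3, 2 / 3], [0.5, 0.5]))
```
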
Now, this could be fixed by adding more hypotheses, for instance by assigning some probability density to each possible bias. This would work for the example of a coin toss, but might not work for more complex real-life examples: representing many hypotheses about the war in Ukraine or about technological progress in their fullness would be too much for humans.[^2]
<img src="https://i.imgur.com/vqc48uT.png" alt="pictorial depiction of the Bayesian algorithm" style="display: block; margin-left: auto; margin-right: auto; width: 30%;" >
So on the one hand, if our set of hypotheses is too narrow, we risk not incorporating a hypothesis that reflects the real world. But on the other hand, if we try to incorporate too many hypotheses, our mind explodes because it is too tiny. Whatever shall we do?
### Just-in-time Bayesianism by analogy to just-in-time compilation
[Just-in-time compilation](https://en.wikipedia.org/wiki/Just-in-time_compilation) refers to a method of executing programs such that their instructions are translated to machine code not at the beginning, but rather as the program is executed.
By analogy, I define just-in-time Bayesianism as a variant of subjective Bayesianism in which inference is initially performed over a limited number of hypotheses, but if and when these hypotheses fail to be sufficiently predictive of the world, more are searched for and past Bayesian inference is recomputed. This would look as follows:
<img src="https://i.imgur.com/CwLA5EG.png" alt="pictorial depiction of the JIT Bayesian algorithm" style="display: block; margin-left: auto; margin-right: auto; width: 50%;" >
I intuit that this method could be used to run a version of Solomonoff induction that converges to the correct hypothesis that describes a computable phenomenon in a finite (but still enormous) amount of time. More generally, I intuit that just-in-time Bayesianism will have some nice convergence guarantees.
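The loop above can be sketched in code for the coin example. Everything in this sketch is my own illustration, not the post's: the "hypothesis search" is just a finer grid over coin biases, and "sufficiently predictive" is operationalized as a likelihood-ratio threshold against the best bias in hindsight.

```python
# Toy just-in-time Bayesian for coin flips (names and threshold mine).
# Start with few hypotheses; when all of them are too surprised by the
# data, widen the hypothesis set and recompute from the stored data.

def likelihood(p_heads, tosses):
    like = 1.0
    for t in tosses:
        like *= p_heads if t == "H" else 1 - p_heads
    return like

def jit_posterior(tosses, hypotheses, threshold=1e-6):
    while True:
        likes = {p: likelihood(p, tosses) for p in hypotheses}
        # Compare against a hypothesis tuned to the data in hindsight:
        best = likelihood(tosses.count("H") / len(tosses), tosses)
        if best == 0 or max(likes.values()) / best >= threshold:
            break  # some current hypothesis is predictive enough
        # Hypothesis search: refine to a denser grid of biases, then
        # rerun all past inference on the next pass through the loop.
        n = 2 * len(hypotheses) + 1
        hypotheses = [k / (n + 1) for k in range(1, n + 1)]
    z = sum(likes.values())
    return {p: like / z for p, like in likes.items()}

# 90 heads in 100 tosses: neither 1/3 nor 2/3 explains this well, so
# the hypothesis set is expanded before the posterior is returned.
post = jit_posterior(["H"] * 90 + ["T"] * 10, [1 / 3, 2 / 3])
print(max(post, key=post.get))  # 5/6, the expanded grid's best bias
```

On easy data (say, an even mix of heads and tails) the check passes immediately and the original two-hypothesis inference is returned unchanged, so the expansion step only costs anything when the current hypotheses are failing.
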
### As this relates to...
#### ignoring small probabilities
[Kosonen 2022](https://philpapers.org/archive/KOSTPO-18.pdf) explores a setup in which an agent ignores small probabilities of vast value, in the context of trying to deal with the "fanaticism" of various ethical theories.
Here is my perspective on this dilemma:
- On the one hand, neglecting small probabilities has the benefit of making expected value calculations computationally tractable: if we didn't ignore at least some possibilities, we would never finish these calculations.
- But on the other hand, the various available methods for ignoring small probabilities are not robust. For example, they are not going to be robust to situations in which these probabilities shift (see p. 181, "The Independence Money Pump", [Kosonen 2022](https://philpapers.org/archive/KOSTPO-18.pdf)).
- For example, one could have been very sure that the Sun orbits the Earth---which could have some theological and moral implications---and thus initially assign a very small probability to the reverse. But if one ignores very small probabilities ex-ante, one might not be able to update in the face of new evidence.
- Similarly, one could have assigned very small probability to a world war. But if one initially discarded this probability completely, one would not be able to update in the face of new evidence as war approaches.
Just-in-time Bayesianism might solve this problem by indeed ignoring small probabilities at the beginning, but expanding the search for hypotheses if current hypotheses aren't very predictive of the world we observe. In particular, if the chance of a possibility rises continuously before it happens, just-in-time Bayesianism might have some time to deal with new unexpected possibilities.
#### ...the problem of trapped priors
In [Trapped Priors As A Basic Problem Of Rationality](https://astralcodexten.substack.com/p/trapped-priors-as-a-basic-problem), Scott Alexander considers the case of a man who was previously unafraid of dogs, and then had a scary experience related to a dog---for our purposes imagine that they were bitten by a dog.
Just-in-time Bayesianism would explain this as follows.
- At the beginning, the man had just one hypothesis, which is "dogs are fine"
- The man is bitten by a dog. Society claims that this was a freak accident, but this doesn't explain the man's experiences. So the man starts a search for new hypotheses
- After the search, the new hypotheses and their probabilities might be something like:
\[
\begin{cases}
\text{Dogs are fine, this was just a freak accident }\\
\text{Society is lying. Dogs are not fine, but rather they bite with a frequency of } \frac{2}{n+2}\text{, where n is the number of total encounters the man has had}
\end{cases}
\]
The second estimate is the estimate produced by [Laplace's law](https://en.wikipedia.org/wiki/Rule_of_succession)---an instance of Bayesian reasoning given an ignorance prior---given one "success" (a dog biting a human) and \(n\) "failures" (a dog not biting a human).
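For reference, Laplace's rule follows from putting a uniform prior on the dog's bite propensity \(\theta\) and conditioning on \(s\) bites in \(n\) encounters (the notation here is mine):

\[
P(\text{bite}_{n+1} \mid s \text{ bites in } n) = \frac{\int_0^1 \theta \cdot \theta^s (1-\theta)^{n-s} \, d\theta}{\int_0^1 \theta^s (1-\theta)^{n-s} \, d\theta} = \frac{s+1}{n+2}
\]

which, with one bite (\(s = 1\)), gives the \(\frac{2}{n+2}\) above.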
Now, because the first hypothesis assigns very low probability to what the man has experienced, a whole bunch of the probability goes to the second hypothesis. Note that the prior degree of credence to assign to this second hypothesis *isn't* governed by Bayes' law, and so one can't do a straightforward Bayesian update.
But now, with more and more encounters, the probability assigned by the second hypothesis will decay as \(\frac{2}{n+2}\), where \(n\) is the number of times the man interacts with a dog. But this goes down very slowly:
![](https://i.imgur.com/UntdNrR.png)
In particular, you need to experience around as many interactions as you previously have without a dog for \(p(n) =\frac{2}{n+2}\) to halve. But note that this in expectation approximately produces another dog bite! Hence the optimal move might be to avoid encountering new evidence (because the chance of another dog bite is now too large), hence the trapped priors.
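The arithmetic behind that last claim can be checked directly (a quick sketch; the function names are mine):

```python
# Numeric check of the halving claim above (code mine): with one bite
# in n encounters, Laplace's rule gives p(n) = 2/(n+2). Halving that
# takes n + 2 further encounters, which in expectation bring roughly
# 2*ln(2) ~ 1.4 additional bites under the second hypothesis.
import math

def laplace(bites, encounters):
    """Rule of succession: (successes + 1) / (trials + 2)."""
    return (bites + 1) / (encounters + 2)

n = 100
p_now = laplace(1, n)  # 2/102
m = 2 * n + 2          # encounter count at which p has halved
assert math.isclose(laplace(1, m), p_now / 2)

# Expected further bites while waiting for the halving:
expected_bites = sum(laplace(1, k) for k in range(n, m))
print(round(expected_bites, 2))  # close to 2*ln(2), i.e. about 1.39
```
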
#### ...philosophy of science
The [strong programme](https://en.wikipedia.org/wiki/Strong_programme) in the sociology of science aims to explain science only with reference to the sociological conditions that bring it about. There are also various accounts of science which aim to faithfully describe how science is actually practiced.
Well, I'm more attracted to trying to explain the workings of science with reference to the ideal mechanism from which they fall short. And I think that just-in-time Bayesianism parsimoniously explains some aspects with reference to:
1. Bayesianism as the optimal/rational procedure for assigning degrees of belief to statements.
2. necessary patches which result from the lack of infinite computational power.
As a result, just-in-time Bayesianism not only does well in the domains in which normal Bayesianism does well:
- It smoothly processes the distinction between background knowledge and new revelatory evidence
- It grasps that both confirmatory and falsificatory evidence are important---which inductionism/confirmationism and naïve forms of falsificationism both fail at
- It parsimoniously dissolves the problem of induction: one never reaches certainty, and instead accumulates Bayesian evidence.
But it is also able to shed some light in some phenomena where alternative theories of science have traditionally fared better:
- It interprets the difference between scientific revolutions (where the paradigm changes) and normal science (where the implications of the paradigm are fleshed out) as a result of finite computational power
- It does a bit better at explaining the problem of priors, where the priors are just the hypothesis that humanity has had enough computing power to generate.
Though it is still not perfect:
- the "problem of priors" is still not really dissolved to a nice degree of satisfaction.
- the step of acquiring more hypotheses is not really explained, and it is also a feature of other philosophies of science, so it's unclear that this is that much of a win for just-in-time Bayesianism.
So anyways, in philosophy of science the main advantage of just-in-time Bayesianism is that it keeps some of the more compelling features of Bayesianism, while at the same time also explaining some features that other philosophy of science theories capture.
### Some other related theories and alternatives.
- Non-Bayesian epistemology: e.g., falsificationism, positivism, etc.
- [Infra-Bayesianism](https://www.alignmentforum.org/posts/Zi7nmuSmBFbQWgFBa/infra-bayesianism-unwrapped), a theory of Bayesianism which, amongst other things, is robust to adversaries filtering evidence
- [Logical induction](https://intelligence.org/files/LogicalInduction.pdf), which also seems uncomputable on account of considering all hypotheses, but which refines itself in finite time
- Predictive processing, in which an agent changes the world so that it conforms to its internal model.
- etc.
### Conclusion
In conclusion, I sketched a simple variation of subjective Bayesianism that is able to deal with limited computing power. I find that it sheds some light on various fields, and I considered cases in the philosophy of science, in the discounting of small probabilities in moral philosophy, and in the applied rationality community.
[^1]: I think that the model has more explanatory power when applied to groups of humans that can collectively reason.
[^2]: In the limit, we would arrive at Solomonoff induction, a model of perfect inductive inference that assigns a probability to all computable hypotheses. [Here](http://www.vetta.org/documents/legg-1996-solomonoff-induction.pdf) is an explanation of Solomonoff induction[^3].
[^3]: The author appears to be the [cofounder of DeepMind](https://en.wikipedia.org/wiki/Shane_Legg).
<p>
<section id='isso-thread'>
<noscript>Javascript needs to be activated to view comments.</noscript>
</section>
</p>

View File

@ -1,12 +1,11 @@
## Forecasting
To do: add references to Samotsvety Forecasting, Quantified Uncertainty tooling.
My forecasting group is known as Samotsvety. You can read more about it [here](https://samotsvety.org/).
### Newsletter
I am perhaps most well-known for my monthly forecasting _newsletter_. It can be found both [on Substack](https://forecasting.substack.com/) and [on the EA Forum](https://forum.effectivealtruism.org/s/HXtZvHqsKwtAYP6Y7). Besides its monthly issues, I've also written:
- (active) [We are giving $10k as forecasting micro-grants](https://forum.effectivealtruism.org/posts/oqFa8obfyEmvD79Jn/we-are-giving-usd10k-as-forecasting-micro-grants)
- [Looking back at 2021](https://forecasting.substack.com/p/looking-back-at-2021)
- [Forecasting Postmortem: The Fall of Kabul](https://forecasting.substack.com/p/postmortem-the-fall-of-kabul) (paywalled)
- [2020: Forecasting in Review](https://forecasting.substack.com/p/2020-forecasting-in-review)
@ -21,6 +20,7 @@ As part of my research at the [Quantified Uncertainty Research Institute](https:
- [Amplifying generalist research via forecasting models of impact and challenges](https://forum.effectivealtruism.org/posts/ZCZZvhYbsKCRRDTct/part-1-amplifying-generalist-research-via-forecasting-models) and [part 2](https://forum.effectivealtruism.org/posts/ZTXKHayPexA6uSZqE/part-2-amplifying-generalist-research-via-forecasting).
- [Real-Life Examples of Prediction Systems Interfering with the Real World](https://www.lesswrong.com/posts/6bSjRezJDxR2omHKE/real-life-examples-of-prediction-systems-interfering-with)
- [Introducing Metaforecast: A Forecast Aggregator and Search Tool](https://forum.effectivealtruism.org/posts/tEo5oXeSNcB3sYr8m/introducing-metaforecast-a-forecast-aggregator-and-search)
- [Introduction to Fermi estimates](https://nunosempere.com/blog/2022/08/20/fermi-introduction/)
I also have a few _minor pieces_:
@ -39,6 +39,15 @@ I also maintain [this database](https://docs.google.com/spreadsheets/d/1XB1GHfizN
### Funding
- (active) [We are giving $10k as forecasting micro-grants](https://forum.effectivealtruism.org/posts/oqFa8obfyEmvD79Jn/we-are-giving-usd10k-as-forecasting-micro-grants)
I have occasionally advised philanthropic funders—mostly from the effective altruism community—on forecasting related topics and projects.
I have run a few contests:
- [Announcing the Forecasting Innovation Prize](https://forum.effectivealtruism.org/posts/8Nwy3tX2WnDDSTRoi/announcing-the-forecasting-innovation-prize)
- I have also occasionally advised EA funders on forecasting related topics and projects.
- [We are giving $10k as forecasting micro-grants](https://forum.effectivealtruism.org/posts/oqFa8obfyEmvD79Jn/we-are-giving-usd10k-as-forecasting-micro-grants)
- [$1,000 Squiggle Experimentation Challenge](https://forum.effectivealtruism.org/posts/ZrWuy2oAxa6Yh3eAw/usd1-000-squiggle-experimentation-challenge)
- [$5k challenge to quantify the impact of 80,000 hours' top career paths](https://forum.effectivealtruism.org/posts/noDYmqoDxYk5TXoNm/usd5k-challenge-to-quantify-the-impact-of-80-000-hours-top)
### Squiggle
I've done a bunch of work around Squiggle, a language for creating quick probability estimates. You can read about this [here](https://forum.effectivealtruism.org/topics/squiggle).

View File

@ -1,5 +1,9 @@
## Gossip
_2023/01/29_: I've updated the forecasting and research pages in this website, they should now be a bit more up to date.
_2022/12/09_: Holden Karnofsky doesn't [answer comments](https://twitter.com/NunoSempere/status/1601256399056424960).
_2022/11/20_: I've updated my tally of EA funding @ [Some data on the stock of EA™ funding](https://nunosempere.com/blog/2022/11/20/brief-update-ea-funding/). It's possible that the comparison with the now absent FTX funding might be enlightening to readers.
![](https://i.imgur.com/RwD1pP9.png)
@ -33,3 +37,5 @@ _2022/02/20_: At the recommendation of Misha Yagudin, I buy around $1.5k worth o
_2022/02/18_: I write about what my computer setup looks like, [here](https://forum.effectivealtruism.org/posts/dzPtGwEFiqCFFGLhH/as-an-independent-researcher-what-are-the-biggest?commentId=PeP9LojYxxoWGnJQL)
<p><img src="/gossip/computer-setup.jpg" alt="image of my computer setup" class="img-medium-center"> </p>
PS: I am also on [Twitter](https://twitter.com/NunoSempere).

View File

@ -1,8 +1,3 @@
### Maths
- [Why is the integral which defines the logarithm the inverse of the exponential?](https://nunosempere.github.io/maths-prog/logarithms.pdf)
- [Letter from O. Teichmüller to Landau](https://nunosempere.github.io/maths-prog/teichmuller.html)
- [Mathematicians under the Nazis](https://nunosempere.github.io/projects/mathematicians-under-the-nazis.html)
### Media mentions
@ -18,6 +13,19 @@ Mention in a [large Spanish newspaper](https://elpais.com/tecnologia/2022-03-24/
Mention in [CNN](https://edition.cnn.com/2022/06/28/opinions/nuclear-war-likelihood-probability-russia-us-scoblic-mandel/index.html) (search for "highly regarded forecasters").
[Worried About Nuclear War? Consider the Micromorts](https://www.wired.co.uk/article/micromorts-nuclear-war) on WIRED.
### Some maths stuff
- [Why is the integral which defines the logarithm the inverse of the exponential?](https://nunosempere.github.io/maths-prog/logarithms.pdf)
- [Letter from O. Teichmüller to Landau](https://nunosempere.github.io/maths-prog/teichmuller.html)
- [Mathematicians under the Nazis](https://nunosempere.github.io/projects/mathematicians-under-the-nazis.html)
### Standing offers
- [I will bet on your success on Manifold Markets](https://nunosempere.com/blog/2022/07/05/i-will-bet-on-your-success-or-failure/)
- [Cancellation insurance](https://nunosempere.com/blog/2022/07/04/cancellation-insurance/)
### Truly random
- [Unfair chess](https://nunosempere.github.io/misc/unfairchess.html)

View File

@ -1,11 +1,11 @@
Most of my research can be found [on the EA Forum](https://forum.effectivealtruism.org/users/nunosempere). For forecasting related research, see [forecasting](/forecasting).
The categorization for this page is a work in progress.
## Current research
Besides forecasting (of probabilities), a major thread in my research is estimation (of values). Pieces related to this topic are:
- [An experiment eliciting relative estimates for Open Philanthropys 2018 AI safety grants](https://forum.effectivealtruism.org/posts/EPhDMkovGquHtFq3h/an-experiment-eliciting-relative-estimates-for-open)
- [Valuing research works by eliciting comparisons from EA researchers](https://forum.effectivealtruism.org/posts/hrdxf5qdKmCZNWTvs/valuing-research-works-by-eliciting-comparisons-from-ea)
- [A Critical Review of Open Philanthropys Bet On Criminal Justice Reform](https://forum.effectivealtruism.org/posts/h2N9qEbvQ6RHABcae/a-critical-review-of-open-philanthropy-s-bet-on-criminal)
- [Five steps for quantifying speculative interventions](https://forum.effectivealtruism.org/posts/3hH9NRqzGam65mgPG/five-steps-for-quantifying-speculative-interventions)
- [External Evaluation of the EA Wiki](https://forum.effectivealtruism.org/posts/kTLR23dFRB5pJryvZ/external-evaluation-of-the-ea-wiki)

View File

@ -1,6 +1,6 @@
Most of my software projects can be seen in [my github](https://github.com/NunoSempere/), or on the github of the [Quantified Uncertainty Research Institute](https://github.com/QURIresearch). In recent times, I've been working on [Metaforecast](https://metaforecast.org/), a forecast aggregator, and on [Squiggle](https://www.squiggle-language.com/), a small programming language for estimation.
I'm generally excited about linux development, privacy preserving tools, open source projects, and more generally, software which gives power to the user.
I'm generally excited about Linux development, privacy preserving tools, open source projects, and more generally, software which gives power to the user.
Some miscellaneous programming projects:
@ -8,3 +8,11 @@ Some miscellaneous programming projects:
- [Labeling](https://github.com/NunoSempere/labeling): An R package which I maintain. It's used in ggplot2, through the scales package, and thus has 500k+ downloads a month.
- [Predict, resolve and tally](https://github.com/NunoSempere/PredictResolveTally): A small bash utility for making predictions.
- [Q](https://blogdelecturadenuno.blogspot.com/2020/12/q-un-programa-para-escribir-y-analizar-poemas-y-poesia.html): A program for analyzing Spanish poetry.
- [Rosebud](https://github.com/NunoSempere/rose-browser), my [personal fork](https://nunosempere.com/blog/2022/12/20/hacking-on-rose/) of [rose](https://github.com/mini-rose/rose), which is a simple browser written in C. I've been using this as my mainline browser for a bit now, and enjoy the simplicity.
- [Simple Squiggle](https://github.com/quantified-uncertainty/simple-squiggle), a restricted subset of Squiggle syntax useful for multiplying and dividing lognormal distributions analytically.
- [Time to BOTEC](https://github.com/NunoSempere/time-to-botec): doing simple Fermi estimation in various different programming languages, so far C, R, python, javascript and squiggle.
- [Nuño's stupid node version manager](https://github.com/NunoSempere/nsnvm): Because nvm noticeably slowed down bash startup time, and 20 lines of bash can do the job.
- [Werc tweaks](https://github.com/NunoSempere/werc-1.5.0-tweaks). I like the idea behind [werc](https://werc.cat-v.org/), and I've tweaked it a bit when hosting this website
- [German pronoun](https://github.com/NunoSempere/german_pronoun), a small bash script to get the correct gender for German nouns
- [shapleyvalue.com](https://github.com/NunoSempere/shapleyvalue.com)
- Several [turing machines](https://git.nunosempere.com/personal/Turing_Machine), the last of which finds the nth prime.