I'm currently doing [private consulting](https://nunosempere.com/consulting/), and writing up my disillusionment with EA:
- [Some melancholy about the value of my work depending on decisions by others beyond my control](https://nunosempere.com/blog/2023/07/13/melancholy/)
- [Why are we not harder, better, faster, stronger?](https://nunosempere.com/blog/2023/07/19/better-harder-faster-stronger/)
- [Brief thoughts on CEA’s stewardship of the EA Forum](https://nunosempere.com/blog/2023/10/15/ea-forum-stewardship/)
- [Hurdles of using forecasting as a tool for making sense of AI progress](https://nunosempere.com/blog/2023/11/07/hurdles-forecasting-ai/)
## Past projects
_Estimation of values_
I spent a few years of my life grappling with the fact that EA (effective altruism) is nominally about doing the most good, yet lacks good tools to identify and prioritize across possible interventions. Eventually I gave up when I got it through my thick head that, despite my earlier hopes, there wasn't much demand for the real version of this, as opposed to the fake version of pretending to evaluate stuff and pretending to be "impact oriented". Still, I think it's an interesting body of research.
- [Five steps for quantifying speculative interventions](https://forum.effectivealtruism.org/posts/3hH9NRqzGam65mgPG/five-steps-for-quantifying-speculative-interventions)
- [A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform](https://forum.effectivealtruism.org/posts/h2N9qEbvQ6RHABcae/a-critical-review-of-open-philanthropy-s-bet-on-criminal)
- [An experiment eliciting relative estimates for Open Philanthropy’s 2018 AI safety grants](https://forum.effectivealtruism.org/posts/EPhDMkovGquHtFq3h/an-experiment-eliciting-relative-estimates-for-open)
- [A flaw in a simple version of worldview diversification](https://nunosempere.com/blog/2023/04/25/worldview-diversification/)
- [Shallow evaluations of longtermist organizations](https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations)
- [2018-2019 Long Term Future Fund Grantees: How did they do?](https://forum.effectivealtruism.org/posts/Ps8ecFPBzSrkLC6ip/2018-2019-long-term-future-fund-grantees-how-did-they-do)
- [Find a beta distribution that fits your desired confidence interval](https://nunosempere.com/blog/2023/03/15/fit-beta/)
- [Valuing research works by eliciting comparisons from EA researchers](https://forum.effectivealtruism.org/posts/hrdxf5qdKmCZNWTvs/valuing-research-works-by-eliciting-comparisons-from-ea)
- [A Bayesian Adjustment to Rethink Priorities' Welfare Range Estimates](https://nunosempere.com/blog/2023/02/19/bayesian-adjustment-to-rethink-priorities-welfare-range-estimates/)
- [Relative Impact of the First 10 EA Forum Prize Winners](https://forum.effectivealtruism.org/posts/pqphZhx2nJocGCpwc/relative-impact-of-the-first-10-ea-forum-prize-winners)
- [An estimate of the value of Metaculus questions](https://forum.effectivealtruism.org/posts/zyfeDfqRyWhamwTiL/an-estimate-of-the-value-of-metaculus-questions)
- [Pathways to impact for forecasting and evaluation](https://forum.effectivealtruism.org/posts/oXrTQpZyXkEbTBfB7/pathways-to-impact-for-forecasting-and-evaluation)
- The [NegativeNuno](https://forum.effectivealtruism.org/users/negativenuno) EA Forum account covers negative criticism that I'm uncertain about.
- A past piece on this topic is [Frank Feedback Given To Very Junior Researchers](https://forum.effectivealtruism.org/posts/7utb4Fc9aPvM6SAEo/frank-feedback-given-to-very-junior-researchers)
- [A Bayesian Adjustment to Rethink Priorities' Welfare Range Estimates](https://nunosempere.com/blog/2023/02/19/bayesian-adjustment-to-rethink-priorities-welfare-range-estimates/)
- [A flaw in a simple version of worldview diversification](https://nunosempere.com/blog/2023/04/25/worldview-diversification/)
- [Review of Epoch’s Scaling transformative autoregressive models](https://nunosempere.com/blog/2023/04/28/expert-review-epoch-direct-approach/)
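The beta-fitting post linked above can be illustrated with a small sketch: given a desired 90% confidence interval, search for the (α, β) pair whose 5th and 95th percentiles land on it. This is my own stdlib-only illustration, not the post's code: the grid bounds, the step size, and the trapezoidal integration of the pdf are all arbitrary choices for the sketch.

```python
import math

def beta_cdf(x, a, b, n=200):
    """CDF of Beta(a, b), via trapezoidal integration of the pdf (assumes a, b >= 1)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    h = x / n
    def pdf(t):
        return norm * t ** (a - 1) * (1.0 - t) ** (b - 1)
    total = 0.5 * (pdf(0.0) + pdf(x))
    for i in range(1, n):
        total += pdf(i * h)
    return total * h

def fit_beta(lo, hi, p_lo=0.05, p_hi=0.95):
    """Grid-search (a, b) so that the p_lo and p_hi quantiles land near lo and hi."""
    grid = [1.0 + 0.5 * i for i in range(39)]  # a, b from 1.0 to 20.0
    def err(ab):
        a, b = ab
        return (beta_cdf(lo, a, b) - p_lo) ** 2 + (beta_cdf(hi, a, b) - p_hi) ** 2
    return min(((a, b) for a in grid for b in grid), key=err)

# A symmetric interval around 0.5, so the fitted a and b come out roughly equal.
a, b = fit_beta(0.2, 0.8)
```

A finer grid, or a proper optimizer plus an exact incomplete-beta function, would sharpen the fit; the point is only that two quantiles pin down the two parameters.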
_Shapley values_

If you care about doing good together with others, you should care about how to coordinate on who does which projects. Shapley values solve this coordination problem, as does adjusting counterfactual values to behave more like Shapley values. I named my consultancy after this concept.
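A minimal sketch of the concept, averaging each player's marginal contribution over all join orders. The characteristic function here is hypothetical: two funders and one charity, where value only materializes once the charity plus at least one funder are on board.

```python
from itertools import permutations

def shapley_values(players, v):
    """Shapley value of each player: average marginal contribution over all orderings."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += v(with_p) - v(coalition)
            coalition = with_p
    return {p: t / len(orders) for p, t in totals.items()}

def v(coalition):
    """Toy characteristic function: 100 units of value per participating funder,
    but only if the charity is also in the coalition."""
    funders = len(coalition & {"funder_a", "funder_b"})
    return 100.0 * funders if "charity" in coalition else 0.0

sv = shapley_values(["funder_a", "funder_b", "charity"], v)
```

In this toy game, naive counterfactual impacts double-count: each funder's counterfactual value is 100 and the charity's is 200, summing to 400 when only 200 units of value exist. The Shapley values (50, 50, 100) sum exactly to the total, which is why they are better suited for coordinating on who does what.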
You can make progress on gnarly economics questions by [throwing](https://github.com/NunoSempere/ReverseShooting/tree/master) compute [at them](https://github.com/NunoSempere/LaborCapitalAndTheOptimalGrowthOfSocialMovements/tree/master). Unfortunately, reality is complicated enough that these models won't capture all the assumptions you care about, and so they might not be all that informative in real life.
- [A Model of Patient Spending and Movement Building](https://forum.effectivealtruism.org/posts/FXPaccMDPaEZNyyre/a-model-of-patient-spending-and-movement-building)
- [Labor, Capital, and the Optimal Growth of Social Movements](https://nunosempere.github.io/ea/MovementBuildingForUtilityMaximizers.pdf)
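As a toy illustration of that brute-force spirit (not the models in the linked repos; every parameter value here is made up): suppose a movement splits its members each period between direct work and recruitment, and we grid-search for the constant recruitment share that maximizes cumulative direct work.

```python
def total_direct_work(f, members=100.0, growth_rate=0.5, periods=20):
    """Each period, a fraction f of members recruits (growing the movement by
    growth_rate * f) while the rest does direct work; return cumulative direct work.
    All parameter values are hypothetical."""
    total = 0.0
    for _ in range(periods):
        total += (1.0 - f) * members
        members *= 1.0 + growth_rate * f
    return total

# Brute-force over the policy space: try every f in {0.000, 0.001, ..., 1.000}.
best_f = max((i / 1000 for i in range(1001)), key=total_direct_work)
```

The interior optimum (rather than f = 0 or f = 1) is the kind of qualitative result such models produce; the linked papers solve richer versions with time-varying policies and discounting, where the same caveat about unmodeled assumptions applies with more force.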
- [My highly personal skepticism braindump on existential risk from artificial intelligence](https://nunosempere.com/blog/2023/01/23/my-highly-personal-skepticism-braindump-on-existential-risk/)
- [A concern about the "evolutionary anchor" of Ajeya Cotra's report](https://nunosempere.com/blog/2022/08/10/evolutionary-anchor/)
- [There will always be a Voigt-Kampff test](https://nunosempere.com/blog/2023/01/21/there-will-always-be-a-voigt-kampff-test/)
- [A basic argument for AI risk](https://nunosempere.com/blog/2022/12/23/ai-risk-rohin-shah/)
- [Can GPT-3 produce new ideas? Partially automating Robin Hanson and others](https://nunosempere.com/blog/2023/01/11/can-gpt-produce-ideas/)
- [Review of Epoch's *Scaling transformative autoregressive models*](https://nunosempere.com/blog/2023/04/28/expert-review-epoch-direct-approach/)
- [Straightforwardly eliciting probabilities from GPT-3](https://nunosempere.com/blog/2023/02/09/straightforwardly-eliciting-probabilities-from-gpt-3/)
- [Military Global Information Dominance Experiments](https://www.lesswrong.com/posts/vDvKWdCCNo9moNcMr/us-military-global-information-dominance-experiments)
- [AI race considerations in a report by the U.S. House Committee on Armed Services](https://www.lesswrong.com/posts/87aqBTkhTgfzhu5po/ai-race-considerations-in-a-report-by-the-u-s-house)
- [Why do social movements fail: Two concrete examples](https://forum.effectivealtruism.org/posts/7Pxx7kSQejX2MM2tE/why-do-social-movements-fail-two-concrete-examples)
- [Why did the Spanish Enlightenment movement fail? (1750-1850)](https://nunosempere.github.io/rat/spanishenlightenment)
- [Why did the General Semantics Movement Fail?](https://nunosempere.github.io/rat/general-semantics)
_Global health and development_, and as part of that, _survey-making_
- [A review of two books on survey-making](https://forum.effectivealtruism.org/posts/DCcciuLxRveSkBng2/a-review-of-two-books-on-survey-making)
- [A glowing review of two free online MIT Global Poverty courses](https://forum.effectivealtruism.org/posts/S3vAPRp2XQ9BdDbPz/a-glowing-review-of-two-free-online-mit-global-poverty)
- [EA Mental Health Survey: Results and Analysis](https://forum.effectivealtruism.org/posts/FheKNFgPqEsN8Nxuv/ea-mental-health-survey-results-and-analysis) (and a more recent [update](https://forum.effectivealtruism.org/posts/GWBsDeQTjFM8YXtrv/2021-ea-mental-health-survey-results?commentId=XQSiuNuiti9BLpmrR#comments=))
I also have some minor pieces that I still want to index:
- [A quantification of a miliNazi](https://nunosempere.github.io/misc/miliNazis)
- [Reasons why upvotes on the EA forum and LW don't correlate that well with impact](https://forum.effectivealtruism.org/posts/GseREh8MEEuLCZayf/nunosempere-s-shortform?commentId=kLuhtmQRZBJpcaHhH)
- [Prizes in the EA Forum and LW](https://forum.effectivealtruism.org/posts/GseREh8MEEuLCZayf/nunosempere-s-shortform?commentId=WPStS4qhJS7Mz6KCA)
- [Thoughts on EA literature](https://forum.effectivealtruism.org/posts/Bc8J5P938BmzBuL9Y/when-can-writing-fiction-change-the-world?commentId=RnEpvpozD5tEEsM9b)