typos fixed.

This commit is contained in:
Nuno Sempere 2023-10-22 13:52:23 +00:00
parent 026501bb14
commit 824cd87850


@ -56,7 +56,7 @@ Possible next steps:
### 1.3. A taboo against strong optimizers
There is a recurring thread in which extraordinary people are described as alien and monstrous. The foremost Spanish playwright was described as a ["monster of nature"](https://en.wikipedia.org/wiki/Lope_de_Vega). The genius Hungarian mathematicians of the 20th century are referred to as [the Martians](<https://en.wikipedia.org/wiki/The_Martians_(scientists)>). Super-geniuses in fiction are often cast as super-villains. We also observe a bunch of social mechanisms that ensure that humans remain mostly mediocre: these have been satirized under the [Law of Jante](https://en.wikipedia.org/wiki/Law_of_Jante). Within EA, Owen Cotton-Barratt (since disgraced) and Holden Karnofsky have been writing about [the](https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous) [pitfalls](https://forum.effectivealtruism.org/posts/5o3vttALksQQQiqkv/consequentialists-in-society-should-self-modify-to-have-side) of [strong](https://forum.effectivealtruism.org/posts/yJBhuCjqvJyd7kW7a/perils-of-optimizing-in-social-contexts) [optimization](https://forum.effectivealtruism.org/posts/cDv9mP5Yoz4k4fSbW/don-t-over-optimize-things).
My sense is that there is something about intensity, single-mindedness, agency and power that people at times find scary and at times deeply attractive in other humans. What they don't find as scary is a boy scout in tights with super-strength, and so we get a Superman who doesn't topple suboptimal nations and instead spends half his time live-action role-playing as a journalist. Oh, the waste.
@ -109,7 +109,7 @@ Here are a few dynamics I've noticed around criticism, ordered as bullet points:
8. Overall, this results in less criticism being produced than would be socially optimal.
9. A solution to this is [Crocker's Rules](http://sl4.org/crocker.html): Publicizing one's willingness to receive more flawed criticism.
10. At an institutional level, Crocker's rules would have to be adopted by those ultimately responsible for a given thing.
11. Many times, there isn't someone who is ultimately responsible for things, though sometimes there is a CEO or a board of directors.
12. When Crocker's rules are adopted, malicious, or merely status-seeking, actors could exploit them to tarnish reputations or to quickly raise their status at the expense of others.
13. In practice, my sense is that the balance of things still points towards Crocker's rules being beneficial.
14. While people who care should adopt Crocker's rules, this isn't enough to deal with all the bullshit, and so more steps are needed.
@ -137,7 +137,7 @@ And here are a few actions that I think are unavailable to Open Philanthropy bec
- The Overton Window: Peter Thiel supported Trump in the Republican primary, and thereafter got some amount of influence early in the Trump administration.
- Responsibility and loyalty towards their own staff: This cuts off the option to have a Bridgewater-style churn, where you rate people intensely and then fire the bottom ¿10%? of the population every year.
- Responsibility and loyalty towards past grantees: This increases the cost of entering a new area, and thus cuts off the possibility of exploring many areas at once with fewer strings attached.
- The "optimization is dangerous" constraint: Personally I would love to see investment in advanced forecasting and evaluation systems that would seek to equalize the values of marginal grants across cause areas. But this doesn't jibe with their "wordview diversification" approach, and so it isn't happening, or is happening slowly rather than having happened fast already. - The "optimization is dangerous" constraint: Personally I would love to see investment in advanced forecasting and evaluation systems that would seek to equalize the values of marginal grants across cause areas. But this doesn't jibe with their "worldview diversification" approach, and so it isn't happening, or is happening slowly rather than having happened fast already.
- Bounded trust: Open Philanthropy isn't willing to have a regranting system, like that of the FTX Future Fund. Their forecasting grantmaking in the past has also been sluggish and improvable.
- Explainability: Hard to give an example here, since I don't really have that much insight into their inner workings.
- Do not look like assholes: This restricts Open Philanthropy's ability to call out people on bullshit, offer large monetary bets, or generally disagree with the Democratic Party.
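
As a toy illustration of the "equalize the values of marginal grants across cause areas" idea above: nothing here reflects how Open Philanthropy actually allocates money, and the cause areas and diminishing-returns curves are made up for the example. A minimal sketch might greedily give each chunk of a budget to whichever area currently offers the highest marginal value per dollar, which drives marginal values toward equality:

```python
import math

# Hypothetical cause areas with made-up diminishing-returns value curves:
# value(spend) = scale * log(1 + spend / saturation).
CAUSE_AREAS = {
    "global_health": {"scale": 10.0, "saturation": 50.0},
    "animal_welfare": {"scale": 6.0, "saturation": 20.0},
    "x_risk": {"scale": 8.0, "saturation": 80.0},
}


def marginal_value(area: dict, spent: float, chunk: float = 1.0) -> float:
    """Extra value per dollar from spending one more `chunk` in this area."""
    value = lambda s: area["scale"] * math.log(1 + s / area["saturation"])
    return (value(spent + chunk) - value(spent)) / chunk


def allocate(budget: float, chunk: float = 1.0) -> dict:
    """Greedy allocation: each chunk goes to the area with the highest marginal value."""
    allocation = {name: 0.0 for name in CAUSE_AREAS}
    remaining = budget
    while remaining >= chunk:
        best = max(
            CAUSE_AREAS,
            key=lambda name: marginal_value(CAUSE_AREAS[name], allocation[name], chunk),
        )
        allocation[best] += chunk
        remaining -= chunk
    return allocation


if __name__ == "__main__":
    final = allocate(budget=200.0)
    for name, spent in final.items():
        mv = marginal_value(CAUSE_AREAS[name], spent)
        print(f"{name}: allocated {spent:.0f}, marginal value per dollar ~ {mv:.3f}")
```

With concave value curves, this greedy rule ends with roughly equal marginal values per dollar across areas, which is the equalization condition that better forecasting and evaluation systems would be trying to estimate in practice.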