# Why are we not harder, better, faster, stronger?
Cover for *Harder, Better, Faster, Stronger*, by [Daft Punk](https://www.youtube.com/watch?v=yydNF8tuVmU)
In [The American Empire has Alzheimer's](https://forecasting.substack.com/i/47744682/the-american-empire-has-alzheimers), we saw how the US had repeatedly been rebuffing forecasting-style feedback loops that could have prevented its military and policy failures. In [A Critical Review of Open Philanthropy's Bet On Criminal Justice Reform](https://forum.effectivealtruism.org/posts/h2N9qEbvQ6RHABcae/a-critical-review-of-open-philanthropy-s-bet-on-criminal), we saw how Open Philanthropy, a large foundation, spent an additional $100M on a cause they no longer thought was optimal. In *A Modest Proposal For Animal Charity Evaluators (ACE)* (unpublished), we saw how ACE had moved away from quantitative evaluations, reducing their ability to find out which animal charities were best. In [External Evaluation of the Effective Altruism Wiki](https://forum.effectivealtruism.org/posts/kTLR23dFRB5pJryvZ/external-evaluation-of-the-ea-wiki), we saw someone spending his time less than maximally ambitiously. In *My experience with a Potemkin Effective Altruism group* (unpublished), we saw how an otherwise well-intentioned group of decent people mostly just kept chugging along, producing a negligible impact on the world.

As for my own personal failures, I have just come out of spending the last couple of years making a bet on ambitious value estimation that [flopped](https://nunosempere.com/blog/2023/07/13/melancholy) in comparison to what it could have been. I could go on.

Those and all other failures could have been avoided if only those involved had just been harder, better, faster, stronger. I like the word "formidable" as a shorthand here. In this post, I offer some impressionistic, subpar, incomplete speculation about why my civilization, the people around me, and I are just generally not as formidable as we could maximally be. Why are we not more awesome? Why are we not attaining the heights that might be within our reach? These hypotheses are salient to me:

1. Today's cultural templates and default pipelines don't create formidable humans.
2. Other values, like niceness, welcomingness, humility, status, tranquility, stability, job security and comfort trade off against formidability.
3. In particular, becoming formidable requires keeping close to the truth, but convenient lies and self-deceptions are too useful as tools to attain other goals.
4. Being formidable at a group level might require exceptional leaders, competent organizational structures, or healthy community dynamics, which we don't have.

I'll present these possible root causes, and then suggest possible solutions for each. My preferred course of action would be to attack this bottleneck on all fronts.

## 1. Cultural templates and pipelines

### 1.1. Pipelines

I notice that the default life trajectories I've been exposed to—e.g., grow up, go to university, have a career, marry, have kids, get a dog—tend to produce harmless and inoffensive humans. This is, to some extent, understandable; maybe if we had more powerful humans, they would tend to enter wasteful conflicts among themselves, or oppress the less powerful people below them. But one could also imagine a society of extremely capable humans, all united in the pursuit of awesome goals: exploring space, eradicating all diseases, defeating an alien invasion, mastering nature, exploring the many shapes of consciousness, making suffering vanish, attaining immortality, achieving wild levels of wealth, etc.
To some extent, classic science fiction provides templates of lives lived more like the second paragraph than like the first. My personal understanding of the concept of privilege, in the social justice sense, has an element of richer kids having access to better strategies to imitate: learning English, creating one's own business, making friends who are productive in a capitalist society, and so on. But this still falls very much short of peak awesomeness. This suggests that it could be valuable to create and promote literature and media that sketch strategies and life trajectories which could be fruitfully imitated, and which are better than existing life templates. One such piece might be [A Message to Garcia](https://courses.csail.mit.edu/6.803/pdf/hubbard1899.pdf).

Possible next steps:

1. Analyze Effective Altruism, its goals and actions, through the lens of Girard's theory of mimesis.
2. Greatly increase the production of beneficial memes. Attempt to narrate a better humanity into existence.
3. Make bibliographies and life histories of great people more widely available.
4. Interview a few of the most formidable people you can get your hands on.
5. Produce fiction which captures and distills better life trajectories.
6. Items 1-5, but for trajectories that are realistically achievable by many people.

### 1.2. Role models

I can pinpoint the people who had the most influence on my professional career. They are a bit older than me, and I had good enough models of their work that I could at times steal some of their strategies. Some of the strategies I've copied are: explore the rationality community, get into EA, become an independent researcher, go to a summer fellowship at the Future of Humanity Institute, work on forecasting research, or incorporate programming into research. And so at a conference earlier last year, it turned out that I was not the only Spanish effective altruist forecasting researcher/programmer with a beard. But I'm more disagreeable, less social, and less into AI than both Jacob and Jaime, and don't work in the same sub-field, which means that I can't implement quite the same strategies. And so as time goes on, it becomes harder for me to find mentors or role models, i.e., people I know well enough and who are similar enough to me that I can mimic their strategies.

In general, it seems plausible that people's role models could have a large influence on their life trajectories. So perhaps we could make people better, harder, faster, stronger by bringing better role models to their attention, or by more directly matching them with better mentors. I think that substantial efforts could be fruitfully spent in this area.

Possible next steps:

- Implement better mechanisms to find mentors or role models.

### 1.3. A taboo against strong optimizers

There is a thread which sees extraordinary people described as alien and monstrous. The foremost Spanish playwright was described as a ["monster of nature"](https://en.wikipedia.org/wiki/Lope_de_Vega). The genius Hungarian mathematicians of the 20th century are referred to as [the Martians](https://en.wikipedia.org/wiki/The_Martians_%28scientists%29). Super-geniuses in fiction are often cast as super-villains. We also observe a bunch of social mechanisms that ensure that humans remain mostly mediocre; these have been satirized under the [Law of Jante](https://en.wikipedia.org/wiki/Law_of_Jante).
Within EA, Owen Cotton-Barratt (since disgraced) and Holden Karnofsky have been writing about [the](https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous) [pitfalls](https://forum.effectivealtruism.org/posts/5o3vttALksQQQiqkv/consequentialists-in-society-should-self-modify-to-have-side) of [strong](https://forum.effectivealtruism.org/posts/yJBhuCjqvJyd7kW7a/perils-of-optimizing-in-social-contexts) [optimization](https://forum.effectivealtruism.org/posts/cDv9mP5Yoz4k4fSbW/don-t-over-optimize-things). My sense is that there is something about intensity, single-mindedness, agency and power that people at times find scary and at times deeply attractive in other humans. What they don't find as scary is a boy scout in tights with super-strength, and so we get a Superman who doesn't topple suboptimal nations and instead spends half his time live-action role-playing as a journalist. Oh, the waste.

I think that this fear of powerful humans is more a symptom than a cause. Still, I would like to see more portrayals of powerful beings as being good, and as good at using strategies which are achievable and replicable. It's also possible that the fear of powerful humans was pretty much justified in the past, in which case I would instead want to see better ways to align powerful humans, or some thinking about whether powerful humans in our current environment do more harm than good in expectation. One story one could tell is that in previous centuries, powerful people mostly were so because they had power directly over their servants, slaves and serfs—but that in the current capitalist market economy, powerful humans mostly are so because they have produced great value. But looking at, for instance, Bill Gates, I actually don't know whether Microsoft has done more harm than good compared to what would otherwise have happened. There are other possible stories in this vein, e.g., maybe trying to become more formidable requires removing some type of load-bearing constraint, but then once that is removed, people fall into existential dread or into psychopathy. I mostly don't buy it, though.

Possible next steps:

- Figure out whether powerful humans do more good or more harm in our current environment.
- If powerful humans do more good than harm, work to change perceptions of power so that becoming greatly powerful seems more desirable, and work on becoming greatly more powerful ourselves.
- If they do more harm than good in general, figure out whether we in particular want to be more powerful, or more meek.

## 2. Value tradeoffs

There are many things humans can want besides being really formidable, like niceness, welcomingness, humility, status, tranquility, stability, job security or comfort. This might be a legitimate value difference. But sometimes humans do myopically go down the path of lower formidability in a way which will sabotage their future ambitions.

### 2.1. For want of an asshole, shit accumulated: Frank feedback is probably an underprovided public good

> Tell me lies
>
> Tell me sweet little lies
>
> Tell me lies, tell me, tell me lies
Fleetwood Mac, [Little Lies](https://dumb.nunosempere.com/Fleetwood-mac-little-lies-lyrics)
In [Frank Feedback to Junior Researchers](https://forum.effectivealtruism.org/posts/7utb4Fc9aPvM6SAEo/frank-feedback-given-to-very-junior-researchers#comments), I outlined a few basic mistakes that junior researchers tend to make. But my sense is that many times, there isn't someone to point out the obvious flaws, because the people involved just don't want to pay the cost of disagreeableness. And so mistakes take root, careers don't advance as far, castles are built in the sand, and disappointment is postponed another day. I think this is a pretty general pattern, where people are just pretty afraid of conflict and don't really know how to disagree peacefully. I don't really have an easy fix for this.

Something which annoys me here is that sometimes people understand this dynamic, make conflict more expensive, and act like fools demanding to be appeased. One example here might be Galileo's conflict with the Catholic Church about heliocentrism. A more recent example might be the Commodity Futures Trading Commission in the US delaying prediction markets by a few decades.

Here are a few dynamics I've noticed around criticism, ordered as bullet points:

1. When producing criticism, there is some probability that the criticism will be ignorant or misguided. You can reduce the error rate at the cost of greater effort.
2. Recipients of criticism often don't care about its content.
3. Recipients of criticism often perceive it as an attack.
4. If the target of criticism can choose what standard of criticism is acceptable, they will tend to choose very high standards.
5. Targets of criticism can insulate themselves against criticism by pointing out that its form or shape—as opposed to its content—is flawed.
6. Producing criticism which deals with the above is expensive.
7. In practice, producing criticism ends up being expensive.
8. People also fear appearing disagreeable.
9. In practice, "doing criticism well" has a strong component of acknowledging and appeasing power.
10. Overall, this results in a suboptimal amount of criticism being produced, in contrast with the socially optimal rate.
11. A solution to this is [Crocker's Rules](http://sl4.org/crocker.html): publicizing one's willingness to receive more flawed criticism.
12. At an institutional level, Crocker's Rules would have to be adopted by those ultimately responsible for a given thing.
13. Many times, there isn't someone who is ultimately responsible for things. Though sometimes there is a CEO or a board of directors.
14. When Crocker's Rules are adopted, malicious or merely status-seeking actors could exploit them to tarnish reputations or to quickly raise their status at the expense of others.
15. In practice, my sense is that the balance of things still points towards Crocker's Rules being beneficial.
16. While people who care should adopt Crocker's Rules, this isn't enough to deal with all the bullshit, and so more steps are needed.

### 2.2. The "hardcore optimizer" hypothesis

Here is something which I've been thinking about as the "hardcore optimizer hypothesis":

> An actor which is under no constraints is infinitely more powerful than one neutered by many restrictions.

Here are some constraints that I think Open Philanthropy (a large foundation which is the main funder of the Effective Altruism social movement) is operating under:

- Legality: They are choosing not to take actions which would be illegal according to US law.
- The Overton window: They are choosing for their actions to not be "beyond the pale" according to the 21st-century American lefty milieu.
- Responsibility and loyalty towards their own staff: They are choosing to keep their staff around from year to year, rather than aggressively restructuring.
- Responsibility and loyalty towards past grantees: Open Philanthropy will choose to exit a cause area gracefully, rather than leaving it all at once.
- The "optimization is dangerous" constraint: They are choosing to not go full steam on any one perspective, but rather to proceed cautiously and hedge their bets.
- Bounded trust: Open Philanthropy takes a long time to trust people, which means that they have limited staff, which has limited attention and is often busy.
- Explainability: Open Philanthropy employees lower down the totem pole probably can't take actions or recommend grants that their superiors can't understand.
- Do not look like assholes: Open Philanthropy generally wants to appear to be nice people.

And here are a few actions that I think are unavailable to Open Philanthropy because of the above constraints:

- The legal constraint: Winfried Stöcker [developed his own COVID vaccine and gave it to an estimated 20k people](https://www.irishtimes.com/news/world/europe/german-doctor-faces-charges-after-administering-thousands-of-self-made-vaccines-1.4742040?fbclid=IwAR2mGTMzHpDocl3UQmlOQmmMAUqe42lRItaE_MJkmaONWj3S9SIJ4C97Yf8).
- The Overton window: Peter Thiel supported Trump in the Republican primary, and thereafter got some amount of influence in the beginning of the Trump administration.
- Responsibility and loyalty towards their own staff: This cuts off the option of having a Bridgewater-style churn, where you rate people intensely and then fire the bottom ¿10%? of the population every year.
- Responsibility and loyalty towards past grantees: This increases the cost of entering a new area, and thus cuts off the possibility of exploring many areas at once with fewer strings attached.
- The "optimization is dangerous" constraint: Personally, I would love to see investment in advanced forecasting and evaluation systems that would seek to equalize the value of marginal grants across cause areas. But this doesn't jibe with their "worldview diversification" approach, and so it isn't happening, or is happening slowly rather than having happened fast already.
- Bounded trust: Open Philanthropy isn't willing to have a regranting system, like that of the FTX Future Fund. Their forecasting grantmaking in the past has also been sluggish and improvable.
- Explainability: Hard to give an example here, since I don't really have that much insight into their inner workings.
- Do not look like assholes: This restricts Open Philanthropy's ability to call out people on bullshit, offer large monetary bets, or generally disagree with the Democratic party.

And so by the time you are bound by half of those constraints, you might end up moving slowly and suboptimally. Perhaps this explains how come Open Philanthropy donated to Hypermind—a forecasting platform which I know to have a really terrible UX which didn't allow for wide enough distributions, and which used an [aggregation method that ignored probabilities below 5%](https://docs.google.com/document/d/1fRg7twB2RLAc-Ey8NUj5qFUJCg-dp3yb/edit)—for [predicting the state of AI in 2030](https://prod.hypermind.com/ngdp/en/showcase2/showcase.html?sc=AI2030). To see why that aggregation choice matters, consider the sketch below.
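Here is a minimal sketch, in Python, of how an aggregation rule with a probability floor loses information about rare events. The 5% cutoff is the only detail taken from the linked document; Hypermind's actual implementation may well differ, and I model "ignoring" sub-5% forecasts as clamping them up to 5%. The function names and forecast values are mine, chosen for illustration:

```python
import statistics

FLOOR = 0.05  # the 5% cutoff from the linked document; everything else here is illustrative

def plain_aggregate(probabilities: list[float]) -> float:
    """Average the forecasts as given."""
    return statistics.mean(probabilities)

def floored_aggregate(probabilities: list[float]) -> float:
    """Average the forecasts after clamping anything below the floor up to it."""
    return statistics.mean(max(p, FLOOR) for p in probabilities)

# Five hypothetical forecasters all think an event is very unlikely (~1%).
forecasts = [0.01, 0.01, 0.02, 0.005, 0.01]

print(plain_aggregate(forecasts))    # ~0.011: roughly a 1-in-90 chance
print(floored_aggregate(forecasts))  # 0.05: a 1-in-20 chance, ~5x too high
```

A floor like this makes the platform unable to distinguish a 1-in-100 event from a 1-in-20 one, which is exactly the region where forecasts of rare but important outcomes live.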
Instead, they could have invested in the early iteration of Manifold Markets, an innovative venture by former Google engineers with a UX infinitely superior to that of Hypermind. Hypermind isn't 60% as valuable as Manifold; it's maybe 0.1% to 2% as valuable. I think that's what happens when you operate under many constraints: you neuter your ability to shape the world.

## That's all I have for now

So cultural templates discourage us from being more formidable, and then value tradeoffs and self-imposed constraints diminish our ability to influence the world. My primary suggestion is to not trade formidability and truth-seeking for other values, like comfort and social harmony—or at least to make that tradeoff consciously and sparingly.

I haven't yet considered leaders. Leaders can inspire, coordinate and direct a movement. And yet sometimes we don't get the leaders we need, but have to work with those we have right now. It's not clear to me how to discuss the topic tactfully, without character-assassinating anyone. So I'm leaving the topic alone for now.

In the meantime, the hypotheses that I've covered don't seem exhaustive. So I'm really curious about readers' own thoughts. Comments are open.