Notes on worldview diversification

The main flaw, clearly stated

Deducing bounds for relative values from revealed preferences

Suppose that you rank grants in different cause areas by their ex-ante value. The areas could be global health and development, animal welfare, longtermism, etc., and their values could be given in QALYs, sentience-adjusted QALYs, or expected reduction in existential risk.

For simplicity, let us just pick the case where there are two cause areas:

More undiluted shades represent more valuable grants (e.g., larger reductions per dollar of human suffering, animal suffering, or existential risk), and lighter shades represent less valuable grants. Due to diminishing marginal returns, I've drawn the most valuable grants as smaller, though this doesn't particularly matter.

Now, we can augment the picture by also considering the marginal grants which didn't get funded.

In particular, imagine that the marginal grant which didn't get funded for cause #1 has the same size as the marginal grant that did get funded for cause #2 (this doesn't affect the thrust of the argument; it just makes it more apparent):

Now, from this, we can deduce some bounds on relative values:

In words rather than in shades of colour, this would be:

Or, dividing by L1 and L2 (the dollar sizes of the marginal grants),

In colours, this would correspond to all four squares having the same size:

Giving some values, this could be:

So, simplifying a bit, we could deduce that 6 reds > 10 greens > 2 reds.
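In other words (this is my paraphrase of the deduction, not wording from the figures): the last funded grant in each area must be at least as valuable, per dollar, as the best unfunded grant in the other area; otherwise, shifting that marginal funding across areas would have been better by the funder's own lights. That is what sandwiches the relative value. Dividing 6 reds > 10 greens > 2 reds through by ten, one green is implicitly valued at somewhere between 0.2 and 0.6 reds.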

But now there comes a new year

The above was for one year. Now comes another year, with its own set of grants, but we keep the amount we allocate to each area constant.

It's been a less promising year for green and a more promising year for red, so some of the stuff that wasn't funded last year for green is funded now, and some of the stuff that was funded last year for red isn't funded now:

Now we can do the same comparisons as last time:

And when we compare them against the previous year,

we notice that there is a contradiction.
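To spell the contradiction out with illustrative numbers (the second-year figures are assumptions of mine, not from the figures above): the first year's comparisons implied that one green is worth between 0.2 and 0.6 reds. In the second year, green's fixed budget reaches down to worse grants while better red grants go unfunded; if, say, the marginal funded green grant now delivers 5 greens per dollar while the marginal unfunded red grant delivers 5 reds per dollar, the same revealed-preference reasoning implies that one green is worth more than one red. Both conclusions cannot hold, even though the relative value of green versus red is a question of moral weights and should not change with one year's crop of grants.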

Why is the above a problem

The above is a problem not only because there is an inconsistency, but because there is a Pareto improvement: transfer funding from cause area #2 to cause area #1 in the first year, and vice versa in the second year, and you will get both more green and more red. It is also an inelegant state of affairs to be in, which is a strong hint that more Pareto improvements like the above exist.
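As a toy model of that Pareto improvement (a minimal sketch; every number below is a made-up assumption, and marginal values are treated as constant for simplicity), consider two years in which green has the relatively better marginal opportunities in year 1 and red in year 2, as in the story above:

```python
# Value produced per dollar at the margin, by year and cause area, in each area's
# own units (red units for red, green units for green). Illustrative numbers only.
MARGINAL_VALUE = {
    ("year 1", "red"): 2.0,
    ("year 1", "green"): 5.0,
    ("year 2", "red"): 6.0,
    ("year 2", "green"): 1.0,
}

BASE_BUDGET = 100.0  # dollars per cause area per year under worldview diversification

def totals(shift: float) -> tuple[float, float]:
    """Total red and green units if we move `shift` dollars from red to green in
    year 1 and the same amount from green back to red in year 2. Each area's
    two-year budget stays constant, as does each year's total spending."""
    red = (BASE_BUDGET - shift) * MARGINAL_VALUE[("year 1", "red")] \
        + (BASE_BUDGET + shift) * MARGINAL_VALUE[("year 2", "red")]
    green = (BASE_BUDGET + shift) * MARGINAL_VALUE[("year 1", "green")] \
          + (BASE_BUDGET - shift) * MARGINAL_VALUE[("year 2", "green")]
    return red, green

print(totals(0))   # status quo: (800.0, 600.0)
print(totals(10))  # after the swap: (840.0, 640.0) -- more red *and* more green
```

Shifting $10 toward green in year 1 and back toward red in year 2 leaves each area's two-year total unchanged, yet produces strictly more red units and strictly more green units, whatever exchange rate one uses between them.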

[to do: review of when this happened in OP's history]

With this in mind, we can review some alternatives.

Review of alternatives

Keep a "moral parliament" approach, but allow for trades in funding.

Worldview diversification might stem from a moral-parliament-style set of values, where one's values aren't best modelled as a unitary agent, but rather as a parliament of diverse agents. And yet, the Pareto improvement argument still binds. A solution might be to start with a moral parliament, but allow trades in funding between different constituents of the parliament. More generally, one might imagine that such a parliament could choose to become a unitary agent and adopt a fixed, prenegotiated exchange rate between red and green.
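As a minimal sketch of what "becoming a unitary agent" could look like (the exchange rate and the helper name below are my own illustrative assumptions, not anything from the post):

```python
# A parliament that has agreed on a fixed exchange rate between its constituents'
# units can evaluate any grant on a single scale. The rate here is a made-up
# example, e.g. a value negotiated somewhere inside the bounds deduced earlier.
REDS_PER_GREEN = 0.4

def value_in_reds(reds: float, greens: float) -> float:
    """Collapse a grant's impact onto a single red-denominated scale."""
    return reds + REDS_PER_GREEN * greens

# Grants from either cause area (or any mix) can now be ranked directly, so the
# cross-year funding swaps above become ordinary expected-value comparisons
# rather than negotiated trades.
print(value_in_reds(reds=0, greens=10))  # 4.0 red-equivalents
print(value_in_reds(reds=3, greens=0))   # 3.0 red-equivalents
```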

Calculate and equalize relative values

Alternatively, worldview diversification can be understood as an attempt to approximate expected value given a limited ability to estimate relative values. If so, then the answer might be to notice that worldview diversification is a fairly imperfect approximation of any kind of utilitarian/consequentialist expected value maximization, and to try to approximate that ideal more closely. This would involve estimating the relative values of projects in different areas, and attempting to equalize marginal values across cause areas and across years.
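A minimal sketch of what equalizing marginal values could look like, with hypothetical data structures and names (an illustration of the general idea, not OP's actual process): once every candidate grant has an estimated value on one common scale, funding in descending order of value per dollar, ignoring cause-area buckets, automatically equalizes the value per dollar of the marginal grants across areas.

```python
from dataclasses import dataclass

@dataclass
class Grant:
    area: str     # e.g. "global health", "animal welfare", "longtermism"
    cost: float   # dollars
    value: float  # estimated relative value on one common scale (e.g. red-equivalents)

def allocate(grants: list[Grant], budget: float) -> list[Grant]:
    """Fund grants in descending order of estimated value per dollar until the total
    budget runs out, ignoring cause-area buckets (and, for simplicity, the lumpiness
    of individual grants). At the resulting allocation, the marginal grants in every
    area have roughly equal value per dollar, which is exactly the condition that
    rules out the Pareto improvements described above."""
    funded = []
    for grant in sorted(grants, key=lambda g: g.value / g.cost, reverse=True):
        if grant.cost <= budget:
            funded.append(grant)
            budget -= grant.cost
    return funded
```

Applied to each year's crop of grants, the same procedure also equalizes marginal values across years, since per-area amounts are no longer fixed in advance.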

[To do: more options]

Challenges

Throughout, we have assumed that we can estimate the relative values of grants in different cause areas.

The problem is that this is not currently possible. My impression is that estimation:

My recommendation here would be to invest in relative value estimation across the Quantified Uncertainty Research Institute (the organization I work at), possibly Rethink Priorities if they have capacity, academia, etc.

One problem here is that, after this estimation work is investigated and implemented, the efficiency gains might not be worth the money spent on estimation. My sense is that this would not be the case, because OP is a large foundation with billions of dollars and this work would cost < $10M. But it is a live possibility.
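As a rough, purely illustrative calculation (the specific figures are assumptions of mine, not OP's): if better relative value estimates improved the allocation of $1B of grantmaking per year by even 1%, that would be worth on the order of $10M per year, already comparable to a one-off < $10M investment in estimation; across several years and a multi-billion-dollar endowment, the gains would be correspondingly larger.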