savepoint.
parent 711f3698dc, commit 41f8e7d56a
@ -19,21 +19,22 @@ Suppose that you order the ex-ante values of grants in different cause areas. Th
For simplicity, let us just pick the case where there are two cause areas:
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/values-1.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/values-1.png" class='.img-medium-center' style="width: 50%;" >
More undiluted shades represent more valuable grants (e.g., larger reductions per dollar of human suffering, animal suffering or existential risk), and lighter shades represent less valuable grants. Due to diminishing marginal returns, I've drawn the most valuable grants as smaller, though this doesn't particularly matter.
Now, we can augment the picture by also considering the marginal grants which didn't get funded.
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/values-2.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/values-2.png" style="width: 70%;">
In particular, imagine that the marginal grant which didn't get funded for cause #1 is the same size as the marginal grant that did get funded for cause #2 (this doesn't affect the thrust of the argument, it just makes it more apparent):
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/values-3.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/values-3.png" style="width: 70%;">
Now, from this, we can deduce some bounds on relative values:
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-1.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-1.png" style="width: 70%;">
In words rather than in shades of colour, this would be:
@ -47,7 +48,7 @@ Or, dividing by L1 and L2,
In colors, this would correspond to all four squares having the same size:
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-2-black-border.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-2-black-border.png" class='.img-medium-center' style="width: 20%;" >
Giving some values, this could be:
@ -60,22 +61,25 @@ From this we could deduce that 6 reds > 10 greens > 2 reds, or that one green is
But the above was for one year. Now comes another year, with its own set of grants, while we keep the amount we allocate to each area constant.
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/year-1.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/year-1.png" style="width: 100%;">
It's been a less promising year for green and a more promising year for red. This means that some of the stuff that wasn't funded last year for green is funded now, and some of the stuff that was funded last year for red isn't funded now:
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/year-2.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/year-2.png" style="width: 100%;">
Now we can do the same comparisons as last time:
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-year-2.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-year-2.png" style="width: 20%;"><br>
And when we compare them against the previous year,
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-both-years.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-both-years.png" style="width: 40%;">
we notice that there is an inconsistency.
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-contradiction.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-contradiction.png" style="width: 50%;">
### Why is the above a problem
blog/2023/09/05/manifund-open-philanthropy/.src/application.md (new file, 103 lines)
@ -0,0 +1,103 @@
OpenPhil Grant Application
==========================
- Created: August 4, 2023 8:06 PM
- Status: submitted
## Proposal Summary (<20 words)
Manifund: longtermist grantmaker experimenting with regranting, impact certs and other funding mechanisms.
## Project Description (<750 words)
Manifund started in Jan 2023 by building a website for impact certs for the [ACX Mini-Grants round](https://manifund.org/rounds/acx-mini-grants); we then hosted impact certs for the [OpenPhil AI Worldviews contest](https://manifund.org/rounds/ai-worldviews). Since May, we’ve been focused on [a regranting program](https://manifund.org/rounds/regrants). We’d like to scale up our regranting program, and experiment with other funding mechanisms and initiatives.
Some things that distinguish us from other longtermist funders:
- We’re steeped in the tech startup mindset: we move quickly, ship often, automate our workflows, and talk to our users.
- We’re really into transparency: all projects, writeups, and grant amounts are posted publicly on our site. We think this improves trust in the funding ecosystem, helps grantees understand what funders are interested in, and allows newer grantmakers to develop a legible track record.
- We care deeply about grantee experience: we’ve spent a lot of time on the other side of the table applying for grants, and are familiar with common pain points: long/unclear timelines, lack of funder feedback, confusing processes.
- We decentralize where it makes sense: regranting and impact certs are both funding models that play to the strengths of networks of experts. Central grantmaker time has been a bottleneck on nurturing impactful projects; we hope to fix this.
Overall, we hope to help OpenPhil decentralize funding to other decisionmakers in a way that rewards transparent, accountable, timely and cost-effective grants.
### Scaling up regranting
Regranting is a system where individuals are given small discretionary budgets to make grants from. This allows regrantors to find projects in their personal and professional networks, seed new projects based on their interests, and commit to grantees with little friction. Our current regrantors are drawn from Anthropic, OpenAI, Rethink Priorities, ARC Evals, CAIS, FAR AI, SERI MATS, 1DaySooner and others; see all regrantors [here](https://manifund.org/rounds/regrants?tab=regrants), and a recent payout report [here](https://manifund.substack.com/p/what-were-funding-weeks-2-4).
With further funding, we’d like to:
- Onboard new regrantors: we currently have a waitlist of qualified regrantors and get about one new strong application per week. We think we could find many more promising candidates over the next year, to sponsor with budgets of $50k to $400k.
- Increase budgets of high-performing regrantors: this incentivizes regrantors to make good grants, delegates authority to people who have performed well in the past, and quantifies their track record.
- Hire another team member: our team is currently just 2 people, both wearing a lot of hats, and one of us (Austin) spends ~30% of his time on Manifold. We’re interested in finding a full-time in-house grantmaker for grant evaluation, regrantor assessment, and public comms. Ideally, this person would take on a cofounder-like role and weigh in on Manifund strategy. Later, we may want to hire another engineer or ops role.
### Running a larger test of impact certs
Impact certs are a new mechanism for funding nonprofits. A philanthropic donor announces a large prize; projects compete for the prize and raise funding by selling “impact certs” (shares of prize winnings) to self-interested investors. This allows donors to pay directly for impact, while investors take on risk for return.
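As a toy illustration of the mechanism (entirely made-up numbers, not Manifund's actual terms): a project sells part of its impact cert to investors up front, and those investors are paid out of whatever prize the project later wins.

```python
# Hypothetical impact cert round (illustrative numbers only).
cert_fraction_sold = 0.5   # share of the project's impact cert sold to investors up front
sale_price = 10_000        # what investors pay the project for that share
prize_won = 40_000         # what the donor's judges later award the project

investor_payout = cert_fraction_sold * prize_won                    # 20_000
investor_profit = investor_payout - sale_price                      # 10_000: risk borne, return earned
project_total = sale_price + (1 - cert_fraction_sold) * prize_won   # 30_000: up-front funding + retained share
print(investor_profit, project_total)
```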
In the spring, we ran [a trial with Scott Alexander](https://manifund.org/rounds/acx-mini-grants) with a $40k prize pool; this round will conclude this September. Scott is still considering whether to do Round 2 of ACX Grants through impact certs; if so, we’d host that this fall. We also tried an impact cert round with the [OpenPhil AI Worldviews Contest](https://manifund.org/rounds/ai-worldviews), with somewhat less total engagement.
Our most ambitious goal would be an AI Safety impact cert round with a yearly, large (>$1m) prize pool. With less funding, we might experiment with, e.g., an EA Art prize to test the waters.
### Other possible initiatives & funding experiments
- Setting up a Common App for longtermist funding. We’re coordinating with LTFF on this, and would love to include funders like OpenPhil, Lightspeed, SFF and independent donors. This could alleviate a key pain point for grantees (each app takes a long time to write!) and help funding orgs find apps in their area of interest.
- Start “EA peer bonuses”, inspired by the Google peer bonus program. We’d let anyone nominate community members who have done valuable work, and pay out small (~$100) prizes.
- Run a 1-week “instant grants” program, where we make a really short application (eg 3 sentences and a link) for grants of $1000 and get back to applicants within 24 hours.
- Paying feedback prizes to users & regrantors for making helpful comments on projects, both to inform funding decisions and to give feedback to grantees.
## Why we’re a good fit for this project (300 words)
We have the right background to pursue philanthropic experiments:
- Austin founded Manifold, which involves many of the same skills as making Manifund go well: designing a good product, managing people and setting culture, iterating and knowing when to switch directions. Also, Manifold ventures outside the boundaries of the EA community (eg having been cited by Paul Graham and the NYT podcast Hard Fork); Manifund is likewise interested in expanding beyond EA, especially among its donor base. Austin is in a good position to make this happen, as he’s spent most of his professional career in non-EA tech companies (Google) & startups (tech lead at Streamlit), has an impressive background by their lights, and shares many of their sensibilities.
- Rachel is early in her career, but has a strong understanding of EA ideas and community, having founded EA Tufts and worked on lots of EA events including EAGx Berkeley, Future Forum, Harvard and MIT AI safety retreats, and GCP workshops. She started web dev 8 months ago, but has gotten top-tier mentorship from Austin, and most importantly, built almost the entire Manifund site herself; people have been impressed by its design and usability.
## Approximate budget
We’re seeking $5 million in unrestricted funding from OpenPhil. Our intended breakdown:
- $3m: Regrantor budgets. Raise budgets of current regrantors every few months according to prior performance; onboard new regrantors.
- We currently have 15 regrantors with an average budget of $120k; we’d like to sponsor 25 regrantors with an average budget of $200k (see the quick arithmetic sketch after this list).
- Our existing regrantors already have a large wishlist of projects they think they could fund, e.g., above the current LTFF bar; we can provide examples on request.
- $1.5m: Impact certs and other funding experiments, as listed in project description.
- Our regranting program itself is an example of something we funded out of our discretionary experimental budget.
- $0.5m: General operational costs, including salaries for a team of 2-3 FTE, software, cloud costs, legal consultations, events we may want to run, etc.
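A rough back-of-the-envelope check of the regrantor figures above (my own arithmetic, not part of the application):

```python
# Rough arithmetic implied by the stated regrantor numbers.
current_total = 15 * 120_000                      # ~$1.8m across current regrantors
target_total = 25 * 200_000                       # ~$5.0m across the hoped-for roster
implied_increase = target_total - current_total   # ~$3.2m, close to the $3m line item above
print(current_total, target_total, implied_increase)
```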
In total, Manifund has raised ~$2.4m since inception from an anonymous donor ($1.5m), Future Fund ($0.5m) & SFF ($0.4m) and committed ~$0.8m of it. We intend to further fundraise up to a budget of $10m for the next year, seeking funding from groups such as SFF/Jaan Tallinn, Schmidt Ventures, YCombinator, and small to medium individual donors.
_Thanks to Joel, Gavriel, Marcus & Renan for feedback on this application._
## Appendix
### Cut
- This allows us to structure the site like a forum, with comments and votes. One vision of what we could be is “like the EA forum but oriented around projects”.
- We aim to model transparency ourselves, with our code, finances, and vast majority of meeting notes and internal docs visible to the public.
- Our general evaluation model is that regrantors screen for effectiveness, and Manifund screens for legality and safety, but in the case of projects with COIs or with a large portion of funding coming directly from donors, we should really be screening for effectiveness and would like to have someone on board who’s better suited to do that.
### Draft notes
- in the past did impact certs for ACX and OP AI Worldviews
- currently focused on regranting
- generally: would be good if EA funding were more diverse. Also faster, more transparent, possibly with better incentives.
- interested in scaling up regranting program:
- offer more budgets, and raise budgets according to past performance (S-process or something)
- looking for funding for regrantors from other sources, particularly interested in getting outside-of-EA funding, think we’re better suited to do that than e.g. LTFF because of Austin’s background and our branding, maybe OpenPhil could just cover ops.
- kind of want to hire another person, maybe someone with more grantmaking experience who can act like a reviewer and help us with strategy. Useful especially in cases where a regrantor wants to give to something with a COI or where a large portion of funds are coming from random users/donors instead of regrantors and we want to evaluate grants for real. Currently only ~1.75 people on our team.
- and doing more with impact certs: possibly will host Scott’s next ACX round, ultimately interested in something like yearly big prize for AI safety where projects are initially funded through impact certs, and we might do medium things on the way to test whether this is a good idea and refine our approach.
- other experiments we’re considering:
- CommonApp with LTFF, maybe include lightspeed or other funders
- generally work on schemes for better coordination between funders
- start EA peer bonus program [EA peer bonus](https://www.notion.so/EA-peer-bonus-2f268e716a5e4f81acb3e9f642f6842f?pvs=21)
- do ~week long “instant” grants program [Instant grants](https://www.notion.so/Instant-grants-bf88f0b2ecb142fd890c462fad115037?pvs=21)
- paying users/regrantors retroactively for making really helpful comments on projects
- some things we’re doing differently from other funders that we think are promising:
- being really transparent: some people warned in advance, and we worried, that this would be too limiting, but regrantors haven’t really complained about it so far. And we think it has a bunch of positive externalities: it shows people what types of projects grantmakers are interested in funding and why, generates hype/publicity for cool projects (e.g. cavities, shrimp welfare), allows regrantors to build a public track record, and generates trust.
- relatedly, generally using software: makes it easy to be transparent, also makes it easy to facilitate conversations between grantees and grantmakers and other community members. Makes things faster and smoother, allows us to set defaults. Looks different, might appeal to e.g. rich tech ppl more.
- general attitude with Austin’s background also may be different/advantageous: move fast, do lots of user interviews and generally focus a lot on user experience
- giving small budgets to somewhat less well-known people so they can build up a track record
- some things we’re doing less well than other funders:
- generally being really careful/optimizing really hard about where the money goes or something. We heavily outsource donation decisions to regrantors, and ultimately just screen for legality/non-harmfulness. An analogy we use a lot is to the FDA’s approval process: Manifund covers safety, regrantors cover efficacy. Currently there aren’t that many regrantors and just having them in a discord channel together facilitates lots of the good/necessary coordination, so not that worried about the unilateralist’s curse rn, but could be a problem later. We aren’t screening $50k regrantors that rigorously in advance: we take applications, do an interview, ask the community health team, talk about it…but ultimately we’re pretty down to take bets. This means we probably want to up budgets more carefully.
- some ops stuff. Don’t have a good way of sending money internationally.
blog/2023/09/05/manifund-open-philanthropy/index.md (new file, 26 lines)
@ -0,0 +1,26 @@
Quick thoughts on Manifund's application to Open Philanthropy
=============================================================
[Manifund](https://manifund.org/) is a new effort to improve, speed up and decentralize funding mechanisms in the broader Effective Altruism community, by some of the same people previously responsible for [Manifold](https://manifold.markets/home). Due to Manifund's policy of making a bunch of their internal documents public, you can see their application to Open Philanthropy [here](https://manifoldmarkets.notion.site/OpenPhil-Grant-Application-3c226068c3ae45eaaf4e6afd7d1763bc) (also a markdown backup [here](https://nunosempere.com/blog/2023/09/05/manifund-open-philanthropy/.src/application)).
Here is my perspective on this:
- They have given me a $50k regranting budget. It seems plausible that this colors my thinking.
- Manifold is highly technologically competent.
- [Effective Altruism Funds](https://funds.effectivealtruism.org/), which could be the closest point of comparison to Manifund, is not highly technologically competent. In particular, they have been historically tied to Salesforce, a den of mediocrity that slows speed, makes interacting with their systems annoying, and isn't that great across any one dimension.
- Previously, Manifold blew [Hypermind](https://predict.hypermind.com/hypermind/app.html), a previous play-money prediction market, completely out of the water. Try browsing markets, searching markets, making a prediction on Hypermind, and then try the same thing in Manifold.
- It seems very plausible to me that Manifund could do the same thing to CEA's Effective Altruism Funds: Create a product that is incomparably better by having a much higher technical and operational competence.
- One way to think about the cost and value of Manifund would be Δ(value of grant recipients) - Δ(costs of counterfactual funding method); see the sketch after this list.
- The cost is pretty high, because Austin's counterfactual use of his capable engineering labour is pretty valuable.
- Value is still to be determined. One way might be to compare the value of grants made in 2023 by Manifund, EA Funds, SFF, Open Philanthropy, etc., and see if there are any clear conclusions.
- Framing this as "improving EA Funds" would slow everything down and make it more mediocre, and would make Manifund less motivated by reducing their sense of ownership, so it doesn't make sense as a framework.
- Instead, it's worth keeping in mind that Manifund has the option to incorporate aspects of EA funds if it so chooses—like some grantmakers, questions to prospective grantees, public reports, etc.
- Manifund also has the option of identifying and then unblocking historical bottlenecks that EA funds has had, like slow response speed, not using grantmakers who are already extremely busy, etc.
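A minimal sketch of the Δ(value) - Δ(costs) framing above, with entirely made-up numbers; this is my own reading of the formula, not a calculation from the post:

```python
def net_value_of_manifund(value_via_manifund: float,
                          value_via_counterfactual: float,
                          manifund_costs: float,
                          counterfactual_costs: float) -> float:
    """Δ(value of grant recipients) - Δ(costs of the counterfactual funding method)."""
    return (value_via_manifund - value_via_counterfactual) - (manifund_costs - counterfactual_costs)

# E.g.: grants worth $500k more than the counterfactual allocation, at $300k of extra cost
# (mostly the opportunity cost of Austin's engineering labour) => net +$200k.
print(net_value_of_manifund(2_000_000, 1_500_000, 400_000, 100_000))
```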
A funny thing is that Manifund itself can't describe, and probably doesn't think of, its pathway to impact as: doing things much better than EA Funds by being absurdly more competent than them. It would look arrogant if they said it. But I can say it!
<p>
<section id='isso-thread'>
<noscript>javascript needs to be activated to view comments.</noscript>
</section>
</p>
@ -37,3 +37,4 @@
- [squiggle.c](https://nunosempere.com/blog/2023/08/01/squiggle.c)
- [Webpages I am making available to my corner of the internet](https://nunosempere.com/blog/2023/08/14/software-i-am-hosting)
- [Incorporate keeping track of accuracy into X (previously Twitter)](https://nunosempere.com/blog/2023/08/19/keep-track-of-accuracy-on-twitter)
- [Quick thoughts on Manifund's application to Open Philanthropy](https://nunosempere.com/blog/2023/09/05/manifund-open-philanthropy)
@ -56,17 +56,16 @@ If the above sounds interesting, I'm happy to be reached out to.
### Rates
I value getting hired for more hours, because each engagement has some overhead cost. Therefore, I am discounting buying a larger number of hours.
I value getting hired for more hours, because each engagement has some negotiation, preparation and administrative burden. Therefore, I am discounting buying a larger number of hours.
| # of hours | Cost | Example |
|------------|-------|------------------------------------------------------------------------------------------------|
| 1 hour | ~$250† | Talk to me for an hour about a project you want my input on, organize a forecasting workshop |
| 10 hours | ~$2k | Research that draws on my topics of expertise, where I have already thought about the topic, and just have to write it down. For example, [this Bayesian adjustment to Rethink Priorities](https://nunosempere.com/blog/2023/02/19/bayesian-adjustment-to-rethink-priorities-welfare-range-estimates/) |
| 100 hours | ~$15k | An [evaluation of an organization](https://forum.effectivealtruism.org/posts/kTLR23dFRB5pJryvZ/external-evaluation-of-the-ea-wiki), an early version of [metaforecast](https://metaforecast.org), two editions of the [forecasting newsletter](https://forecasting.substack.com/) |
| 1 hour (discounted) | $100 | You, an early career person, talk with me for an hour about a career decision you are about to make, about a project you want my input on, etc. |
| 2 hours | $500 \|\| 10% chance of $4k | You, a titan of industry, talk with me for an hour about a project you want my input on, before which I spend an hour thinking about it. Or, you have me organize a forecasting workshop for your underlings, etc. |
| 10 hours | $2k | Research that draws on my topics of expertise, where I have already thought about the topic, and just have to write it down. For example, [this Bayesian adjustment to Rethink Priorities](https://nunosempere.com/blog/2023/02/19/bayesian-adjustment-to-rethink-priorities-welfare-range-estimates/) |
| 100 hours | $15k | An [evaluation of an organization](https://forum.effectivealtruism.org/posts/kTLR23dFRB5pJryvZ/external-evaluation-of-the-ea-wiki), an early version of [metaforecast](https://metaforecast.org), two editions of the [forecasting newsletter](https://forecasting.substack.com/) |
| 1000 hours | reach out | Large research project, an ambitious report on a novel topic, the current iteration of [metaforecast](https://metaforecast.org) |
†: or a 10% chance of $2k. 1h just has too high of an administrative burden to be worth it alone.
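For what it's worth, here is the straightforward expected-value arithmetic for the chance-based options in the table and footnote above (my own calculation, not a claim about the reasoning behind the pricing):

```python
# Flat fee vs lottery-style option, from the rates above.
print(250, 0.10 * 2_000)   # 1 hour:  $250 flat vs $200 in expectation
print(500, 0.10 * 4_000)   # 2 hours: $500 flat vs $400 in expectation
```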
### Description of client
My ideal client would be a person or organization that is producing value in the world and wants me to identify, in an open-ended way, how they could do even better. Because this context would be highly collaborative, they would have a high tolerance for disagreeableness. A close second would be someone making an important decision who commissions me to estimate the value of the different options they are considering.