diff --git a/ESPR-Evaluation/1-Current-Evidence.md b/ESPR-Evaluation/1-Current-Evidence.md
index fcca64b..92df4fa 100644
--- a/ESPR-Evaluation/1-Current-Evidence.md
+++ b/ESPR-Evaluation/1-Current-Evidence.md
@@ -77,7 +77,7 @@ Additionally, ESPR gives some of its alumni the opportunity to come back as Jun
 As with CFAR's, I think that the alumni profiles in the following section provide useful intuitions. However, while perhaps narratively compelling, there is no control group, which is supremely shitty. **These profiles may not allow us to falsify any hypothesis**, i.e., to meaningfully change our priors, because these students come from a pool of incredibly bright applicants. The evidence is weak in the sense that, given only what we have now, I would feel uncomfortable saying that ESPR should be scaled up.
 
-To the extent that Open Philanthropy prefers these and other weak forms of evidence *now*, rather than stronger evidence two to three years later, Open Philanthropy might be giving ESPR perverse incentives. Note that with 20-30 students per year, even after we start an RCT, a number of years must pass before we can amass meaningful statistical power (see the relevant section). On the other hand, taking a process of iterated improvement as an admission of failure would also be pretty shitty.
+To the extent that Open Philanthropy prefers these and other weak forms of evidence *now*, rather than stronger evidence two to three years later, Open Philanthropy might be giving ESPR perverse incentives. Note that with 20-30 students per year, even after we start an RCT, a number of years must pass before we can amass meaningful statistical power (see the power calculations). On the other hand, taking a process of iterated improvement as an admission of failure would also be pretty shitty.
 
 The questions that designing an RCT poses are hard, but the bigger problem is that there's an incentive not to ask them at all. But that would be against CFAR's ethos, as outlined in the introduction.
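
Since the changed paragraph leans on an unstated power calculation ("see the power calculations"), here is a minimal sketch of what such a calculation might look like. Everything in it is an assumption for illustration rather than anything from the document: the outcome is compared with a two-sample t-test, the true standardized effect size is d = 0.5, alpha = 0.05, target power is 0.8, and the RCT manages to randomize roughly one cohort's worth of applicants (20-30) into *each* arm per year; `statsmodels` is used only as a convenient power calculator.

```python
# Illustrative power calculation (assumptions mine, not the author's):
# two-sample t-test, effect size d = 0.5, alpha = 0.05, power = 0.8,
# and 20-30 participants entering each arm of the RCT per year.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size needed per arm for 80% power at d = 0.5 (two-sided test).
n_per_arm = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Needed per arm: {n_per_arm:.0f}")  # ~64 per arm

# With 20-30 students entering each arm per year, count the years required.
for per_year in (20, 30):
    print(f"At {per_year}/arm/year: ~{n_per_arm / per_year:.1f} years")
```

Under these (generous) assumptions the RCT needs roughly two to three years of cohorts per arm, which is consistent with the paragraph's "a number of years"; a smaller true effect pushes that horizon out quickly, since the required sample size grows roughly as 1/d².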