Frank Feedback Given To Very Junior Researchers
==============
Over the last year, I have found myself giving feedback on various drafts, something that I'm generally quite happy to do. Recently, I got to give two variations of this same feedback in quick succession, so I noticed the commonalities, and then realized that these commonalities were also present in past pieces of feedback. I thought I'd write up the general template, in case others might find it valuable.
## High level comments
* You are working at the wrong level of abstraction and depth / you are biting off more than you can chew / being too ambitious.
* In particular, the questions that you analyze are likely to have many cruxes, i.e., factors that might change the conclusion completely. But you only identify a few such cruxes, and thus your analysis doesn't seem likely to be that robust.
    * I guess that the opposite error is possible: focusing too much on one specific scenario which isn't that likely to happen. I just haven't seen it as much, and it doesn't seem as crippling when it happens.
* Because you're being too ambitious, you don't have the tools necessary to analyze what you want to analyze, and to some extent those tools may not exist.
    * Compare with: [Forecasting transformative AI timelines using biological anchors](https://www.lesswrong.com/posts/cxQtz3RP4qsqTkEwL/an-121-forecasting-transformative-ai-timelines-using), [Report on Semi-informative Priors on Transformative AI](https://www.openphilanthropy.org/blog/report-semi-informative-priors) or [Invertebrate Sentience: Summary of findings](https://forum.effectivealtruism.org/posts/JqnJMeyNq7KwMWKeS/invertebrate-sentience-summary-of-findings-part-1), which are much more constrained and each have a specific technical or semi-technical intellectual tool suited to the job (comparison with biological systems, variations on Laplace's law and other priors, markers of consciousness like reaction to harmful stimuli). You don't have an equivalent technical tool.
* There is a missing link between the individual facts you outline, and the conclusions you reach (e.g., about \[redacted\] and \[redacted\]). I think that the correct thing to do here is to sit with the uncertainty, or to consider a range of scenarios, rather than to reach one specific conclusion. Alternatively, you could highlight that different dynamics could still be possible, but that on the balance of probabilities, you personally think that your favored hypothesis is more likely.
    * But in that case, it'd be great if you more clearly defined your concepts and then expressed your certainty in terms of probabilities, because those are easier to criticize or put to the test, and they make it easier to even notice that there is a disagreement to be had.
## Judgment calls
* I get the impression that you rely too much on secondary sources, rather than on deeply understanding what you're talking about.
* You are making the wrong tradeoff between formality and ¿clarity of thought?
* Your report was difficult to read because of the trappings of scholarship (formal tone, long sentences and paragraphs, etc.). An index would have helped.
* Your classification scheme is not exhaustive, and thus less useful.
    * This seems particularly important when considering intelligent adversaries.
* I get the impression that you are not deeply familiar with the topic you are talking about. For example, when giving your overview, you don't consider \[redacted\], which is really _the_ company working on this space.
    * In particular, I expect that the funders or decision-makers (for instance, Open Philanthropy) whom you might be attempting to influence or inform will be more familiar with the topic than you, and would thus not outsource their intellectual labor to your report.
* I don't really know whether you are characterizing the literature faithfully, whether you're just citing the top few most salient experts that you found, or whether there are other factors at play. For instance, maybe the people who \[redacted\] don't want to be talking about it. Even if you are representing the academic consensus fairly, I don't know how much to trust it. Like, I get that it's an academic field, but I don't particularly expect it to have good models of the world.
* It's unclear who the "we" is that will implement the measures you propose in the text.
    * This might be an acceptable simplification to make at the beginning, but in the end I don't think it's that useful to talk about what "humanity as a whole" should do without a specific plan of action.
* Because of the above, your conclusions seem fragile / untrustworthy.
## Suggestions if you want to produce something which is directly useful
* Bite off a smaller chunk. E.g., what does the literature say on A? How could B look in 5 years? What are the current capabilities of C? What is the theoretical maximum of D? How does tool F shed light on E?
* Alternatively, do become deeply familiar with a topic you're interested in over a longer period of time, then write the more ambitious type of analysis.
* Try to get an exhaustive classification scheme, or clearly point out which assumptions you are making or not making.
    * This point feels particularly important when considering adversarial agents, because vectors of attack are fungible/interchangeable.
    * One method for finding exhaustive classifications is logical negation. E.g., state/non-state actors, human/non-human forecasting systems, transparent/opaque systems, systems which take/do not take decisions, etc.
        * One can then consider different ends of the spectrum, e.g. more or less opaque systems, bigger or smaller non-state actors, etc.
* Outline what your method was for generating your classification scheme.
    * If there is no method, point this out.
* If you are making an arbitrary decision (e.g., to only focus on United Nations' organizations rather than on all organizations), point this out. Constraining the scope of your research seems fine, but I find it very annoying when this isn't presented loud and clear.
    * For example, "We are only looking at human forecasting systems because those are the ones we are most familiar with. However, note that machine learning systems or data analysis pipelines are usually more powerful methods if one has enough data."
* Go through past [Open Philanthropy](https://www.openphilanthropy.org/search/ss360/Report%20on) and [Rethink Priorities](https://forum.effectivealtruism.org/tag/rethink-priorities) reports to get a sense of what ambitious reports that are comprehensive enough to influence decisions look like.
* Go through the following examples to get a sense of projects which are constrained and were done by relatively non-senior researchers, yet still seem useful: [Parameter counts in Machine Learning](https://www.alignmentforum.org/posts/GzoWcYibWYwJva8aL/parameter-counts-in-machine-learning), [Database of existential risk estimates](https://forum.effectivealtruism.org/posts/JQQAQrunyGGhzE23a/database-of-existential-risk-estimates), [Base Rates on United States Regime Collapse](https://forum.effectivealtruism.org/posts/ptDZj4TRrtoxzoptu/?commentId=242vuFZzE7foWdZRD), [Analgesics for farm animals](https://forum.effectivealtruism.org/posts/LLLFTXiHMEd4MctQ5/analgesics-for-farm-animals).
* Signpost how much time you've spent, and how confident you are in your conclusion.
* Depending on the situation: Hire an editor, or at least go through [hemingwayapp](https://hemingwayapp.com/) to make your writing clearer (h/t Marta Krzeminska).
But note that maybe producing something directly useful isn't what you want to be doing, and gaining expertise about some topic of interest might be a fine thing to do instead. In that case, maybe just gain expertise about X directly and then write up some thoughts on it, rather than attempting to produce an exhaustive report on X from the get-go.
## Object level comments
In this section, I usually go section by section pointing out impressions, things one could add, objections, etc.
## Models about the value of your project
Ultimately, I expect that the chance that your report influences e.g., funding decisions this time is pretty low, and that most of its value would come from the things you've learnt allowing you to choose a more constrained project next time, or improving your models of the world more generally.
Edited to add: I think that this is essentially a common state of affairs, and that affecting the world through research requires hitting a fairly narrow target. Ideally you could rely on mentors and on feedback from the EA community to aim you in the right direction. But in practice, you do end up needing to figure out a lot of the specifics yourself. I hope that the above points were helpful for that, and good luck.
## Notes on that feedback
So, I realize that the above feedback might come across as discouraging, but at the point where someone has, e.g., written quite a lengthy piece which probably won't affect any decisions, I do feel bound to give them an honest assessment of why I think that is when they ask me for feedback. That said, I am aware that I could probably word things more tactfully.
However, I'm not too worried, because feedback recipients generally signaled that they found the feedback valuable, or even "among the most useful \[they\] gathered." And in general, the EA community does go to great lengths to be welcoming to new members, so some contrast occasionally doesn't feel like a terrible idea.
I'd be curious to get push-back on any of the points, or to get other people's version of this post.