# Artificial Intelligence texts I've read
In no particular order. Remember that you probably have more extensive notes somewhere.

Most of these come from https://aisafety.com/reading-group/ or https://agentfoundations.org/

*Cognitive Biases Potentially Affecting Judgment of Global Risks*, Eliezer Yudkowsky

*An AI Race for Strategic Advantage: Rhetoric and Risks*, Seán S. Ó hÉigeartaigh

*Artificial Intelligence and Its Implications for Future Suffering*, Brian Tomasik

Tomasik is a negative utilitarian.

Is MIRI's work too theoretical? In maths you can take the supremum of an uncountably infinite set, which you can't do in practice.

If you have an uncountably infinite number of mutually exclusive events, at most countably many of them can have nonzero probability.
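
A quick sketch of why, for disjoint events (my own note, the standard measure-theory argument, not from the text):

```latex
% For pairwise disjoint events E_i under a probability measure P,
% at most n of them can have P(E_i) > 1/n, or their union would have probability > 1.
\[
  \{\, i : P(E_i) > 0 \,\} \;=\; \bigcup_{n \ge 1} \{\, i : P(E_i) > \tfrac{1}{n} \,\},
  \qquad
  \bigl|\{\, i : P(E_i) > \tfrac{1}{n} \,\}\bigr| \le n ,
\]
% so the set of events with nonzero probability is a countable union of
% finite sets, hence countable.
```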
*Impossibility of deducing preferences and rationality from human policy*, Stuart Armstrong

Too theoretical. In particular, why should I care about maximum regret?

*Refuting Bostrom's Superintelligence Argument*, Sebastian Benthall

Improving a Bayesian prediction function may have too high a recalcitrance.

N: I don't really agree. Being able to discriminate between ±0.1 dB of evidence is probably already a superpower.
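
For scale (my own arithmetic with the usual 10 * log10(likelihood ratio) decibel-of-evidence convention, not a figure from the paper), ±0.1 dB means telling apart likelihood ratios that differ by only about 2%:

```latex
% Decibels of evidence: dB = 10 * log10(likelihood ratio), so a 0.1 dB shift is
\[
  \mathrm{LR} \;=\; 10^{\,0.1/10} \;=\; 10^{\,0.01} \;\approx\; 1.023 ,
\]
% i.e. distinguishing hypotheses whose odds differ by roughly 2.3%.
```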
*There is no fire alarm for AI*, Yudkowsky

*Superintelligence*, Nick Bostrom

*Intelligence Explosion FAQ*, Luke Muehlhauser

*Intelligence Explosion Microeconomics*, Yudkowsky

*Strategic Implications of Openness in AI Development*, Nick Bostrom

*That Alien Message*, Eliezer Yudkowsky

*The Ethics of Artificial Intelligence*, Bostrom and Yudkowsky

*Problem Class Dominance in Predictive Dilemmas*, Daniel Hintze

*Timeless Decision Theory*, Yudkowsky