# Artificial Intelligence texts I've read
In no particular order. Remember that you probably have more extensive notes somewhere.
Most of these were taken from https://aisafety.com/reading-group/ or https://agentfoundations.org/
*Cognitive Biases Potentially Affecting Judgment of Global Risks*, Eliezer Yudkowsky
*An AI Race for Strategic Advantage: Rhetoric and Risks*, Seán S. Ó hÉigeartaigh
*Artificial Intelligence and Its Implications for Future Suffering*, Brian Tomasik
This guy is a negative utilitarian.
Is MIRI's work too theoretical? In maths, you can take the supremum of an uncountably infinite set, which you can't do in practice.
If you have an uncountably infinite family of mutually exclusive events, at most countably many of them can have nonzero probability.
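
A quick sketch of why (the standard measure-theory argument; the notation is mine, not from the text):

```latex
% Let (A_i)_{i \in I} be mutually exclusive events and
% S = { i \in I : P(A_i) > 0 } the set of indices with nonzero probability.
% Stratify S by how large the probabilities are:
\[
S = \bigcup_{n=1}^{\infty} S_n, \qquad S_n = \{\, i \in S : P(A_i) > 1/n \,\}.
\]
% Each S_n has at most n elements: n+1 mutually exclusive events,
% each with probability above 1/n, would give
\[
P\Big(\bigcup_{k=1}^{n+1} A_{i_k}\Big) = \sum_{k=1}^{n+1} P(A_{i_k}) > \frac{n+1}{n} > 1,
\]
% which is impossible. A countable union of finite sets is countable, so S is countable.
```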
*Impossibility of deducing preferences and rationality from human policy*, Stuart Armstrong
Too theoretical. In particular, why should I care about maximum regret?
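
For reference, the criterion that note is complaining about (my notation; a generic statement of minimax regret, not necessarily Armstrong's exact formulation):

```latex
% Regret of action a when the true state is \theta: the shortfall
% against the best action you could have taken in hindsight.
\[
\operatorname{regret}(a, \theta) = \max_{a'} u(a', \theta) - u(a, \theta),
\]
% The minimax-regret rule picks the action whose worst-case regret is smallest:
\[
a^{*} = \arg\min_{a} \, \max_{\theta} \, \operatorname{regret}(a, \theta).
\]
```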
*Refuting Bostrom's Superintelligence Argument*, Sebastian Benthall
Improving a Bayesian prediction function may have too high a recalcitrance.
|N: I don't really agree. Being able to discriminate between ±0.1 db of evidence is probably already a superpower.
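
To make that concrete, reading db as decibels of evidence in Yudkowsky's sense (ten times the base-10 log of a likelihood ratio; the numbers below are mine):

```latex
\[
0.1 \,\text{db} = 10 \log_{10} \frac{P(E \mid H)}{P(E \mid \lnot H)}
\iff \frac{P(E \mid H)}{P(E \mid \lnot H)} = 10^{0.01} \approx 1.023,
\]
% i.e. distinguishing likelihood ratios that differ by about 2%,
% far finer than unaided human calibration.
```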
*There's No Fire Alarm for Artificial General Intelligence*, Yudkowsky
*Superintelligence*, Nick Bostrom
*Intelligence Explosion FAQ*, Luke Muehlhauser
*Intelligence Explosion Microeconomics*, Yudkowsky
*Strategic Implications of Openness in AI development*, Nick Bostrom
*That Alien Message*, Eliezer Yudkowsky
*The Ethics of Artificial Intelligence*, Bostrom and Yudkowsky
*Problem Class Dominance in Predictive Dilemmas*, Daniel Hintze
*Timeless Decision Theory*, Yudkowsky