Deleted Christiano estimate as it wasn't a probability

NunoSempere 2021-03-16 12:40:54 +01:00
parent a1e4275272
commit 176ed1b65e
4 changed files with 0 additions and 54 deletions
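The removed record encodes Christiano's figure as a nominal Yes/No probability pair (0.01/0.99), even though the "actual estimate" (~10%) describes a reduction in the expected value of the future rather than a probability, which is the stated reason for the deletion. The commit removes the record by hand from each export; below is a minimal sketch of how the same cleanup could be scripted against the JSON export. The file name xrisk-estimates.json and the filter-by-title approach are assumptions for illustration, not the repository's actual tooling.

import json

# Hypothetical path; the diff below does not show the real file names.
JSON_PATH = "xrisk-estimates.json"

# Titles flagged by hand as encoding something other than a probability.
NON_PROBABILITY_TITLES = {
    "Amount by which risk of failure to align AI (using only a narrow "
    "conception of alignment) reduces the expected value of the future",
}

with open(JSON_PATH) as f:
    estimates = json.load(f)

# Keep every record whose title is not on the flagged list.
kept = [e for e in estimates if e.get("title") not in NON_PROBABILITY_TITLES]
print(f"dropping {len(estimates) - len(kept)} of {len(estimates)} records")

with open(JSON_PATH, "w") as f:
    json.dump(kept, f, indent=2, ensure_ascii=False)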

View File

@@ -61365,27 +61365,6 @@
"stars": 2, "stars": 2,
"optionsstringforsearch": "Yes, No" "optionsstringforsearch": "Yes, No"
}, },
{
"title": "Amount by which risk of failure to align AI (using only a narrow conception of alignment) reduces the expected value of the future",
"url": "https://aiimpacts.org/conversation-with-paul-christiano/",
"platform": "X-risk estimates",
"author": "Paul Christiano (~2019)",
"description": "Actual estimate: ~10%\n\nHe also says \"I made up 10%, its kind of a random number.\" And \"All of the numbers Im going to give are very made up though. If you asked me a second time youll get all different numbers.",
"options": [
{
"name": "Yes",
"probability": 0.01,
"type": "PROBABILITY"
},
{
"name": "No",
"probability": 0.99,
"type": "PROBABILITY"
}
],
"stars": 2,
"optionsstringforsearch": "Yes, No"
},
{
"title": "Existential catastrophe happening this century (maybe just from AI?)",
"url": "https://youtu.be/aFAI8itZCGk?t=854",

View File

@@ -223,16 +223,6 @@
"category": "AI", "category": "AI",
"description": "Stated verbally during an interview. Not totally clear precisely what was being estimated (e.g. just extinction, or existential catastrophe more broadly?). He noted \"This number fluctuates a lot\". He indicated he thought we had a 2/3 chance of surviving, then said he'd adjust to 50%, which is his number for an \"actually superintelligent\" AI, whereas for \"AI in general\" it'd be 60%. This is notably higher than his 2020 estimate, implying either that he updated towards somewhat more \"optimism\" between 2014 and 2020, or that one or both of these estimates don't reflect stable beliefs." "description": "Stated verbally during an interview. Not totally clear precisely what was being estimated (e.g. just extinction, or existential catastrophe more broadly?). He noted \"This number fluctuates a lot\". He indicated he thought we had a 2/3 chance of surviving, then said he'd adjust to 50%, which is his number for an \"actually superintelligent\" AI, whereas for \"AI in general\" it'd be 60%. This is notably higher than his 2020 estimate, implying either that he updated towards somewhat more \"optimism\" between 2014 and 2020, or that one or both of these estimates don't reflect stable beliefs."
}, },
{
"title": "Amount by which risk of failure to align AI (using only a narrow conception of alignment) reduces the expected value of the future",
"url": "https://aiimpacts.org/conversation-with-paul-christiano/",
"probability": 0.01,
"actualEstimate": "~10%",
"platform": "Paul Christiano",
"date_approx": 2019,
"category": "AI",
"description": "He also says \"I made up 10%, its kind of a random number.\" And \"All of the numbers Im going to give are very made up though. If you asked me a second time youll get all different numbers."
},
{
"title": "Existential catastrophe happening this century (maybe just from AI?)",
"url": "https://youtu.be/aFAI8itZCGk?t=854",

View File

@@ -66,9 +66,6 @@ I put the probability that [AI/AGI] is an existential risk roughly in the 30% to
"Chance of humanity not surviving AI","https://www.youtube.com/watch?v=i4LjoJGpqIY& (from 39:40)","X-risk estimates","Actual estimate: 50, 40, or 33% "Chance of humanity not surviving AI","https://www.youtube.com/watch?v=i4LjoJGpqIY& (from 39:40)","X-risk estimates","Actual estimate: 50, 40, or 33%
Stated verbally during an interview. Not totally clear precisely what was being estimated (e.g. just extinction, or existential catastrophe more broadly?). He noted ""This number fluctuates a lot"". He indicated he thought we had a 2/3 chance of surviving, then said he'd adjust to 50%, which is his number for an ""actually superintelligent"" AI, whereas for ""AI in general"" it'd be 60%. This is notably higher than his 2020 estimate, implying either that he updated towards somewhat more ""optimism"" between 2014 and 2020, or that one or both of these estimates don't reflect stable beliefs.","[{""name"":""Yes"",""probability"":0.4,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.6,""type"":""PROBABILITY""}]",,,2 Stated verbally during an interview. Not totally clear precisely what was being estimated (e.g. just extinction, or existential catastrophe more broadly?). He noted ""This number fluctuates a lot"". He indicated he thought we had a 2/3 chance of surviving, then said he'd adjust to 50%, which is his number for an ""actually superintelligent"" AI, whereas for ""AI in general"" it'd be 60%. This is notably higher than his 2020 estimate, implying either that he updated towards somewhat more ""optimism"" between 2014 and 2020, or that one or both of these estimates don't reflect stable beliefs.","[{""name"":""Yes"",""probability"":0.4,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.6,""type"":""PROBABILITY""}]",,,2
"Amount by which risk of failure to align AI (using only a narrow conception of alignment) reduces the expected value of the future","https://aiimpacts.org/conversation-with-paul-christiano/","X-risk estimates","Actual estimate: ~10%
He also says ""I made up 10%, its kind of a random number."" And ""All of the numbers Im going to give are very made up though. If you asked me a second time youll get all different numbers.","[{""name"":""Yes"",""probability"":0.01,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.99,""type"":""PROBABILITY""}]",,,2
"Existential catastrophe happening this century (maybe just from AI?)","https://youtu.be/aFAI8itZCGk?t=854","X-risk estimates","Actual estimate: 33-50% "Existential catastrophe happening this century (maybe just from AI?)","https://youtu.be/aFAI8itZCGk?t=854","X-risk estimates","Actual estimate: 33-50%
This comes from a verbal interview (from the 14:14 mark). The interview was focused on AI, and this estimate may have been as well. Tallinn said he's not very confident, but is fairly confident his estimate would be in double-digits, and then said ""two obvious Schelling points"" are 33% or 50%, so he'd guess somewhere in between those. Other comments during the interview seem to imply Tallinn is either just talking about extinction risk or thinks existential risk happens to be dominated by extinction risk.","[{""name"":""Yes"",""probability"":0.415,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.585,""type"":""PROBABILITY""}]",,,2 This comes from a verbal interview (from the 14:14 mark). The interview was focused on AI, and this estimate may have been as well. Tallinn said he's not very confident, but is fairly confident his estimate would be in double-digits, and then said ""two obvious Schelling points"" are 33% or 50%, so he'd guess somewhere in between those. Other comments during the interview seem to imply Tallinn is either just talking about extinction risk or thinks existential risk happens to be dominated by extinction risk.","[{""name"":""Yes"",""probability"":0.415,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.585,""type"":""PROBABILITY""}]",,,2

[Table view of the same CSV diff, columns: title, url, platform, description, options, numforecasts, numforecasters, stars. Rows 66-71 (nearby AGI, CEV, and pandemic estimates) are shown with old and new cell values merged in this capture.]

View File

@@ -459,26 +459,6 @@
],
"stars": 2
},
{
"title": "Amount by which risk of failure to align AI (using only a narrow conception of alignment) reduces the expected value of the future",
"url": "https://aiimpacts.org/conversation-with-paul-christiano/",
"platform": "X-risk estimates",
"author": "Paul Christiano (~2019)",
"description": "Actual estimate: ~10%\n\nHe also says \"I made up 10%, its kind of a random number.\" And \"All of the numbers Im going to give are very made up though. If you asked me a second time youll get all different numbers.",
"options": [
{
"name": "Yes",
"probability": 0.01,
"type": "PROBABILITY"
},
{
"name": "No",
"probability": 0.99,
"type": "PROBABILITY"
}
],
"stars": 2
},
{
"title": "Existential catastrophe happening this century (maybe just from AI?)",
"url": "https://youtu.be/aFAI8itZCGk?t=854",