Fixed typos

parent 8120cceaf7
commit a1e4275272
@@ -61012,8 +61012,8 @@
 "title": "There be zero living humans on planet earth on January 1, 2100",
 "url": "https://www.metaculus.com/questions/578/human-extinction-by-2100/",
 "platform": "X-risk estimates",
-"author": "Metaculus responders (~)",
-"description": "Actual estimate: Median: 1%. Mean: 8%.\n\nThat median and mean is as of 3rd July 2019.",
+"author": "Metaculus responders (~2021)",
+"description": "Actual estimate: Median: 1%. Mean: 7%.\n\nWhile the general feeling of most people, especially now that the cold war is (mostly) over, is that the risk of human extinction is extremely small, experts have assigned a significantly higher probability to the event.\n\nIn 2008 an informal poll at the Global Catastrophic Risk Conference at the University of Oxford yielded a median probability of human extinction by 2100 of 19%. Yet, one might want to be cautious when using this result as a good estimate of the true probability of human extinction, as there may be a powerful selection effect at play. Only those who assign a high probability to human extinction are likely to attend the Global Catastrophic Risk Conference in the first place, meaning that the survey was effectively sampling opinions from one extreme tail of the opinion distribution on the subject. Indeed, the conference report itself stated that the findings should be taken 'with a grain of salt'..\n\nTherefore, it is asked: will there be zero living humans on planet earth on January 1, 2100?.\n\nFor these purposes we'll define humans as biological creatures who have as their ancestors – via a chain of live births from mothers – circa 2000 humans OR who could mate with circa 2000 humans to produce viable offspring. (So AIs, ems, genetically engineered beings of a different species brought up in artificial wombs, etc. would not count.).\n\nN.B. Even though it is obviously the case that if human extinction occurs Metaculus points won't be very valuable anymore and that it will be practically impossible to check for true human extinction (zero humans left), I would like to ask people not to let this fact influence their prediction and to predict in good faith.",
 "options": [
 {
 "name": "Yes",
@@ -61198,7 +61198,7 @@
 "optionsstringforsearch": "Yes, No"
 },
 {
-"title": "Extremely bad (e.g. extinction)” long-run impact on humanity from “high-level machine intelligence",
+"title": "Extremely bad (e.g. extinction) long-run impact on humanity from “high-level machine intelligence",
 "url": "https://arxiv.org/abs/1705.08807",
 "platform": "X-risk estimates",
 "author": "Survey of AI experts (~2017)",
@@ -61395,12 +61395,12 @@
 "options": [
 {
 "name": "Yes",
-"probability": 41.5,
+"probability": 0.415,
 "type": "PROBABILITY"
 },
 {
 "name": "No",
-"probability": -40.5,
+"probability": 0.585,
 "type": "PROBABILITY"
 }
 ],
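The hunk above normalises option probabilities that had been stored as a percentage and its signed remainder (41.5 / -40.5) into complementary decimals in [0, 1] (0.415 / 0.585). A minimal validation sketch in Python (a hypothetical helper, not part of this repository) that would flag entries of this kind:

# Hypothetical check for the "options" entries shown in the hunk above.
def check_options(options, tol=1e-6):
    problems = []
    for opt in options:
        p = opt.get("probability")
        # Each probability should be a decimal in [0, 1], not a percentage.
        if not isinstance(p, (int, float)) or not (0 <= p <= 1):
            problems.append(f"{opt.get('name')}: probability {p} is outside [0, 1]")
    # Yes/No options should be complementary (sum to ~1).
    total = sum(o.get("probability", 0) for o in options)
    if abs(total - 1) > tol:
        problems.append(f"probabilities sum to {total}, expected 1")
    return problems

# The uncorrected values from this commit: their sum happens to equal 1,
# so only the range check catches them.
old = [{"name": "Yes", "probability": 41.5, "type": "PROBABILITY"},
       {"name": "No", "probability": -40.5, "type": "PROBABILITY"}]
new = [{"name": "Yes", "probability": 0.415, "type": "PROBABILITY"},
       {"name": "No", "probability": 0.585, "type": "PROBABILITY"}]
print(check_options(old))  # two range violations
print(check_options(new))  # []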
@@ -61685,7 +61685,7 @@
 "url": "https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/#estimates-for-specific-x-risks-000810",
 "platform": "X-risk estimates",
 "author": "Toby Ord (~2020)",
-"description": "Actual estimate: ~33% (\"about one in three\")\n\nOrd: \"\"one in six is my best guess as to the chance [an existential catastrophe] happens [by 2120]. That’s not a business as usual estimate. Whereas I think often people are assuming that estimates like this are, if we just carry on as we are, what’s the chance that something will happen?\n\nMy best guess for that is actually about one in three this century. If we carry on mostly ignoring these risks with humanity’s escalating power during the century and some of these threats being very serious. But I think that there’s a good chance that we will rise to these challenges and do something about them. So you could think of my overall estimate as being something like Russian roulette, but my initial business as usual estimate being there’s something like two bullets in the chamber of the gun, but then we’ll probably remove one and that if we really got our act together, we could basically remove both of them. And so, in some sense, maybe the headline figure should be one in three being the difference between the business as usual risk and how much of that we could eliminate if we really got our act together.\"\"\n\nArden Koehler replies \"\"Okay. So business as usual means doing what we are approximately doing now extrapolated into the future but we don’t put much more effort into it as opposed to doing nothing at all?\"\"\n\nOrd replies: \"\"That’s right, and it turns out to be quite hard to define business as usual. That’s the reason why, for my key estimate, that I make it… In some sense, it’s difficult to define estimates where they take into account whether or not people follow the advice that you’re giving; that introduces its own challenges. But at least that’s just what a probability normally means. It means that your best guess of the chance something happens, whereas a best guess that something happens conditional upon certain trends either staying at the same level or continuing on the same trajectory or something is just quite a bit more unclear as to what you’re even talking about.\"\"",
+"description": "Actual estimate: ~33% (\"about one in three\")\n\nOrd: \"one in six is my best guess as to the chance [an existential catastrophe] happens [by 2120]. That’s not a business as usual estimate. Whereas I think often people are assuming that estimates like this are, if we just carry on as we are, what’s the chance that something will happen?\n\nMy best guess for that is actually about one in three this century. If we carry on mostly ignoring these risks with humanity’s escalating power during the century and some of these threats being very serious. But I think that there’s a good chance that we will rise to these challenges and do something about them. So you could think of my overall estimate as being something like Russian roulette, but my initial business as usual estimate being there’s something like two bullets in the chamber of the gun, but then we’ll probably remove one and that if we really got our act together, we could basically remove both of them. And so, in some sense, maybe the headline figure should be one in three being the difference between the business as usual risk and how much of that we could eliminate if we really got our act together.\"\"\n\nArden Koehler replies \"\"Okay. So business as usual means doing what we are approximately doing now extrapolated into the future but we don’t put much more effort into it as opposed to doing nothing at all?\"\"\n\nOrd replies: \"\"That’s right, and it turns out to be quite hard to define business as usual. That’s the reason why, for my key estimate, that I make it… In some sense, it’s difficult to define estimates where they take into account whether or not people follow the advice that you’re giving; that introduces its own challenges. But at least that’s just what a probability normally means. It means that your best guess of the chance something happens, whereas a best guess that something happens conditional upon certain trends either staying at the same level or continuing on the same trajectory or something is just quite a bit more unclear as to what you’re even talking about.\"\"",
 "options": [
 {
 "name": "Yes",
@@ -61702,7 +61702,7 @@
 "optionsstringforsearch": "Yes, No"
 },
 {
-"title": "The probability that the long-run overall impact on humanity of human level machine intelligence will be Extremely bad (existential catastrophe)”, assuming Human Level Machine Intelligence will at some point exist.",
+"title": "The probability that the long-run overall impact on humanity of human level machine intelligence will be Extremely bad (existential catastrophe), assuming Human Level Machine Intelligence will at some point exist.",
 "url": "https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit#gid=511918904",
 "platform": "X-risk estimates",
 "author": "Survey of experts in the AI field (~2016)",

@@ -61,11 +61,11 @@
 "title": "There be zero living humans on planet earth on January 1, 2100",
 "url": "https://www.metaculus.com/questions/578/human-extinction-by-2100/",
 "probability": 0.08,
-"actualEstimate": "Median: 1%. Mean: 8%.",
+"actualEstimate": "Median: 1%. Mean: 7%.",
 "platform": "Metaculus responders",
-"date_approx": "",
+"date_approx": "2021",
 "category": "Total risk",
-"description": "That median and mean is as of 3rd July 2019."
+"description": "While the general feeling of most people, especially now that the cold war is (mostly) over, is that the risk of human extinction is extremely small, experts have assigned a significantly higher probability to the event.\n\nIn 2008 an informal poll at the Global Catastrophic Risk Conference at the University of Oxford yielded a median probability of human extinction by 2100 of 19%. Yet, one might want to be cautious when using this result as a good estimate of the true probability of human extinction, as there may be a powerful selection effect at play. Only those who assign a high probability to human extinction are likely to attend the Global Catastrophic Risk Conference in the first place, meaning that the survey was effectively sampling opinions from one extreme tail of the opinion distribution on the subject. Indeed, the conference report itself stated that the findings should be taken 'with a grain of salt'..\n\nTherefore, it is asked: will there be zero living humans on planet earth on January 1, 2100?.\n\nFor these purposes we'll define humans as biological creatures who have as their ancestors – via a chain of live births from mothers – circa 2000 humans OR who could mate with circa 2000 humans to produce viable offspring. (So AIs, ems, genetically engineered beings of a different species brought up in artificial wombs, etc. would not count.).\n\nN.B. Even though it is obviously the case that if human extinction occurs Metaculus points won't be very valuable anymore and that it will be practically impossible to check for true human extinction (zero humans left), I would like to ask people not to let this fact influence their prediction and to predict in good faith."
 },
 {
 "title": "Existential disaster will do us in",
@@ -146,7 +146,7 @@
 "description": "This is the median. Beard et al.'s appendix says \"Note that for these predictions no time frame was given.\" I think that that's incorrect, based on phrasings in the original source, but I'm not certain."
 },
 {
-"title": "Extremely bad (e.g. extinction)” long-run impact on humanity from “high-level machine intelligence",
+"title": "Extremely bad (e.g. extinction) long-run impact on humanity from “high-level machine intelligence",
 "url": "https://arxiv.org/abs/1705.08807",
 "probability": 0.05,
 "platform": "Survey of AI experts",
@@ -236,7 +236,7 @@
 {
 "title": "Existential catastrophe happening this century (maybe just from AI?)",
 "url": "https://youtu.be/aFAI8itZCGk?t=854",
-"probability": 41.5,
+"probability": 0.415,
 "actualEstimate": "33-50%",
 "platform": "Jaan Tallinn",
 "date_approx": 2020,
@@ -379,10 +379,10 @@
 "platform": "Toby Ord",
 "date_approx": 2020,
 "category": "Total risk/conditional",
-"description": "Ord: \"\"one in six is my best guess as to the chance [an existential catastrophe] happens [by 2120]. That’s not a business as usual estimate. Whereas I think often people are assuming that estimates like this are, if we just carry on as we are, what’s the chance that something will happen?\n\nMy best guess for that is actually about one in three this century. If we carry on mostly ignoring these risks with humanity’s escalating power during the century and some of these threats being very serious. But I think that there’s a good chance that we will rise to these challenges and do something about them. So you could think of my overall estimate as being something like Russian roulette, but my initial business as usual estimate being there’s something like two bullets in the chamber of the gun, but then we’ll probably remove one and that if we really got our act together, we could basically remove both of them. And so, in some sense, maybe the headline figure should be one in three being the difference between the business as usual risk and how much of that we could eliminate if we really got our act together.\"\"\n\nArden Koehler replies \"\"Okay. So business as usual means doing what we are approximately doing now extrapolated into the future but we don’t put much more effort into it as opposed to doing nothing at all?\"\"\n\nOrd replies: \"\"That’s right, and it turns out to be quite hard to define business as usual. That’s the reason why, for my key estimate, that I make it… In some sense, it’s difficult to define estimates where they take into account whether or not people follow the advice that you’re giving; that introduces its own challenges. But at least that’s just what a probability normally means. It means that your best guess of the chance something happens, whereas a best guess that something happens conditional upon certain trends either staying at the same level or continuing on the same trajectory or something is just quite a bit more unclear as to what you’re even talking about.\"\""
+"description": "Ord: \"one in six is my best guess as to the chance [an existential catastrophe] happens [by 2120]. That’s not a business as usual estimate. Whereas I think often people are assuming that estimates like this are, if we just carry on as we are, what’s the chance that something will happen?\n\nMy best guess for that is actually about one in three this century. If we carry on mostly ignoring these risks with humanity’s escalating power during the century and some of these threats being very serious. But I think that there’s a good chance that we will rise to these challenges and do something about them. So you could think of my overall estimate as being something like Russian roulette, but my initial business as usual estimate being there’s something like two bullets in the chamber of the gun, but then we’ll probably remove one and that if we really got our act together, we could basically remove both of them. And so, in some sense, maybe the headline figure should be one in three being the difference between the business as usual risk and how much of that we could eliminate if we really got our act together.\"\"\n\nArden Koehler replies \"\"Okay. So business as usual means doing what we are approximately doing now extrapolated into the future but we don’t put much more effort into it as opposed to doing nothing at all?\"\"\n\nOrd replies: \"\"That’s right, and it turns out to be quite hard to define business as usual. That’s the reason why, for my key estimate, that I make it… In some sense, it’s difficult to define estimates where they take into account whether or not people follow the advice that you’re giving; that introduces its own challenges. But at least that’s just what a probability normally means. It means that your best guess of the chance something happens, whereas a best guess that something happens conditional upon certain trends either staying at the same level or continuing on the same trajectory or something is just quite a bit more unclear as to what you’re even talking about.\"\""
 },
 {
-"title": "The probability that the long-run overall impact on humanity of human level machine intelligence will be Extremely bad (existential catastrophe)”, assuming Human Level Machine Intelligence will at some point exist.",
+"title": "The probability that the long-run overall impact on humanity of human level machine intelligence will be Extremely bad (existential catastrophe), assuming Human Level Machine Intelligence will at some point exist.",
 "url": "https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit#gid=511918904",
 "probability": 0.18,
 "platform": "Survey of experts in the AI field",

@@ -15,9 +15,17 @@ The probability of the human race avoiding extinction for the next five centurie
 "Our present civilization on earth will survive to the end of the present century","https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit#gid=0","X-risk estimates","Actual estimate: ≤50% (""no better than fifty-fifty"")
 
 ","[{""name"":""Yes"",""probability"":0.5,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.5,""type"":""PROBABILITY""}]",,,2
-"There be zero living humans on planet earth on January 1, 2100","https://www.metaculus.com/questions/578/human-extinction-by-2100/","X-risk estimates","Actual estimate: Median: 1%. Mean: 8%.
+"There be zero living humans on planet earth on January 1, 2100","https://www.metaculus.com/questions/578/human-extinction-by-2100/","X-risk estimates","Actual estimate: Median: 1%. Mean: 7%.
 
-That median and mean is as of 3rd July 2019.","[{""name"":""Yes"",""probability"":0.08,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.92,""type"":""PROBABILITY""}]",,,2
+While the general feeling of most people, especially now that the cold war is (mostly) over, is that the risk of human extinction is extremely small, experts have assigned a significantly higher probability to the event.
+
+In 2008 an informal poll at the Global Catastrophic Risk Conference at the University of Oxford yielded a median probability of human extinction by 2100 of 19%. Yet, one might want to be cautious when using this result as a good estimate of the true probability of human extinction, as there may be a powerful selection effect at play. Only those who assign a high probability to human extinction are likely to attend the Global Catastrophic Risk Conference in the first place, meaning that the survey was effectively sampling opinions from one extreme tail of the opinion distribution on the subject. Indeed, the conference report itself stated that the findings should be taken 'with a grain of salt'..
+
+Therefore, it is asked: will there be zero living humans on planet earth on January 1, 2100?.
+
+For these purposes we'll define humans as biological creatures who have as their ancestors – via a chain of live births from mothers – circa 2000 humans OR who could mate with circa 2000 humans to produce viable offspring. (So AIs, ems, genetically engineered beings of a different species brought up in artificial wombs, etc. would not count.).
+
+N.B. Even though it is obviously the case that if human extinction occurs Metaculus points won't be very valuable anymore and that it will be practically impossible to check for true human extinction (zero humans left), I would like to ask people not to let this fact influence their prediction and to predict in good faith.","[{""name"":""Yes"",""probability"":0.08,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.92,""type"":""PROBABILITY""}]",,,2
 "Existential disaster will do us in","https://www.nickbostrom.com/existential/risks.html","X-risk estimates","Actual estimate: Probably at or above 25%
 
 ","[{""name"":""Yes"",""probability"":0.25,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.75,""type"":""PROBABILITY""}]",,,2
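For reference, the CSV file changed in this and the following hunks stores each record across several physical lines (description cells contain blank lines), and the options column is a JSON string inside a quoted CSV field, with doubled "" marks as standard CSV quote escaping. A small Python sketch (hypothetical, not part of the repository) showing how such a row parses, using one unchanged record from the hunk above:

import csv, io, json

# One record from the file above (multi-line cell; "" is CSV quote escaping).
row_text = '''"Existential disaster will do us in","https://www.nickbostrom.com/existential/risks.html","X-risk estimates","Actual estimate: Probably at or above 25%

","[{""name"":""Yes"",""probability"":0.25,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.75,""type"":""PROBABILITY""}]",,,2
'''

row = next(csv.reader(io.StringIO(row_text)))
title, url, platform, description, options_json = row[:5]
options = json.loads(options_json)  # the options column is itself JSON
print(title, options[0]["probability"])  # -> Existential disaster will do us in 0.25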
@@ -38,7 +46,7 @@ I think it's fairly likely(>20%) that sentient life will survive for at least bi
 
 ","[{""name"":""Yes"",""probability"":0.1,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.9,""type"":""PROBABILITY""}]",,,2
 "Human extinction by 2100 as a result of superintelligent AI","https://www.fhi.ox.ac.uk/reports/2008-1.pdf","X-risk estimates","This is the median. Beard et al.'s appendix says ""Note that for these predictions no time frame was given."" I think that that's incorrect, based on phrasings in the original source, but I'm not certain.","[{""name"":""Yes"",""probability"":0.05,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.95,""type"":""PROBABILITY""}]",,,2
-"Extremely bad (e.g. extinction)” long-run impact on humanity from “high-level machine intelligence","https://arxiv.org/abs/1705.08807","X-risk estimates","The report's authors discuss potential concerns around non-response bias and the fact that “NIPS and ICML authors are representative of machine learning but not of the field of artificial intelligence as a whole”. There was also evidence of apparent inconsistencies in estimates of AI timelines as a result of small changes to how questions were asked, providing further reason to wonder how meaningful these experts’ predictions were. https://web.archive.org/web/20171030220008/https://aiimpacts.org/some-survey-results/","[{""name"":""Yes"",""probability"":0.05,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.95,""type"":""PROBABILITY""}]",,,2
+"Extremely bad (e.g. extinction) long-run impact on humanity from “high-level machine intelligence","https://arxiv.org/abs/1705.08807","X-risk estimates","The report's authors discuss potential concerns around non-response bias and the fact that “NIPS and ICML authors are representative of machine learning but not of the field of artificial intelligence as a whole”. There was also evidence of apparent inconsistencies in estimates of AI timelines as a result of small changes to how questions were asked, providing further reason to wonder how meaningful these experts’ predictions were. https://web.archive.org/web/20171030220008/https://aiimpacts.org/some-survey-results/","[{""name"":""Yes"",""probability"":0.05,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.95,""type"":""PROBABILITY""}]",,,2
 "A state where civilization collapses and does not recover, or a situation where all human life ends, due to AI","https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit#gid=0","X-risk estimates","Actual estimate: 0-10%
 
 ","[{""name"":""Yes"",""probability"":0.05,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.95,""type"":""PROBABILITY""}]",,,2
@@ -63,7 +71,7 @@ Stated verbally during an interview. Not totally clear precisely what was being
 He also says ""I made up 10%, it’s kind of a random number."" And ""All of the numbers I’m going to give are very made up though. If you asked me a second time you’ll get all different numbers.","[{""name"":""Yes"",""probability"":0.01,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.99,""type"":""PROBABILITY""}]",,,2
 "Existential catastrophe happening this century (maybe just from AI?)","https://youtu.be/aFAI8itZCGk?t=854","X-risk estimates","Actual estimate: 33-50%
 
-This comes from a verbal interview (from the 14:14 mark). The interview was focused on AI, and this estimate may have been as well. Tallinn said he's not very confident, but is fairly confident his estimate would be in double-digits, and then said ""two obvious Schelling points"" are 33% or 50%, so he'd guess somewhere in between those. Other comments during the interview seem to imply Tallinn is either just talking about extinction risk or thinks existential risk happens to be dominated by extinction risk.","[{""name"":""Yes"",""probability"":41.5,""type"":""PROBABILITY""},{""name"":""No"",""probability"":-40.5,""type"":""PROBABILITY""}]",,,2
+This comes from a verbal interview (from the 14:14 mark). The interview was focused on AI, and this estimate may have been as well. Tallinn said he's not very confident, but is fairly confident his estimate would be in double-digits, and then said ""two obvious Schelling points"" are 33% or 50%, so he'd guess somewhere in between those. Other comments during the interview seem to imply Tallinn is either just talking about extinction risk or thinks existential risk happens to be dominated by extinction risk.","[{""name"":""Yes"",""probability"":0.415,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.585,""type"":""PROBABILITY""}]",,,2
 "Existential catastrophe from engineered pandemics by 2120","https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit#gid=0","X-risk estimates","Actual estimate: ~3% (~1 in 30)
 
 ","[{""name"":""Yes"",""probability"":0.03,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.97,""type"":""PROBABILITY""}]",,,2
@@ -101,14 +109,14 @@ This is the median. Beard et al.'s appendix says ""Note that for these predictio
 See this post for some commentary: [Some thoughts on Toby Ord’s existential risk estimates](https://forum.effectivealtruism.org/posts/Z5KZ2cui8WDjyF6gJ/my-thoughts-on-toby-ord-s-existential-risk-estimates#_Unforeseen__and__other__anthropogenic_risks__Surprisingly_risky_)","[{""name"":""Yes"",""probability"":0.02,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.98,""type"":""PROBABILITY""}]",,,2
 "Total existential risk by 2120 if we just carry on as we are, with business as usual (which Ord doesn't expect us to do)","https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/#estimates-for-specific-x-risks-000810","X-risk estimates","Actual estimate: ~33% (""about one in three"")
 
-Ord: """"one in six is my best guess as to the chance [an existential catastrophe] happens [by 2120]. That’s not a business as usual estimate. Whereas I think often people are assuming that estimates like this are, if we just carry on as we are, what’s the chance that something will happen?
+Ord: ""one in six is my best guess as to the chance [an existential catastrophe] happens [by 2120]. That’s not a business as usual estimate. Whereas I think often people are assuming that estimates like this are, if we just carry on as we are, what’s the chance that something will happen?
 
 My best guess for that is actually about one in three this century. If we carry on mostly ignoring these risks with humanity’s escalating power during the century and some of these threats being very serious. But I think that there’s a good chance that we will rise to these challenges and do something about them. So you could think of my overall estimate as being something like Russian roulette, but my initial business as usual estimate being there’s something like two bullets in the chamber of the gun, but then we’ll probably remove one and that if we really got our act together, we could basically remove both of them. And so, in some sense, maybe the headline figure should be one in three being the difference between the business as usual risk and how much of that we could eliminate if we really got our act together.""""
 
 Arden Koehler replies """"Okay. So business as usual means doing what we are approximately doing now extrapolated into the future but we don’t put much more effort into it as opposed to doing nothing at all?""""
 
 Ord replies: """"That’s right, and it turns out to be quite hard to define business as usual. That’s the reason why, for my key estimate, that I make it… In some sense, it’s difficult to define estimates where they take into account whether or not people follow the advice that you’re giving; that introduces its own challenges. But at least that’s just what a probability normally means. It means that your best guess of the chance something happens, whereas a best guess that something happens conditional upon certain trends either staying at the same level or continuing on the same trajectory or something is just quite a bit more unclear as to what you’re even talking about.""""","[{""name"":""Yes"",""probability"":0.33,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.6699999999999999,""type"":""PROBABILITY""}]",,,2
-"The probability that the long-run overall impact on humanity of human level machine intelligence will be Extremely bad (existential catastrophe)”, assuming Human Level Machine Intelligence will at some point exist.","https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit#gid=511918904","X-risk estimates","This is the mean. According to Beard et al., the question was ""4. Assume for the purpose of this question that such Human Level Machine Intelligence (HLMI) will at some point exist. How positive or negative would be overall impact on humanity, in the long run?","[{""name"":""Yes"",""probability"":0.18,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.8200000000000001,""type"":""PROBABILITY""}]",,,2
+"The probability that the long-run overall impact on humanity of human level machine intelligence will be Extremely bad (existential catastrophe), assuming Human Level Machine Intelligence will at some point exist.","https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit#gid=511918904","X-risk estimates","This is the mean. According to Beard et al., the question was ""4. Assume for the purpose of this question that such Human Level Machine Intelligence (HLMI) will at some point exist. How positive or negative would be overall impact on humanity, in the long run?","[{""name"":""Yes"",""probability"":0.18,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.8200000000000001,""type"":""PROBABILITY""}]",,,2
 "Chance that AI, through “adversarial optimization against humans only”, will cause existential catastrophe, conditional on there not being “additional intervention by longtermists” (or perhaps “no intervention from longtermists”)","https://www.lesswrong.com/posts/TdwpN484eTbPSvZkm/rohin-shah-on-reasons-for-ai-optimism","X-risk estimates","Actual estimate: ~10%
 
 This is my interpretation of some comments that may not have been meant to be taken very literally. I think he updated this in 2020 to ~15%, due to pessimism about discontinuous scenarios: https://www.lesswrong.com/posts/TdwpN484eTbPSvZkm/rohin-shah-on-reasons-for-ai-optimism?commentId=n577gwGB3vRpwkBmj Rohin also discusses his estimates here: https://futureoflife.org/2020/04/15/an-overview-of-technical-ai-alignment-in-2018-and-2019-with-buck-shlegeris-and-rohin-shah/","[{""name"":""Yes"",""probability"":0.1,""type"":""PROBABILITY""},{""name"":""No"",""probability"":0.9,""type"":""PROBABILITY""}]",,,2

@@ -123,8 +123,8 @@
 "title": "There be zero living humans on planet earth on January 1, 2100",
 "url": "https://www.metaculus.com/questions/578/human-extinction-by-2100/",
 "platform": "X-risk estimates",
-"author": "Metaculus responders (~)",
-"description": "Actual estimate: Median: 1%. Mean: 8%.\n\nThat median and mean is as of 3rd July 2019.",
+"author": "Metaculus responders (~2021)",
+"description": "Actual estimate: Median: 1%. Mean: 7%.\n\nWhile the general feeling of most people, especially now that the cold war is (mostly) over, is that the risk of human extinction is extremely small, experts have assigned a significantly higher probability to the event.\n\nIn 2008 an informal poll at the Global Catastrophic Risk Conference at the University of Oxford yielded a median probability of human extinction by 2100 of 19%. Yet, one might want to be cautious when using this result as a good estimate of the true probability of human extinction, as there may be a powerful selection effect at play. Only those who assign a high probability to human extinction are likely to attend the Global Catastrophic Risk Conference in the first place, meaning that the survey was effectively sampling opinions from one extreme tail of the opinion distribution on the subject. Indeed, the conference report itself stated that the findings should be taken 'with a grain of salt'..\n\nTherefore, it is asked: will there be zero living humans on planet earth on January 1, 2100?.\n\nFor these purposes we'll define humans as biological creatures who have as their ancestors – via a chain of live births from mothers – circa 2000 humans OR who could mate with circa 2000 humans to produce viable offspring. (So AIs, ems, genetically engineered beings of a different species brought up in artificial wombs, etc. would not count.).\n\nN.B. Even though it is obviously the case that if human extinction occurs Metaculus points won't be very valuable anymore and that it will be practically impossible to check for true human extinction (zero humans left), I would like to ask people not to let this fact influence their prediction and to predict in good faith.",
 "options": [
 {
 "name": "Yes",
@@ -300,7 +300,7 @@
 "stars": 2
 },
 {
-"title": "Extremely bad (e.g. extinction)” long-run impact on humanity from “high-level machine intelligence",
+"title": "Extremely bad (e.g. extinction) long-run impact on humanity from “high-level machine intelligence",
 "url": "https://arxiv.org/abs/1705.08807",
 "platform": "X-risk estimates",
 "author": "Survey of AI experts (~2017)",
@@ -488,12 +488,12 @@
 "options": [
 {
 "name": "Yes",
-"probability": 41.5,
+"probability": 0.415,
 "type": "PROBABILITY"
 },
 {
 "name": "No",
-"probability": -40.5,
+"probability": 0.585,
 "type": "PROBABILITY"
 }
 ],
@@ -764,7 +764,7 @@
 "url": "https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/#estimates-for-specific-x-risks-000810",
 "platform": "X-risk estimates",
 "author": "Toby Ord (~2020)",
-"description": "Actual estimate: ~33% (\"about one in three\")\n\nOrd: \"\"one in six is my best guess as to the chance [an existential catastrophe] happens [by 2120]. That’s not a business as usual estimate. Whereas I think often people are assuming that estimates like this are, if we just carry on as we are, what’s the chance that something will happen?\n\nMy best guess for that is actually about one in three this century. If we carry on mostly ignoring these risks with humanity’s escalating power during the century and some of these threats being very serious. But I think that there’s a good chance that we will rise to these challenges and do something about them. So you could think of my overall estimate as being something like Russian roulette, but my initial business as usual estimate being there’s something like two bullets in the chamber of the gun, but then we’ll probably remove one and that if we really got our act together, we could basically remove both of them. And so, in some sense, maybe the headline figure should be one in three being the difference between the business as usual risk and how much of that we could eliminate if we really got our act together.\"\"\n\nArden Koehler replies \"\"Okay. So business as usual means doing what we are approximately doing now extrapolated into the future but we don’t put much more effort into it as opposed to doing nothing at all?\"\"\n\nOrd replies: \"\"That’s right, and it turns out to be quite hard to define business as usual. That’s the reason why, for my key estimate, that I make it… In some sense, it’s difficult to define estimates where they take into account whether or not people follow the advice that you’re giving; that introduces its own challenges. But at least that’s just what a probability normally means. It means that your best guess of the chance something happens, whereas a best guess that something happens conditional upon certain trends either staying at the same level or continuing on the same trajectory or something is just quite a bit more unclear as to what you’re even talking about.\"\"",
+"description": "Actual estimate: ~33% (\"about one in three\")\n\nOrd: \"one in six is my best guess as to the chance [an existential catastrophe] happens [by 2120]. That’s not a business as usual estimate. Whereas I think often people are assuming that estimates like this are, if we just carry on as we are, what’s the chance that something will happen?\n\nMy best guess for that is actually about one in three this century. If we carry on mostly ignoring these risks with humanity’s escalating power during the century and some of these threats being very serious. But I think that there’s a good chance that we will rise to these challenges and do something about them. So you could think of my overall estimate as being something like Russian roulette, but my initial business as usual estimate being there’s something like two bullets in the chamber of the gun, but then we’ll probably remove one and that if we really got our act together, we could basically remove both of them. And so, in some sense, maybe the headline figure should be one in three being the difference between the business as usual risk and how much of that we could eliminate if we really got our act together.\"\"\n\nArden Koehler replies \"\"Okay. So business as usual means doing what we are approximately doing now extrapolated into the future but we don’t put much more effort into it as opposed to doing nothing at all?\"\"\n\nOrd replies: \"\"That’s right, and it turns out to be quite hard to define business as usual. That’s the reason why, for my key estimate, that I make it… In some sense, it’s difficult to define estimates where they take into account whether or not people follow the advice that you’re giving; that introduces its own challenges. But at least that’s just what a probability normally means. It means that your best guess of the chance something happens, whereas a best guess that something happens conditional upon certain trends either staying at the same level or continuing on the same trajectory or something is just quite a bit more unclear as to what you’re even talking about.\"\"",
 "options": [
 {
 "name": "Yes",
@@ -780,7 +780,7 @@
 "stars": 2
 },
 {
-"title": "The probability that the long-run overall impact on humanity of human level machine intelligence will be Extremely bad (existential catastrophe)”, assuming Human Level Machine Intelligence will at some point exist.",
+"title": "The probability that the long-run overall impact on humanity of human level machine intelligence will be Extremely bad (existential catastrophe), assuming Human Level Machine Intelligence will at some point exist.",
 "url": "https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit#gid=511918904",
 "platform": "X-risk estimates",
 "author": "Survey of experts in the AI field (~2016)",