Compare commits

...

19 Commits

@@ -0,0 +1,16 @@
Here are some links to those who, through no fault of their own, I consider friends of the blog:
https://gavinhoward.com
https://maia.crimew.gay
https://www.themotte.org/
https://www.gleech.org/
https://www.askell.blog/
https://sebastiano.tronto.net/
https://hindenburgresearch.com/
https://blog.tinfoil-hat.net
https://acesounderglass.com/
http://annas-blog.org/
https://cadence.moe/
https://suckless.org/atom.xml
https://niplav.site/services.html
https://philiptrammell.com/blog/

@@ -1,14 +1,6 @@
<!--
Send me an email to list@nunosempere.com with subject "Subscribe to blog" and your name.
There previously was a form here, but I think someone was inputting random emails, so that's it for now.
-->
<form method="post" action="https://list.nunosempere.com/subscription/form" class="listmonk-form">
<div>
<h3>Subscribe</h3>
<h3>Sign up</h3>
<input type="hidden" name="nonce" />
<p><input type="email" name="email" required placeholder="E-mail" class="subscribe-input"/></p>
<p><input type="text" name="name" placeholder="Name (helps me filter out malicious entries)" class="subscribe-input"/></p>
@@ -23,5 +15,5 @@ There previously was a form here, but I think someone was inputting random email
</form>
<p>
The reason why I am asking for subscribers' names is explained <a href="https://nunosempere.com/.subscribe/why-name">here</a>. Frequency is roughly once a week.
The reason why I am asking for subscribers' names is explained <a href="https://nunosempere.com/.newsletter/why-name">here</a>. Frequency is roughly once a week.
</p>
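For context: the form above is a plain listmonk subscription form, so the same endpoint can be exercised from a script. Below is a minimal sketch in Python (standard library only); the email and name values are placeholders, the hidden nonce field appears to be an anti-bot check and is left blank exactly as the form leaves it, and listmonk forms typically also carry a list-selection field ("l") that is elided in this hunk, so a real request may need it:

```python
# Minimal sketch: POST the same form-encoded fields the HTML form submits.
# Placeholder values; the list-selection field ("l") from the full form is omitted here.
import urllib.parse
import urllib.request

payload = urllib.parse.urlencode({
    "email": "reader@example.com",   # placeholder address
    "name": "Example Reader",        # the name helps filter out malicious entries
    "nonce": "",                     # hidden field in the form; left blank, as the form does
}).encode()

request = urllib.request.Request(
    "https://list.nunosempere.com/subscription/form",  # the form's action URL
    data=payload,  # supplying data makes this a POST, matching method="post"
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.read(200))
```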

@@ -0,0 +1,241 @@
Death toll (estimate),Location,Date,Event,Disease,Link,Covid reference class,Did it become endemic?,"Was there a second, third, or fourth wave?/did it last more than one year?",Did it become endemic? (numeric),"Was there a second, third, or fourth wave?/did it last more than one year? (numerical)",Notes,,,,,,
Unknown,"Babylon, or Babirus of the Persians, Central Asia, Mesopotamia and Southern Asia",1200 BC,influenza epidemic,Indian Sanskrit scholars found records of a disease resembling the Flu.,ø,,ø,ø,,,ø,,,,,,
"75,000–100,000","Greece, Libya, Egypt, Ethiopia",429–426 BC,Plague of Athens,"Unknown, possibly typhus, typhoid fever or viral hemorrhagic fever",https://en.wikipedia.org/wiki/Plague_of_Athens,Unclear,0,Yes,0,1,"In overcrowded Athens, the disease killed an estimated 25% of the population. The plague returned twice more, in 429 BC and in the winter of 427/426 BC.",,,,,,
Unknown,"Greece (Northern Greece, Roman Republic)",412 BC,412 BC epidemic,"Unknown, possibly influenza",https://en.wikipedia.org/wiki/412_BC_epidemic,Unclear,0,Yes,0,1,,,,,,,
5–10 million,Roman Empire,165–180 (possibly up to 190),Antonine Plague,"Unknown, possibly smallpox",https://en.wikipedia.org/wiki/Antonine_Plague,Unclear,0,Yes,0,1,,,,,,,
"1 million+ (Unknown, but at least)",Europe,250–266,Plague of Cyprian,"Unknown, possibly smallpox",https://en.wikipedia.org/wiki/Plague_of_Cyprian,Unclear,0,Yes,0,1,,,,,,,
25–100 million; 40–50% of population of Europe,Europe and West Asia,541–542,Plague of Justinian,Plague,https://en.wikipedia.org/wiki/Plague_of_Justinian,Unclear,0,Yes,0,1,,,,,,,
,British Isles,664–689,Plague of 664,Plague,https://en.wikipedia.org/wiki/Plague_of_664,Unclear,0,Yes,0,1,,,,,,,
,"Byzantine Empire, West Asia, Syria, Mesopotamia",698–701,Plague of 698–701,Plague,,,,,,,,,,,,,
2 million (Approx. 1/3 of entire Japanese population),Japan,735–737,735–737 Japanese smallpox epidemic,Smallpox,https://en.wikipedia.org/wiki/735%E2%80%93737_Japanese_smallpox_epidemic,Unclear,Yes,Yes,1,1,,,,,,,
,"Byzantine Empire, West Asia, Africa",746–747,Plague of 746–747,Plague,,,,,,,,,,,,,
75–200 million (10–60% of European population),"Europe, Asia and North Africa",1346–1353,Black Death,Plague Y. pestis,https://en.wikipedia.org/wiki/Black_Death,Yes,Yes,Yes,1,1,"The physician to the Avignon Papacy, Raimundo Chalmel de Vinario (Latin: Magister Raimundus, lit. 'Master Raymond'), observed the decreasing mortality rate of successive outbreaks of plague in 1347-8, 1362, 1371, and 1382 in his 1382 treatise On Epidemics (De epidemica).[99] In the first outbreak, two thirds of the population contracted the illness and most patients died; in the next, half the population became ill but only some died; by the third, a tenth were affected and many survived; while by the fourth occurrence, only one in twenty people were sickened and most of them survived",,,,,,
"10,000+",Britain (England) and later continental Europe,1485–1551,Sweating sickness (multiple outbreaks),"Unknown, possibly an unknown species of hantavirus",https://en.wikipedia.org/wiki/Sweating_sickness,Unclear,No,Yes,0,1,,,,,,,
Unknown Around 1% of those infected,"Asia, North Africa, Europe",1510,1510 Influenza pandemic,Influenza,https://en.wikipedia.org/wiki/1510_influenza_pandemic,Yes,No,No,0,0,"a mortality rate of around 1%. Fernel and Paré suggest that the 1510 influenza ""spread to almost all countries of the world"" with the exception of the New World",,,,,,
5–8 million (40% of population),Mexico,1520,1520 Smallpox Epidemic,Smallpox,https://en.wikipedia.org/wiki/History_of_smallpox_in_Mexico,Unclear,Yes,Yes,1,1,,,,,,,
5–15 million (80% of population),Mexico,1545–1548,Cocoliztli Epidemic of 1545–1548,Possibly Salmonella enterica,https://en.wikipedia.org/wiki/Cocoliztli_epidemics,Unclear,No,Yes,0,1,,,,,,,
"20,100+ in London",London,1563–1564,1563 London plague,Plague,https://en.wikipedia.org/wiki/1563_London_plague,Unclear,No,No,0,0,Radical measures taken,,,,,,
2–2.5 million (50% of population),Mexico,1576–1580,Cocoliztli epidemic of 1576,Possibly Salmonella enterica,https://en.wikipedia.org/wiki/Cocoliztli_epidemics,Unclear,No,Yes,0,1,,,,,,,
,Seneca nation,1592–1596,,Measles,,,,,,,,,,,,,
3000,Malta,1592–1593,1592–93 Malta plague epidemic,Plague,https://en.wikipedia.org/wiki/1592%E2%80%931593_Malta_plague_epidemic,Unclear,No,Yes,0,1,These measures [were] enforced with harsh penalties including flogging and death,,,,,,
"19,900+ in London and outer parishes",London,1592–1593,1592–93 London plague,Plague,https://en.wikipedia.org/wiki/1592%E2%80%931593_London_plague,Unclear,No,Yes,0,1,,,,,,,
"600,000 to 700,000",Spain,1596–1602,,Plague,https://libro.uca.edu/payne1/payne15.htm,Unclear,No,Yes,0,1,,,,,,,
,South America,1600–1650,,Malaria,ø,Unclear,Unclear,Yes,0.5,1,,,,,,,
,England,1603,,Plague,ø,Unclear,No,No,0,0,,,,,,,
1 million (Britannica),Egypt,1609,,Plague,ø,Unclear,No,No,0,0,,,,,,,
Unknown: estimated 30–90% of population,"Southern New England, especially the Wampanoag people",1616–1620,1616 New England epidemic,"Unknown cause. Latest research suggests epidemic(s) of leptospirosis with Weil syndrome. Classic explanations include yellow fever, bubonic plague, influenza, smallpox, chickenpox, typhus, and syndemic infection of hepatitis B and hepatitis D.",ø,Unclear,No,Yes,0,1,,,,,,,
280000,Italy,1629–1631,Italian plague of 1629–1631,Plague,https://en.wikipedia.org/wiki/1629%E2%80%931631_Italian_plague,Unclear,No,Yes,0,1,A major outbreak in March 1630 was due to relaxed health measures during the carnival season,,,,,,
"15,000–25,000",Wyandot people,1634,,Smallpox,,,,,,,,,,,,,
,Thirteen Colonies,1633,Massachusetts smallpox epidemic,Smallpox,https://en.wikipedia.org/wiki/Massachusetts_smallpox_epidemic,Unclear,Yes,Yes,1,1,,,,,,,
,England,1636,,Plague,,,,,,,,,,,,,
,China,1641–1644,,Plague,,,,,,,,,,,,,
"600,000 to 700,000",Spain,1647–1652,Great Plague of Seville,Plague,https://en.wikipedia.org/wiki/Great_Plague_of_Seville,Unclear,No,Yes,0,1,"In Seville, quarantine measures were evaded, ignored, unproposed and/or unenforced[citation needed]. The results were devastating",,,,,,
,Central America,1648,,Yellow fever,,,,,,,,,,,,,
1250000,Italy,1656,Naples Plague,Plague,https://en.wikipedia.org/wiki/Naples_Plague,Unclear,No,No,0,0,,,,,,,
,Thirteen Colonies,1657,,Measles,,,,,,,,,,,,,
24148,Netherlands,1663–1664,,Plague,,,,,,,,,,,,,
100000,England,1665–1666,Great Plague of London,Plague,https://en.wikipedia.org/wiki/Great_Plague_of_London,Unclear,No,,0,,"Two suspicious deaths were recorded in St. Giles parish in 1664 and another in February 1665. These did not appear as plague deaths on the Bills of Mortality, so no control measures were taken by the authorities, but the total number of people dying in London during the first four months of 1665 showed a marked increase. By the end of April, only four plague deaths had been recorded, two in the parish of St. Giles, but total deaths per week had risen from around 290 to 398",Tobacco was thought to be a prophylactic and it was later said that no London tobacconist had died from the plague during the epidemic,,,,,
40000,France,1668,,Plague,,,,,,,,,,,,,
11300,Malta,1675–1676,1675–76 Malta plague epidemic,Plague,https://en.wikipedia.org/wiki/1675%E2%80%931676_Malta_plague_epidemic,Unclear,No,Yes,0,1,"Some people disputed the cause of the disease, and the doctor Giuseppe del Cosso insisted that it was not plague but a malignant pricking disease.[6] Many went about their daily lives as usual, and this is believed to be a factor which resulted in such a high death toll.[2] It was only after various European physicians gave their opinions that it was plague that strict containment measures were enforced, but by then it was too late",,,,,,
,Spain,1676–1685,,Plague,,,,,,,,,,,,,
76000,Austria,1679,Great Plague of Vienna,Plague,https://en.wikipedia.org/wiki/Great_Plague_of_Vienna,Unclear,No,No,0,0,,,,,,,
,South Africa,1687,,"Unknown, possibly Influenza",,,,,,,,,,,,,
,Thirteen Colonies,1687,,Measles,,,,,,,,,,,,,
,Thirteen Colonies,1690,,Yellow fever,,,,,,,,,,,,,
,"Canada, New France",1702–1703,,Smallpox,,,,,,,,,,,,,
"18,000+ (36% of population)",Iceland,1707–1709,Great Smallpox Epidemic,Smallpox,,,,,,,,,,,,,
164000,"Denmark, Sweden, Lithuania",1710–1712,Great Northern War plague outbreak,Plague,https://en.wikipedia.org/wiki/Great_Northern_War_plague_outbreak,Unclear,No,Yes,0,1,"While at first the city authorities downplayed the plague, which had reached a peak in early October and then declined, this approach was abandoned when the death toll again started to rise significantly in November","In November 1709, when the Prussian king Frederick I returned to Berlin from a meeting with Russian tsar Peter the Great, the king had a strange encounter with his mentally deranged wife Sophia Louise, who in a white dress and with bloody hands pointed at him saying that the plague would devour the king of Babylon.[48] As there was a legend of a White Lady foretelling the deaths of the Hohenzollern, Frederick took his wife's outburst seriously[49] and ordered that precautions be taken for his residence city.[50] Among other measures, he ordered the construction of a pest house outside the city walls, the Berlin Charité.[50]","In June 1710, most probably via a ship from Pernau, the plague arrived in Stockholm, where the health commission (Collegium Medicum) until 29 August denied that it was indeed the plague, despite buboes being visible on the bodies of victims from the ship and in the town","While Scania was protected from an infection from the north by a cordon sanitaire between it and Småland, the plague came by sea[94] and made landfall not only in Västanå, but also in January 1711 in Domsten in Allerum parish, where the locals had ignored the ban on contact with their relatives and friends on the Danish side of the Sound, most notably in the infected area around Helsingør (Elsinore); the third starting point for the plague in Scania was Ystad, where on 19 June an infected soldier arrived from Swedish Pomerania.[93] The plague remained in Scania until 1713, probably 1714",,,
,Thirteen Colonies,1713–1715,,Measles,,,,,,,,,,,,,
,"Canada, New France",1714–1715,,Measles,,,,,,,,,,,,,
"100,000+",France,1720–1722,Great Plague of Marseille,Plague,https://en.wikipedia.org/wiki/Great_Plague_of_Marseille,Unclear,No,Yes,0,1,,,,,,,
844,Massachusetts Bay Colony,1721–1722,1721 Boston smallpox outbreak,Smallpox,https://en.wikipedia.org/wiki/1721_Boston_smallpox_outbreak,Unclear,No,No,0,0,Early experiments with variolation,,,,,,
,Thirteen Colonies,1729,,Measles,,,,,,,,,,,,,
,Spain,1730,,Yellow fever,,,,,,,,,,,,,
,Thirteen Colonies,1732–1733,,Influenza,,,,,,,,,,,,,
,"Canada, New France",1733,,Smallpox,,,,,,,,,,,,,
50000,Balkans,1738,Great Plague of 1738,Plague,https://en.wikipedia.org/wiki/Great_Plague_of_1738,Unclear,No,Yes,0,1,,,,,,,
,Thirteen Colonies,1738,,Smallpox,,,,,,,,,,,,,
,Thirteen Colonies,1739–1740,,Measles,,,,,,,,,,,,,
,Italy,1743,,Plague,,,,,,,,,,,,,
,Thirteen Colonies,1747,,Measles,,,,,,,,,,,,,
,North America,1755–1756,,Smallpox,,,,,,,,,,,,,
,North America,1759,,Measles,,,,,,,,,,,,,
,"North America, West Indies",1761,,Influenza,,,,,,,,,,,,,
,"North America, present-day Pittsburgh area.",1763,,Smallpox,,,,,,,,,,,,,
50000,Russia,1770–1772,Russian plague of 1770–1772,Plague,https://en.wikipedia.org/wiki/1770%E2%80%931772_Russian_plague,Unclear,No,Yes,0,1,"Commanding general Christopher von Stoffeln coerced army doctors to conceal the outbreak, which was not made public until Gustav Orreus, a Russian-Finnish surgeon reporting directly to Field Marshal Pyotr Rumyantsev, examined the situation, identified it as plague and enforced quarantine in the troops. Shtoffeln, however, refused to evacuate the infested towns and himself fell victim to the plague in May 1770. Of 1,500 patients recorded in his troops in May–August 1770, only 300 survived",Politicking during the outbreak was followed by failure of containment. ,,,,,
,Pacific Northwest natives,1770s,,Smallpox,,,,,,,,,,,,,
,North America,1772,,Measles,,,,,,,,,,,,,
2 million+,Persia,1772,Persian Plague,Plague,https://en.wikipedia.org/wiki/1772%E2%80%931773_Persian_Plague,Unclear,No,No,0,0,,,,,,,
,England,1775–1776,,Influenza,,,,,,,,,,,,,
,Spain,1778,,Dengue fever,,,,,,,,,,,,,
,Plains Indians,1780–1782,North American smallpox epidemic,Smallpox,https://en.wikipedia.org/wiki/1775%E2%80%931782_North_American_smallpox_epidemic,Unclear,No,Yes,0,1,Examples of inoculation,,,,,,
,Pueblo Indians,1788,,Smallpox,,,,,,,,,,,,,
,United States,1788,,Measles,,,,,,,,,,,,,
,"New South Wales, Australia",1789–1790,,Smallpox,,,,,,,,,,,,,
,United States,1793,,Influenza and epidemic typhus,,,,,,,,,,,,,
"5,000+",United States,1793–1798,"Yellow Fever Epidemic of 1793, resurgences",Yellow fever,https://en.wikipedia.org/wiki/1793_Philadelphia_yellow_fever_epidemic,Unclear,No,No,0,0,,,,,,,
,Spain,1800–1803,,Yellow fever,,,,,,,,,,,,,
,"Ottoman Empire, Egypt",1801,,Bubonic plague,,,,,,,,,,,,,
,United States,1803,,Yellow fever,,,,,,,,,,,,,
,Egypt,1812,,Plague,,,,,,,,,,,,,
"300,000+",Ottoman Empire,1812–1819,1812–19 Ottoman plague epidemic,Plague,https://en.wikipedia.org/wiki/1812%E2%80%931819_Ottoman_plague_epidemic,Unclear,No,Yes,0,1,,,,,,,
4500,Malta,1813–1814,1813–14 Malta plague epidemic,Plague,https://en.wikipedia.org/wiki/1813%E2%80%931814_Malta_plague_epidemic,Yes,,,,,Low mortality,Disease spread by smugglers,,,,,
60000,Romania,1813,Caragea's plague,Plague,https://en.wikipedia.org/wiki/Caragea%27s_plague,Unclear,No,No,0,0,,,,,,,
,Ireland,1816–1819,,Typhus,,,,,,,,,,,,,
"100,000+","Asia, Europe",1816–1826,First cholera pandemic,Cholera,https://en.wikipedia.org/wiki/1817%E2%80%931824_cholera_pandemic,Yes,No,Yes,0,1,First cholera pandemic,"Unclear whether there were many waves, or whether it spread from a center. Historians believe that the first pandemic had lingered in Indonesia and the Philippines in 1830.","Cholera was endemic to the lower Ganges River.[1] At festival times, pilgrims frequently contracted the disease there and carried it back to other parts of India on their returns, where it would spread, then subside. The first cholera pandemic started similarly, as an outbreak that was suspected to have begun in 1817 in the town of Jessore.[3] Some epidemiologists and medical historians have suggested that it spread globally through a Hindu pilgrimage, the Kumbh Mela, on the upper Ganges River",,,,
,United States,1820–1823,,Yellow fever,,,,,,,,,,,,,
,Spain,1821,,Yellow fever,,,,,,,,,,,,,
,"New South Wales, Australia",1828,,Smallpox,,,,,,,,,,,,,
2800,Netherlands,1829,Groningen epidemic,Malaria,https://en.wikipedia.org/wiki/Groningen_epidemic,Unclear,No,No,0,0,,,,,,,
,South Australia,1829,,Smallpox,,,,,,,,,,,,,
,Iran,1829–1835,,Bubonic plague,,,,,,,,,,,,,
"100,000+","Asia, Europe, North America",1829–1851,Second cholera pandemic,Cholera,https://en.wikipedia.org/wiki/1826%E2%80%931837_cholera_pandemic,Yes,No,Yes,0,1,First cholera pandemic for Europeans,"Like the earlier pandemics, cholera spread from the Ganges Delta of India",,,,,
,Egypt,1831,,Cholera,,,,,,,,,,,,,
,Plains Indians,1831–1834,,Smallpox,,,,,,,,,,,,,
,"England, France",1832,,Cholera,,,,,,,,,,,,,
,North America,1832,,Cholera,,,,,,,,,,,,,
,United States,1833,,Cholera,,,,,,,,,,,,,
,United States,1834,,Cholera,,,,,,,,,,,,,
,Egypt,1834–1836,,Bubonic plague,,,,,,,,,,,,,
,United States,1837,,Typhus,,,,,,,,,,,,,
"17,000+",Great Plains,1837–1838,1837–38 smallpox epidemic,Smallpox,https://en.wikipedia.org/wiki/1837_Great_Plains_smallpox_epidemic,Yes,No,Yes,0,1,"the small-pox had never been known in the civilized world, as it had been among the poor Mandans and other Indians. Only twenty-seven Mandans were left to tell the tale",Smallpox may have been intentionally spread among the indigenous people of the Americas by colonizers. Culture War.,,,,,
,Dalmatia,1840,,Plague,,,,,,,,,,,,,
,South Africa,1840,,Smallpox,,,,,,,,,,,,,
,United States,1841,,Yellow fever,,,,,,,,,,,,,
"20,000+",Canada,1847–1848,Typhus epidemic of 1847,Epidemic typhus,https://en.wikipedia.org/wiki/1847_North_American_typhus_epidemic,No,No,No,0,0,,,,,,,
,United States,1847,,Yellow fever,,,,,,,,,,,,,
,Worldwide,1847–1848,,Influenza,,,,,,,,,,,,,
,Egypt,1848,,Cholera,,,,,,,,,,,,,
,North America,1848–1849,,Cholera,,,,,,,,,,,,,
,United States,1850,,Yellow fever,,,,,,,,,,,,,
,North America,1850–1851,,Influenza,,,,,,,,,,,,,
,United States,1851,,Cholera,,,,,,,,,,,,,
,United States,1852,,Yellow fever,,,,,,,,,,,,,
1 million+,Russia,1846–1860,Third cholera pandemic,Cholera,https://en.wikipedia.org/wiki/1846%E2%80%931860_cholera_pandemic,Unclear,No,Yes,0,1,,,,,,,
,Ottoman Empire,1853,,Plague,,,,,,,,,,,,,
4737,"Copenhagen, Denmark",1853,Cholera epidemic of Copenhagen 1853,Cholera,https://en.wikipedia.org/wiki/1853_Copenhagen_cholera_outbreak,Unclear,No,No,0,0,Changes made to Copenhagen afterwards,,,,,,
616,England,1854,Broad Street cholera outbreak,Cholera,,,,,,,,,,,,,
,United States,1855,,Yellow fever,,,,,,,,,,,,,
12 million+ in India and China alone,Worldwide,1855–1860,Third plague pandemic,Bubonic plague,https://en.wikipedia.org/wiki/Third_plague_pandemic,Unclear,Yes,Yes,1,1,"According to the World Health Organization, the pandemic was considered active until 1960, when worldwide casualties dropped to 200 per year",A natural reservoir or nidus for plague is in western Yunnan and is still an ongoing health risk,"The British colonial government in India pressed medical researcher Waldemar Haffkine to develop a plague vaccine. After three months of persistent work with a limited staff, a form for human trials was ready. On January 10, 1897 Haffkine tested it on himself. After the initial test was reported to the authorities, volunteers at the Byculla jail were used in a control test, all inoculated prisoners survived the epidemics, while seven inmates of the control group died. By the turn of the century, the number of inoculees in India alone reached four million. Haffkine was appointed the Director of the Plague Laboratory (now called Haffkine Institute) in Bombay",,,,
,Portugal,1857,,Yellow fever,,,,,,,,,,,,,
,"Victoria, Australia",1857,,Smallpox,,,,,,,,,,,,,
,"Europe, North America, South America",1857–1859,,Influenza,,,,,,,,,,,,,
"3,000+","Central Coast, British Columbia",1862–1863,,Smallpox,,,,,,,,,,,,,
600000,Middle East,1863–1875,Fourth cholera pandemic,Cholera,https://en.wikipedia.org/wiki/1863%E2%80%931875_cholera_pandemic,Unclear,No,Yes,0,1,,,,,,,
,Egypt,1865,,Cholera,,,,,,,,,,,,,
,"Russia, Germany",1866–1867,,Cholera,,,,,,,,,,,,,
,Australia,1867,,Measles,,,,,,,,,,,,,
,Iraq,1867,,Plague,,,,,,,,,,,,,
,Argentina,1852–1871,,Yellow fever,,,,,,,,,,,,,
,Germany,1870–1871,,Smallpox,,,,,,,,,,,,,
40000,Fiji,1875,1875 Fiji Measles outbreak,Measles,ø,Unclear,No,No,0,0,,,,,,,
,Russian Empire,1877,,Plague,,,,,,,,,,,,,
,Egypt,1881,,Cholera,,,,,,,,,,,,,
"9,000+","India, Germany",1881–1896,Fifth cholera pandemic,Cholera,https://en.wikipedia.org/wiki/1881%E2%80%931896_cholera_pandemic,Unclear,No,Yes,0,1,"Although many residents held the city government responsible for the virulence of the epidemic (leading to cholera riots in 1893[3]), it continued with practices largely unchanged","American author Mark Twain, an avid traveler, visited Hamburg during the cholera outbreak, and he described his experience in a short, uncollected piece dated ""1891–1892"". Therein, he notes alarmingly the lack of information in Hamburg newspapers about the cholera event, particularly death totals",,,,,
3164,Montreal,1885,,Smallpox,,,,,,,,,,,,,
1 million,Worldwide,1889–1890,1889–1890 flu pandemic,Influenza,https://en.wikipedia.org/wiki/1889%E2%80%931890_flu_pandemic,Yes,No,Yes,0,1,,,,,,,
,West Africa,1900,,Yellow fever,,,,,,,,,,,,,
,Congo Basin,1896–1906,,Trypanosomiasis,,,,,,,,,,,,,
"800,000+","Europe, Asia, Africa",1899–1923,Sixth cholera pandemic,Cholera,https://en.wikipedia.org/wiki/1899%E2%80%931923_cholera_pandemic,Unclear,No,Yes,0,1,,,,,,,
113,San Francisco,1900–1904,,Bubonic plague,,,,,,,,,,,,,
,Uganda,1900–1920,,Trypanosomiasis,,,,,,,,,,,,,
,Egypt,1902,,Cholera,,,,,,,,,,,,,
22,India,1903,,Bubonic Plague,,,,,,,,,,,,,
4,Fremantle,1903,,Bubonic plague,,,,,,,,,,,,,
60000,China,1910–1911,Manchurian plague,Pneumonic plague,https://en.wikipedia.org/wiki/Manchurian_plague,Unclear,No,No,0,0,"The Chinese government also sought the support of foreign doctors, a number of whom died as a consequence of the disease.[5] In Harbin, this included the Frenchman Gérald Mesny, from the Imperial Medical College in Tientsin, who disputed Wu's recommendation of masks; a few days later, he died after catching the plague when visiting patients without wearing a mask",,,,,,
40000,China,1910–1912,1910 China plague,Bubonic plague,,,,,,,,,,,,,
1.5 million,Worldwide,1915–1926,1915 Encephalitis lethargica pandemic,Encephalitis lethargica,https://en.wikipedia.org/wiki/Encephalitis_lethargica,,,,,,"They would be conscious and aware – yet not fully awake; they would sit motionless and speechless all day in their chairs, totally lacking energy, impetus, initiative, motive, appetite, affect or desire; they registered what went on about them without active attention, and with profound indifference. They neither conveyed nor felt the feeling of life; they were as insubstantial as ghosts, and as passive as zombies","The pandemic disappeared in 1927 as abruptly and mysteriously as it first appeared.[21] The great encephalitis pandemic coincided with the 1918 influenza pandemic, and it is likely that the influenza virus potentiated the effects of the encephalitis virus or lowered resistance to it in a catastrophic way",,,,,
"7,000+",United States of America,1916,,Poliomyelitis,,,,,,,,,,,,,
17-100 million,Worldwide,1918–1920,Spanish flu (pandemic),Influenza A virus subtype H1N1 Spanish Flu Virus,https://en.wikipedia.org/wiki/Spanish_flu,Yes,No,Yes,0,1,"To maintain morale, World War I censors minimized early reports of illness and mortality in Germany, the United Kingdom, France, and the United States. Newspapers were free to report the epidemic's effects in neutral Spain, such as the grave illness of King Alfonso XIII, and these stories created a false impression of Spain as especially hard hit.","A large factor in the worldwide occurrence of this flu was increased travel. Modern transportation systems made it easier for soldiers, sailors, and civilian travelers to spread the disease.[41] Another was lies and denial by governments, leaving the population ill-prepared to handle the outbreaks",https://en.wikipedia.org/wiki/File:1918_spanish_flu_waves.gif,"In 1918, older adults may have had partial protection caused by exposure to the 1889–1890 flu pandemic, known as the ""Russian flu""",Another oddity was that the outbreak was widespread in the summer and autumn (in the Northern Hemisphere); influenza is usually worse in winter,"In New Zealand, 8,573 deaths were attributed to the 1918 pandemic influenza, resulting in a total population fatality rate of 0.7%.[116] Māori were 8 to 10 times as likely to die as other New Zealanders (Pakeha) because of their more crowded living conditions","Despite the high morbidity and mortality rates that resulted from the epidemic, the Spanish flu began to fade from public awareness over the decades until the arrival of news about bird flu and other pandemics in the 1990s and 2000s.[131] This has led some historians to label the Spanish flu a ""forgotten pandemic"""
2.5 million (estimated),Russia,1918–1922,,Typhus,ø,Unclear,No,Yes,0,1,,,,,,,
30,Los Angeles,1924,1924 Los Angeles pneumonic plague outbreak,Pneumonic plague,,,,,,,,,,,,,
43,"Croydon, United Kingdom",1937,Croydon epidemic of typhoid fever,Typhoid fever,,,,,,,,,,,,,
,Egypt,1942–1944,,Malaria,,,,,,,,,,,,,
,China,1946,,Bubonic plague,,,,,,,,,,,,,
,Egypt,1946,,Relapsing fever,,,,,,,,,,,,,
1845,United States of America,1946,,Poliomyelitis,,,,,,,,,,,,,
10277,Egypt,1947,,Cholera,,,,,,,,,,,,,
2720,United States of America,1949,,Poliomyelitis,,,,,,,,,,,,,
3145,United States of America,1952,,Poliomyelitis,,,,,,,,,,,,,
1-4 million,Worldwide,1957–1958,Asian flu,Influenza A virus subtype H2N2,https://en.wikipedia.org/wiki/1957%E2%80%931958_influenza_pandemic,Yes,Yes,Yes,1,1,Low mortality,,,,,,
,Worldwide,1961–1975,Seventh cholera pandemic,Cholera (El Tor strain),https://en.wikipedia.org/wiki/1961%E2%80%931975_cholera_pandemic,Unclear,Yes,Yes,1,1,,,,,,,
500 million,Worldwide,1877–1977,,Smallpox,https://en.wikipedia.org/wiki/Smallpox,Yes,Yes,Yes,1,1,"In 2017, Canadian scientists recreated an extinct horse pox virus to demonstrate that the smallpox virus can be recreated in a small lab at a cost of about $100,000, by a team of scientists without specialist knowledge",,,,,,
1-4 million,Worldwide,1968–1970,Hong Kong flu,Influenza A virus subtype H3N2,https://en.wikipedia.org/wiki/Hong_Kong_flu,Unclear,No,Yes,0,1,,,,,,,
5,Netherlands,1971,,Poliomyelitis,,,,,,,,,,,,,
35,Yugoslavia,1972,1972 outbreak of smallpox in Yugoslavia,Smallpox,,,,,,,,,,,,,
1027,United States,1972–1973,London flu,Influenza A virus subtype H3N2,,,,,,,,,,,,,
24,Italy,1973,,Cholera (El Tor strain),,,,,,,,,,,,,
15000,India,1974,1974 smallpox epidemic of India,Smallpox,https://en.wikipedia.org/wiki/1974_smallpox_epidemic_in_India,No,Fuck no,No,Fuck 0,0,,,,,,,
32 million+ (23.6–43.8 million),Worldwide,1981–present (data as of 2018),HIV/AIDS pandemic,HIV/AIDS,https://en.wikipedia.org/wiki/Epidemiology_of_HIV/AIDS,,,,,,,,,,,,
64,Western Sahara,1984,,Plague,,,,,,,,,,,,,
"8,410–9,432",Bangladesh,1991,,Cholera,,,,,,,,,,,,,
52,India,1994,1994 plague epidemic in Surat,Plague,,,,,,,,,,,,,
231,Worldwide,1996–2001,United Kingdom BSE outbreak,vCJD,,,,,,,,,,,,,
10000,West Africa,1996,,Meningitis,,,,,,,,,,,,,
105,Malaysia,1998–1999,1998–99 Malaysia Nipah virus outbreak,Nipah virus infection,,,,,,,,,,,,,
ca. 40+,Central America,2000,,Dengue fever,,,,,,,,,,,,,
400+,Nigeria,2001,,Cholera,,,,,,,,,,,,,
139,South Africa,2001,,Cholera,,,,,,,,,,,,,
774,Worldwide,2002–2004,2002–04 SARS outbreak,Severe acute respiratory syndrome (SARS),,,,,,,,,,,,,
1 (18 cases),Algeria,2003,,Plague,,,,,,,,,,,,,
"0 (3,958 cases)",Afghanistan,2004,,Leishmaniasis,,,,,,,,,,,,,
"17,000 cases; mortality typically 1%",Bangladesh,2004,,Cholera,,,,,,,,,,,,,
658,Indonesia,2004,,Dengue fever,,,,,,,,,,,,,
2,Senegal,2004,,Cholera,,,,,,,,,,,,,
7,Sudan,2004,,Ebola,,,,,,,,,,,,,
14,Mali,2005,,Yellow fever,,,,,,,,,,,,,
27,Singapore,2005,2005 dengue outbreak in Singapore,Dengue fever,,,,,,,,,,,,,
"1,200+","Luanda, Angola",2006,,Cholera,,,,,,,,,,,,,
61,"Ituri Province, Democratic Republic of the Congo",2006,,Plague,,,,,,,,,,,,,
17,India,2006,,Malaria,,,,,,,,,,,,,
50+,India,2006,2006 dengue outbreak in India,Dengue fever,,,,,,,,,,,,,
Unknown (cases very numerous and widespread),India,2006,Chikungunya outbreaks,Chikungunya virus,,,,,,,,,,,,,
50+,Pakistan,2006,2006 dengue outbreak in Pakistan,Dengue fever,,,,,,,,,,,,,
"ca. 1,000",Philippines,2006,,Dengue fever,,,,,,,,,,,,,
394,East Africa,2006,2006–07 East Africa Rift Valley fever outbreak,Rift Valley fever,,,,,,,,,,,,,
187,Democratic Republic of the Congo,2007,Mweka ebola epidemic,Ebola,,,,,,,,,,,,,
684,Ethiopia,2007,,Cholera,,,,,,,,,,,,,
49,India,2008,,Cholera,,,,,,,,,,,,,
10,Iraq,2007,2007 Iraq cholera outbreak,Cholera,,,,,,,,,,,,,
Unknown (69 cases),Nigeria,2007,,Poliomyelitis,,,,,,,,,,,,,
183,Puerto Rico; Dominican Republic; Mexico,2007,,Dengue fever,,,,,,,,,,,,,
"Perhaps 1.5% of 1,200 cases (18)/ 150 in another source",Somalia,2007,,Cholera,,,,,,,,,,,,,
37,Uganda,2007,,Ebola,,,,,,,,,,,,,
,Vietnam,2007,,Cholera,,,,,,,,,,,,,
,Brazil,2008,,Dengue fever,,,,,,,,,,,,,
,Cambodia,2008,,Dengue fever,,,,,,,,,,,,,
,Chad,2008,,Cholera,,,,,,,,,,,,,
,China,2008–2017,,"Hand, foot, and mouth disease",,,,,,,,,,,,,
18+,Madagascar,2008,,Bubonic plague,,,,,,,,,,,,,
172,Philippines,2008,,Dengue fever,,,,,,,,,,,,,
0,Vietnam,2008,,Cholera,,,,,,,,,,,,,
4293,Zimbabwe,2008–2009,2008–09 Zimbabwean cholera outbreak,Cholera,,,,,,,,,,,,,
18,Bolivia,2009,2009 Bolivian dengue fever epidemic,Dengue fever,,,,,,,,,,,,,
49,India,2009,2009 Gujarat hepatitis outbreak,Hepatitis B,,,,,,,,,,,,,
1+ (503 cases),"Queensland, Australia",2009,,Dengue fever,,,,,,,,,,,,,
,Worldwide,2009,Mumps outbreaks in the 2000s,Mumps,,,,,,,,,,,,,
1100,West Africa,2009–2010,2009–10 West African meningitis outbreak,Meningitis,,,,,,,,,,,,,
"151,700-575,400",Worldwide,2009–2010,"2009 flu pandemic (informally called ""swine flu"")",Pandemic H1N1/09 virus,Yes,No,No,New virus; no immunity,0,New virus; 0 immunity,,,,,,,
"10,075 (May 2017)",Hispaniola,2010–present,Haiti cholera outbreak,"Cholera (strain serogroup O1, serotype Ogawa)",,,,,,,,,,,,,
"4,500+",Democratic Republic of the Congo,2010–2014,,Measles,,,,,,,,,,,,,
170,Vietnam,2011–present,,"Hand, foot and mouth disease",,,,,,,,,,,,,
350+,Pakistan,2011,2011 dengue outbreak in Pakistan,Dengue fever,,,,,,,,,,,,,
171,Darfur Sudan,2012,"2012 yellow fever outbreak in Darfur, Sudan",Yellow fever,,,,,,,,,,,,,
862 (as of 13 January 2020),Worldwide,2012–present,2012 Middle East respiratory syndrome coronavirus outbreak,Middle East respiratory syndrome (MERS),,,,,,,,,,,,,
142,Vietnam,2013–2014,,Measles,,,,,,,,,,,,,
"11,300+","Worldwide, primarily concentrated in Guinea, Liberia, Sierra Leone",2013–2016,Ebola virus epidemic in West Africa,Ebola virus disease Ebola virus virion,Yes,No,Yes,New virus; no immunity,1,New virus; 0 immunity,,,,,,,
183,Americas,2013–2015,2013–14 chikungunya outbreak,Chikungunya,,,,,,,,,,,,,
292,Madagascar,2014–2017,2014 Madagascar plague outbreak,Bubonic plague,,,,,,,,,,,,,
36,India,2014–2015,2014 Odisha jaundice outbreak,"Primarily Hepatitis E, but also Hepatitis A",,,,,,,,,,,,,
2035,India,2015,2015 Indian swine flu outbreak,Influenza A virus subtype H1N1,,,,,,,,,,,,,
~53,Worldwide,2015–2016,2015–16 Zika virus epidemic,Zika virus,,,,,,,,,,,,,
100s (as of 1 April 2016),"Angola, DR Congo, China, Kenya",2016,2016 yellow fever outbreak in Angola,Yellow fever,,,,,,,,,,,,,
"3,886 (as of 30 November 2019)",Yemen,2016–present,2016–20 Yemen cholera outbreak,Cholera,,,,,,,,,,,,,
1317,India,2017,2017 Gorakhpur Japanese encephalitis outbreak,Japanese encephalitis,,,,,,,,,,,,,
"60,000–80,000+",United States,2017–2018,2017–18 United States flu season,Seasonal influenza,Unclear,Yes,Yes,,1,,,,,,,,
17,India,2018,2018 Nipah virus outbreak in Kerala,Nipah virus infection,,,,,,,,,,,,,
"2,271 (as of 26 April 2020)",Democratic Republic of the Congo & Uganda,2018–present,2018–20 Kivu Ebola epidemic,Ebola virus disease,,,,,,,,,,,,,
"6,400+ (as of April 2020)",Democratic Republic of the Congo,2019–present,2019 measles outbreak in the Democratic Republic of the Congo,Measles,,,,,,,,,,,,,
83,Samoa,2019–present,2019 Samoa measles outbreak,Measles,,,,,,,,,,,,,
"3,700+","Asia-Pacific, Latin America",2019–present,2019–20 dengue fever epidemic,Dengue fever,,,,,,,,,,,,,
"258,354 (As of 6 May 2020)",Worldwide,2019–present,COVID-19 pandemic,COVID-19 / SARS-CoV-2,,,,,,,,,,,,,
1 Death toll (estimate) Location Date Event Disease Link Covid reference class Did it become endemic? Was there a second, third, or fourth wave?/did it last more than one year? Did it become endemic? (numeric) Was there a second, third, or fourth wave?/did it last more than one year? (numerical) Notes
2 Unknown Babylon, or Babirus of the Persians, Central Asia, Mesopotamia and Southern Asia 1200 BC influenza epidemic Indian Sanskrit scholars found records of a disease resembling the Flu. ø ø ø ø
3 75,000–100,000 Greece, Libya, Egypt, Ethiopia 429–426 BC Plague of Athens Unknown, possibly typhus, typhoid fever or viral hemorrhagic fever https-//en.wikipedia.org/wiki/Plague_of_Athens Unclear 0 Yes 0 1 In overcrowded Athens, the disease killed an estimated 25% of the population. The plague returned twice more, in 429 BC and in the winter of 427/426 BC.
4 Unknown Greece (Northern Greece, Roman Republic) 412 BC 412 BC epidemic Unknown, possibly influenza https-//en.wikipedia.org/wiki/412_BC_epidemic Unclear 0 Yes 0 1
5 5–10 million Roman Empire 165–180 (possibly up to 190) Antonine Plague Unknown, possibly smallpox https-//en.wikipedia.org/wiki/Antonine_Plague Unclear 0 Yes 0 1
6 1 million+ (Unknown, but at least) Europe 250–266 Plague of Cyprian Unknown, possibly smallpox https-//en.wikipedia.org/wiki/Plague_of_Cyprian Unclear 0 Yes 0 1
7 25–100 million; 40–50% of population of Europe Europe and West Asia 541–542 Plague of Justinian Plague https-//en.wikipedia.org/wiki/Plague_of_Justinian Unclear 0 Yes 0 1
8 British Isles 664–689 Plague of 664 Plague https-//en.wikipedia.org/wiki/Plague_of_664 Unclear 0 Yes 0 1
9 Byzantine Empire, West Asia, Syria, Mesopotamia 698–701 Plague of 698–701 Plague
10 2 million (Approx. ​1⁄3 of entire Japanese population) Japan 735–737 735–737 Japanese smallpox epidemic Smallpox https-//en.wikipedia.org/wiki/735%E2%80%93737_Japanese_smallpox_epidemic Unclear Yes Yes 1 1
11 Byzantine Empire, West Asia, Africa 746–747 Plague of 746–747 Plague
12 75–200 million (10–60% of European population) Europe, Asia and North Africa 1346–1353 Black Death Plague Y. pestis https-//en.wikipedia.org/wiki/Black_Death Yes Yes Yes 1 1 The physician to the Avignon Papacy, Raimundo Chalmel de Vinario (Latin- Magister Raimundus, lit. 'Master Raymond'), observed the decreasing mortality rate of successive outbreaks of plague in 1347-8, 1362, 1371, and 1382 in his 1382 treatise On Epidemics (De epidemica).[99] In the first outbreak, two thirds of the population contracted the illness and most patients died; in the next, half the population became ill but only some died; by the third, a tenth were affected and many survived; while by the fourth occurrence, only one in twenty people were sickened and most of them survived
13 10,000+ Britain (England) and later continental Europe 1485–1551 Sweating sickness (multiple outbreaks) Unknown, possibly an unknown species of hantavirus https-//en.wikipedia.org/wiki/Sweating_sickness Unclear No Yes 0 1
14 Unknown Around 1% of those infected Asia, North Africa, Europe 1510 1510 Influenza pandemic Influenza https-//en.wikipedia.org/wiki/1510_influenza_pandemic Yes No No 0 0 a mortality rate of around 1%. Fernel and Paré suggest that the 1510 influenza "spread to almost all countries of the world" with the exception of the New World
15 5–8 million (40% of population) Mexico 1520 1520 Smallpox Epidemic Smallpox https-//en.wikipedia.org/wiki/History_of_smallpox_in_Mexico Unclear Yes Yes 1 1
16 5–15 million (80% of population) Mexico 1545–1548 Cocoliztli Epidemic of 1545–1548 Possibly Salmonella enterica https-//en.wikipedia.org/wiki/Cocoliztli_epidemics Unclear No Yes 0 1
17 20,100+ in London London 1563–1564 1563 London plague Plague https-//en.wikipedia.org/wiki/1563_London_plague Unclear No No 0 0 Radical measures taken
18 2–2.5 million (50% of population) Mexico 1576–1580 Cocoliztli epidemic of 1576 Possibly Salmonella enterica https-//en.wikipedia.org/wiki/Cocoliztli_epidemics Unclear No Yes 0 1
19 Seneca nation 1592–1596 Measles
20 3000 Malta 1592–1593 1592–93 Malta plague epidemic Plague https-//en.wikipedia.org/wiki/1592%E2%80%931593_Malta_plague_epidemic Unclear No Yes 0 1 These measures [were] enforced with harsh penalties including flogging and death
21 19,900+ in London and outer parishes London 1592–1593 1592–93 London plague Plague https-//en.wikipedia.org/wiki/1592%E2%80%931593_London_plague Unclear No Yes 0 1
22 600,000 to 700,000 Spain 1596–1602 Plague https-//libro.uca.edu/payne1/payne15.htm Unclear No Yes 0 1
23 South America 1600–1650 Malaria ø Unclear Unclear Yes 0.5 1
24 England 1603 Plague ø Unclear No No 0 0
25 1 million (Britannica) Egypt 1609 Plague ø Unclear No No 0 0
26 Unknown- estimated 30–90% of population Southern New England, especially the Wampanoag people 1616–1620 1616 New England epidemic Unknown cause. Latest research suggests epidemic(s) of leptospirosis with Weil syndrome. Classic explanations include yellow fever, bubonic plague, influenza, smallpox, chickenpox, typhus, and syndemic infection of hepatitis B and hepatitis D. ø Unclear No Yes 0 1
27 280000 Italy 1629–1631 Italian plague of 1629–1631 Plague https-//en.wikipedia.org/wiki/1629%E2%80%931631_Italian_plague Unclear No Yes 0 1 A major outbreak in March 1630 was due to relaxed health measures during the carnival season
28 15,000–25,000 Wyandot people 1634 Smallpox
29 Thirteen Colonies 1633 Massachusetts smallpox epidemic Smallpox https-//en.wikipedia.org/wiki/Massachusetts_smallpox_epidemic Unclear Yes Yes 1 1
30 England 1636 Plague
31 China 1641–1644 Plague
32 600,000 to 700,000 Spain 1647–1652 Great Plague of Seville Plague https-//en.wikipedia.org/wiki/Great_Plague_of_Seville Unclear No Yes 0 1 In Seville, quarantine measures were evaded, ignored, unproposed and/or unenforced[citation needed]. The results were devastating
33 Central America 1648 Yellow fever
34 1250000 Italy 1656 Naples Plague Plague https-//en.wikipedia.org/wiki/Naples_Plague Unclear No No 0 0
35 Thirteen Colonies 1657 Measles
36 24148 Netherlands 1663–1664 Plague
37 100000 England 1665–1666 Great Plague of London Plague https-//en.wikipedia.org/wiki/Great_Plague_of_London Unclear No 0 Two suspicious deaths were recorded in St. Giles parish in 1664 and another in February 1665. These did not appear as plague deaths on the Bills of Mortality, so no control measures were taken by the authorities, but the total number of people dying in London during the first four months of 1665 showed a marked increase. By the end of April, only four plague deaths had been recorded, two in the parish of St. Giles, but total deaths per week had risen from around 290 to 398 Tobacco was thought to be a prophylactic and it was later said that no London tobacconist had died from the plague during the epidemic
38 40000 France 1668 Plague
39 11300 Malta 1675–1676 1675–76 Malta plague epidemic Plague https-//en.wikipedia.org/wiki/1675%E2%80%931676_Malta_plague_epidemic Unclear No Yes 0 1 Some people disputed the cause of the disease, and the doctor Giuseppe del Cosso insisted that it was not plague but a malignant pricking disease.[6] Many went about their daily lives as usual, and this is believed to be a factor which resulted in such a high death toll.[2] It was only after various European physicians gave their opinions that it was plague that strict containment measures were enforced, but by then it was too late
40 Spain 1676–1685 Plague
41 76000 Austria 1679 Great Plague of Vienna Plague https-//en.wikipedia.org/wiki/Great_Plague_of_Vienna Unclear No No 0 0
42 South Africa 1687 Unknown, possibly Influenza
43 Thirteen Colonies 1687 Measles
44 Thirteen Colonies 1690 Yellow fever
45 Canada, New France 1702–1703 Smallpox
46 18,000+ (36% of population) Iceland 1707–1709 Great Smallpox Epidemic Smallpox
47 164000 Denmark, Sweden, Lithuania 1710–1712 Great Northern War plague outbreak Plague https-//en.wikipedia.org/wiki/Great_Northern_War_plague_outbreak Unclear No Yes 0 1 While at first the city authorities downplayed the plague, which had reached a peak in early October and then declined, this approach was abandoned when the death toll again started to rise significantly in November In November 1709, when the Prussian king Frederick I returned to Berlin from a meeting with Russian tsar Peter the Great, the king had a strange encounter with his mentally deranged wife Sophia Louise, who in a white dress and with bloody hands pointed at him saying that the plague would devour the king of Babylon.[48] As there was a legend of a White Lady foretelling the deaths of the Hohenzollern, Frederick took his wife's outburst seriously[49] and ordered that precautions be taken for his residence city.[50] Among other measures, he ordered the construction of a pest house outside the city walls, the Berlin Charité.[50] In June 1710, most probably via a ship from Pernau, the plague arrived in Stockholm, where the health commission (Collegium Medicum) until 29 August denied that it was indeed the plague, despite buboes being visible on the bodies of victims from the ship and in the town While Scania was protected from an infection from the north by a cordon sanitaire between it and Småland, the plague came by sea[94] and made landfall not only in Västanå, but also in January 1711 in Domsten in Allerum parish, where the locals had ignored the ban on contact with their relatives and friends on the Danish side of the Sound, most notably in the infected area around Helsingør (Elsinore); the third starting point for the plague in Scania was Ystad, where on 19 June an infected soldier arrived from Swedish Pomerania.[93] The plague remained in Scania until 1713, probably 1714
48 Thirteen Colonies 1713–1715 Measles
49 Canada, New France 1714–1715 Measles
50 100,000+ France 1720–1722 Great Plague of Marseille Plague https-//en.wikipedia.org/wiki/Great_Plague_of_Marseille Unclear No Yes 0 1
51 844 Massachusetts Bay Colony 1721–1722 1721 Boston smallpox outbreak Smallpox https-//en.wikipedia.org/wiki/1721_Boston_smallpox_outbreak Unclear No No 0 0 Early experiments with variolation
52 Thirteen Colonies 1729 Measles
53 Spain 1730 Yellow fever
54 Thirteen Colonies 1732–1733 Influenza
55 Canada, New France 1733 Smallpox
56 50000 Balkans 1738 Great Plague of 1738 Plague https-//en.wikipedia.org/wiki/Great_Plague_of_1738 Unclear No Yes 0 1
57 Thirteen Colonies 1738 Smallpox
58 Thirteen Colonies 1739–1740 Measles
59 Italy 1743 Plague
60 Thirteen Colonies 1747 Measles
61 North America 1755–1756 Smallpox
62 North America 1759 Measles
63 North America, West Indies 1761 Influenza
64 North America, present-day Pittsburgh area. 1763 Smallpox
65 50000 Russia 1770–1772 Russian plague of 1770–1772 Plague https-//en.wikipedia.org/wiki/1770%E2%80%931772_Russian_plague Unclear No Yes 0 1 Commanding general Christopher von Stoffeln coerced army doctors to conceal the outbreak, which was not made public until Gustav Orreus, a Russian-Finnish surgeon reporting directly to Field Marshal Pyotr Rumyantsev, examined the situation, identified it as plague and enforced quarantine in the troops. Shtoffeln, however, refused to evacuate the infested towns and himself fell victim to the plague in May 1770. Of 1,500 patients recorded in his troops in May–August 1770, only 300 survived Politicking during the outbreak was followed by failure of containment.
66 Pacific Northwest natives 1770s Smallpox
67 North America 1772 Measles
68 2 million+ Persia 1772 Persian Plague Plague https-//en.wikipedia.org/wiki/1772%E2%80%931773_Persian_Plague Unclear No No 0 0
69 England 1775–1776 Influenza
70 Spain 1778 Dengue fever
71 Plains Indians 1780–1782 North American smallpox epidemic Smallpox https-//en.wikipedia.org/wiki/1775%E2%80%931782_North_American_smallpox_epidemic Unclear No Yes 0 1 Examples of inoculation
72 Pueblo Indians 1788 Smallpox
73 United States 1788 Measles
74 New South Wales, Australia 1789–1790 Smallpox
75 United States 1793 Influenza and epidemic typhus
76 5,000+ United States 1793–1798 Yellow Fever Epidemic of 1793, resurgences Yellow fever https-//en.wikipedia.org/wiki/1793_Philadelphia_yellow_fever_epidemic Unclear No No 0 0
77 Spain 1800–1803 Yellow fever
78 Ottoman Empire, Egypt 1801 Bubonic plague
79 United States 1803 Yellow fever
80 Egypt 1812 Plague
81 300,000+ Ottoman Empire 1812–1819 1812–19 Ottoman plague epidemic Plague https-//en.wikipedia.org/wiki/1812%E2%80%931819_Ottoman_plague_epidemic Unclear No Yes 0 1
82 4500 Malta 1813–1814 1813–14 Malta plague epidemic Plague https-//en.wikipedia.org/wiki/1813%E2%80%931814_Malta_plague_epidemic Yes Low mortality Disease spread by smugglers
83 60000 Romania 1813 Caragea's plague Plague https-//en.wikipedia.org/wiki/Caragea%27s_plague Unclear No No 0 0
84 Ireland 1816–1819 Typhus
85 100,000+ Asia, Europe 1816–1826 First cholera pandemic Cholera https-//en.wikipedia.org/wiki/1817%E2%80%931824_cholera_pandemic Yes No Yes 0 1 First cholera pandemic Unclear whether there were many waves, or whether it spread from a center. Historians believe that the first pandemic had lingered in Indonesia and the Philippines in 1830. Cholera was endemic to the lower Ganges River.[1] At festival times, pilgrims frequently contracted the disease there and carried it back to other parts of India on their returns, where it would spread, then subside. The first cholera pandemic started similarly, as an outbreak that was suspected to have begun in 1817 in the town of Jessore.[3] Some epidemiologists and medical historians have suggested that it spread globally through a Hindu pilgrimage, the Kumbh Mela, on the upper Ganges River
86 United States 1820–1823 Yellow fever
87 Spain 1821 Yellow fever
88 New South Wales, Australia 1828 Smallpox
89 2800 Netherlands 1829 Groningen epidemic Malaria https-//en.wikipedia.org/wiki/Groningen_epidemic Unclear No No 0 0
90 South Australia 1829 Smallpox
91 Iran 1829–1835 Bubonic plague
92 100,000+ Asia, Europe, North America 1829–1851 Second cholera pandemic Cholera https-//en.wikipedia.org/wiki/1826%E2%80%931837_cholera_pandemic Yes No Yes 0 1 First cholera pandemic for Europeans Like the earlier pandemics, cholera spread from the Ganges Delta of India
93 Egypt 1831 Cholera
94 Plains Indians 1831–1834 Smallpox
95 England, France 1832 Cholera
96 North America 1832 Cholera
97 United States 1833 Cholera
98 United States 1834 Cholera
99 Egypt 1834–1836 Bubonic plague
100 United States 1837 Typhus
101 17,000+ Great Plains 1837–1838 1837–38 smallpox epidemic Smallpox https-//en.wikipedia.org/wiki/1837_Great_Plains_smallpox_epidemic Yes No Yes 0 1 the small-pox had never been known in the civilized world, as it had been among the poor Mandans and other Indians. Only twenty-seven Mandans were left to tell the tale Smallpox may have been intentionally spread among the indigenous people of the Americas by colonizers. Culture War.
102 Dalmatia 1840 Plague
103 South Africa 1840 Smallpox
104 United States 1841 Yellow fever
105 20,000+ Canada 1847–1848 Typhus epidemic of 1847 Epidemic typhus https-//en.wikipedia.org/wiki/1847_North_American_typhus_epidemic No No No 0 0
106 United States 1847 Yellow fever
107 Worldwide 1847–1848 Influenza
108 Egypt 1848 Cholera
109 North America 1848–1849 Cholera
110 United States 1850 Yellow fever
111 North America 1850–1851 Influenza
112 United States 1851 Cholera
113 United States 1852 Yellow fever
114 1 million+ Russia 1846–1860 Third cholera pandemic Cholera https-//en.wikipedia.org/wiki/1846%E2%80%931860_cholera_pandemic Unclear No Yes 0 1
115 Ottoman Empire 1853 Plague
116 4737 Copenhagen, Denmark 1853 Cholera epidemic of Copenhagen 1853 Cholera https-//en.wikipedia.org/wiki/1853_Copenhagen_cholera_outbreak Unclear No No 0 0 Changes made to Copenhagen afterwards
117 616 England 1854 Broad Street cholera outbreak Cholera
118 United States 1855 Yellow fever
119 12 million+ in India and China alone Worldwide 1855–1860 Third plague pandemic Bubonic plague https-//en.wikipedia.org/wiki/Third_plague_pandemic Unclear Yes Yes 1 1 According to the World Health Organization, the pandemic was considered active until 1960, when worldwide casualties dropped to 200 per year A natural reservoir or nidus for plague is in western Yunnan and is still an ongoing health risk The British colonial government in India pressed medical researcher Waldemar Haffkine to develop a plague vaccine. After three months of persistent work with a limited staff, a form for human trials was ready. On January 10, 1897 Haffkine tested it on himself. After the initial test was reported to the authorities, volunteers at the Byculla jail were used in a control test, all inoculated prisoners survived the epidemics, while seven inmates of the control group died. By the turn of the century, the number of inoculees in India alone reached four million. Haffkine was appointed the Director of the Plague Laboratory (now called Haffkine Institute) in Bombay
120 Portugal 1857 Yellow fever
121 Victoria, Australia 1857 Smallpox
122 Europe, North America, South America 1857–1859 Influenza
123 3,000+ Central Coast, British Columbia 1862–1863 Smallpox
124 600000 Middle East 1863–1875 Fourth cholera pandemic Cholera https-//en.wikipedia.org/wiki/1863%E2%80%931875_cholera_pandemic Unclear No Yes 0 1
125 Egypt 1865 Cholera
126 Russia, Germany 1866–1867 Cholera
127 Australia 1867 Measles
128 Iraq 1867 Plague
129 Argentina 1852–1871 Yellow fever
130 Germany 1870–1871 Smallpox
131 40000 Fiji 1875 1875 Fiji Measles outbreak Measles ø Unclear No No 0 0
132 Russian Empire 1877 Plague
133 Egypt 1881 Cholera
134 9,000+ India, Germany 1881–1896 Fifth cholera pandemic Cholera https-//en.wikipedia.org/wiki/1881%E2%80%931896_cholera_pandemic Unclear No Yes 0 1 Although many residents held the city government responsible for the virulence of the epidemic (leading to cholera riots in 1893[3]), it continued with practices largely unchanged American author Mark Twain, an avid traveler, visited Hamburg during the cholera outbreak, and he described his experience in a short, uncollected piece dated "1891–1892". Therein, he notes alarmingly the lack of information in Hamburg newspapers about the cholera event, particularly death totals
135 3164 Montreal 1885 Smallpox
136 1 million Worldwide 1889–1890 1889–1890 flu pandemic Influenza https-//en.wikipedia.org/wiki/1889%E2%80%931890_flu_pandemic Yes No Yes 0 1
137 West Africa 1900 Yellow fever
138 Congo Basin 1896–1906 Trypanosomiasis
139 800,000+ Europe, Asia, Africa 1899–1923 Sixth cholera pandemic Cholera https-//en.wikipedia.org/wiki/1899%E2%80%931923_cholera_pandemic Unclear No Yes 0 1
140 113 San Francisco 1900–1904 Bubonic plague
141 Uganda 1900–1920 Trypanosomiasis
142 Egypt 1902 Cholera
143 22 India 1903 Bubonic Plague
144 4 Fremantle 1903 Bubonic plague
145 60000 China 1910–1911 Manchurian plague Pneumonic plague https-//en.wikipedia.org/wiki/Manchurian_plague Unclear No No 0 0 The Chinese government also sought the support of foreign doctors, a number of whom died as a consequence of the disease.[5] In Harbin, this included the Frenchman Gérald Mesny, from the Imperial Medical College in Tientsin, who disputed Wu's recommendation of masks; a few days later, he died after catching the plague when visiting patients without wearing a mask
146 40000 China 1910–1912 1910 China plague Bubonic plague
147 1.5 million Worldwide 1915–1926 1915 Encephalitis lethargica pandemic Encephalitis lethargica https-//en.wikipedia.org/wiki/Encephalitis_lethargica They would be conscious and aware – yet not fully awake; they would sit motionless and speechless all day in their chairs, totally lacking energy, impetus, initiative, motive, appetite, affect or desire; they registered what went on about them without active attention, and with profound indifference. They neither conveyed nor felt the feeling of life; they were as insubstantial as ghosts, and as passive as zombies The pandemic disappeared in 1927 as abruptly and mysteriously as it first appeared.[21] The great encephalitis pandemic coincided with the 1918 influenza pandemic, and it is likely that the influenza virus potentiated the effects of the encephalitis virus or lowered resistance to it in a catastrophic way
148 7,000+ United States of America 1916 Poliomyelitis
149 17-100 million Worldwide 1918–1920 Spanish flu (pandemic) Influenza A virus subtype H1N1 Spanish Flu Virus https-//en.wikipedia.org/wiki/Spanish_flu Yes No Yes 0 1 To maintain morale, World War I censors minimized early reports of illness and mortality in Germany, the United Kingdom, France, and the United States. Newspapers were free to report the epidemic's effects in neutral Spain, such as the grave illness of King Alfonso XIII, and these stories created a false impression of Spain as especially hard hit. A large factor in the worldwide occurrence of this flu was increased travel. Modern transportation systems made it easier for soldiers, sailors, and civilian travelers to spread the disease.[41] Another was lies and denial by governments, leaving the population ill-prepared to handle the outbreaks https-//en.wikipedia.org/wiki/File-1918_spanish_flu_waves.gif In 1918, older adults may have had partial protection caused by exposure to the 1889–1890 flu pandemic, known as the "Russian flu" Another oddity was that the outbreak was widespread in the summer and autumn (in the Northern Hemisphere); influenza is usually worse in winter In New Zealand, 8,573 deaths were attributed to the 1918 pandemic influenza, resulting in a total population fatality rate of 0.7%.[116] Māori were 8 to 10 times as likely to die as other New Zealanders (Pakeha) because of their more crowded living conditions Despite the high morbidity and mortality rates that resulted from the epidemic, the Spanish flu began to fade from public awareness over the decades until the arrival of news about bird flu and other pandemics in the 1990s and 2000s.[131] This has led some historians to label the Spanish flu a "forgotten pandemic"
150 2.5 million (estimated) Russia 1918–1922 Typhus ø Unclear No Yes 0 1
151 30 Los Angeles 1924 1924 Los Angeles pneumonic plague outbreak Pneumonic plague
152 43 Croydon, United Kingdom 1937 Croydon epidemic of typhoid fever Typhoid fever
153 Egypt 1942–1944 Malaria
154 China 1946 Bubonic plague
155 Egypt 1946 Relapsing fever
156 1845 United States of America 1946 Poliomyelitis
157 10277 Egypt 1947 Cholera
158 2720 United States of America 1949 Poliomyelitis
159 3145 United States of America 1952 Poliomyelitis
160 1-4 million Worldwide 1957–1958 Asian flu Influenza A virus subtype H2N2 https-//en.wikipedia.org/wiki/1957%E2%80%931958_influenza_pandemic Yes Yes Yes 1 1 Low mortality
161 Worldwide 1961–1975 Seventh cholera pandemic Cholera (El Tor strain) https-//en.wikipedia.org/wiki/1961%E2%80%931975_cholera_pandemic Unclear Yes Yes 1 1
162 500 million Worldwide 1877–1977 Smallpox https-//en.wikipedia.org/wiki/Smallpox Yes Yes Yes 1 1 In 2017, Canadian scientists recreated an extinct horse pox virus to demonstrate that the smallpox virus can be recreated in a small lab at a cost of about $100,000, by a team of scientists without specialist knowledge
163 1-4 million Worldwide 1968–1970 Hong Kong flu Influenza A virus subtype H3N2 https-//en.wikipedia.org/wiki/Hong_Kong_flu Unclear No Yes 0 1
164 5 Netherlands 1971 Poliomyelitis
165 35 Yugoslavia 1972 1972 outbreak of smallpox in Yugoslavia Smallpox
166 1027 United States 1972–1973 London flu Influenza A virus subtype H3N2
167 24 Italy 1973 Cholera (El Tor strain)
168 15000 India 1974 1974 smallpox epidemic of India Smallpox https-//en.wikipedia.org/wiki/1974_smallpox_epidemic_in_India No Fuck no No Fuck 0 0
32 million+ (23.6–43.8 million),Worldwide,1981–present (data as of 2018),HIV/AIDS pandemic,HIV/AIDS,https-//en.wikipedia.org/wiki/Epidemiology_of_HIV/AIDS,,,,,,,,,,,,
64,Western Sahara,1984,,Plague,,,,,,,,,,,,,
"8,410–9,432",Bangladesh,1991,,Cholera,,,,,,,,,,,,,
52,India,1994,1994 plague epidemic in Surat,Plague,,,,,,,,,,,,,
231,Worldwide,1996–2001,United Kingdom BSE outbreak,vCJD,,,,,,,,,,,,,
10000,West Africa,1996,,Meningitis,,,,,,,,,,,,,
105,Malaysia,1998–1999,1998–99 Malaysia Nipah virus outbreak,Nipah virus infection,,,,,,,,,,,,,
ca. 40+,Central America,2000,,Dengue fever,,,,,,,,,,,,,
400+,Nigeria,2001,,Cholera,,,,,,,,,,,,,
139,South Africa,2001,,Cholera,,,,,,,,,,,,,
774,Worldwide,2002–2004,2002–04 SARS outbreak,Severe acute respiratory syndrome (SARS),,,,,,,,,,,,,
1 (18 cases),Algeria,2003,,Plague,,,,,,,,,,,,,
"0 (3,958 cases)",Afghanistan,2004,,Leishmaniasis,,,,,,,,,,,,,
"17,000 cases; mortality typically 1%",Bangladesh,2004,,Cholera,,,,,,,,,,,,,
658,Indonesia,2004,,Dengue fever,,,,,,,,,,,,,
2,Senegal,2004,,Cholera,,,,,,,,,,,,,
7,Sudan,2004,,Ebola,,,,,,,,,,,,,
14,Mali,2005,,Yellow fever,,,,,,,,,,,,,
27,Singapore,2005,2005 dengue outbreak in Singapore,Dengue fever,,,,,,,,,,,,,
"1,200+","Luanda, Angola",2006,,Cholera,,,,,,,,,,,,,
61,"Ituri Province, Democratic Republic of the Congo",2006,,Plague,,,,,,,,,,,,,
17,India,2006,,Malaria,,,,,,,,,,,,,
50+,India,2006,2006 dengue outbreak in India,Dengue fever,,,,,,,,,,,,,
Unknown (cases very numerous and widespread),India,2006,Chikungunya outbreaks,Chikungunya virus,,,,,,,,,,,,,
50+,Pakistan,2006,2006 dengue outbreak in Pakistan,Dengue fever,,,,,,,,,,,,,
"ca. 1,000",Philippines,2006,,Dengue fever,,,,,,,,,,,,,
394,East Africa,2006,2006–07 East Africa Rift Valley fever outbreak,Rift Valley fever,,,,,,,,,,,,,
187,Democratic Republic of the Congo,2007,Mweka ebola epidemic,Ebola,,,,,,,,,,,,,
684,Ethiopia,2007,,Cholera,,,,,,,,,,,,,
49,India,2008,,Cholera,,,,,,,,,,,,,
10,Iraq,2007,2007 Iraq cholera outbreak,Cholera,,,,,,,,,,,,,
Unknown (69 cases),Nigeria,2007,,Poliomyelitis,,,,,,,,,,,,,
183,Puerto Rico; Dominican Republic; Mexico,2007,,Dengue fever,,,,,,,,,,,,,
"Perhaps 1.5% of 1,200 cases (18) / 150 in another source",Somalia,2007,,Cholera,,,,,,,,,,,,,
37,Uganda,2007,,Ebola,,,,,,,,,,,,,
,Vietnam,2007,,Cholera,,,,,,,,,,,,,
,Brazil,2008,,Dengue fever,,,,,,,,,,,,,
,Cambodia,2008,,Dengue fever,,,,,,,,,,,,,
,Chad,2008,,Cholera,,,,,,,,,,,,,
,China,2008–2017,,"Hand, foot, and mouth disease",,,,,,,,,,,,,
18+,Madagascar,2008,,Bubonic plague,,,,,,,,,,,,,
172,Philippines,2008,,Dengue fever,,,,,,,,,,,,,
0,Vietnam,2008,,Cholera,,,,,,,,,,,,,
4293,Zimbabwe,2008–2009,2008–09 Zimbabwean cholera outbreak,Cholera,,,,,,,,,,,,,
18,Bolivia,2009,2009 Bolivian dengue fever epidemic,Dengue fever,,,,,,,,,,,,,
49,India,2009,2009 Gujarat hepatitis outbreak,Hepatitis B,,,,,,,,,,,,,
1+ (503 cases),"Queensland, Australia",2009,,Dengue fever,,,,,,,,,,,,,
,Worldwide,2009,Mumps outbreaks in the 2000s,Mumps,,,,,,,,,,,,,
1100,West Africa,2009–2010,2009–10 West African meningitis outbreak,Meningitis,,,,,,,,,,,,,
"151,700–575,400",Worldwide,2009–2010,"2009 flu pandemic (informally called ""swine flu"")",Pandemic H1N1/09 virus,,Yes,No,No,0,0,New virus; no immunity,,,,,,
"10,075 (May 2017)",Hispaniola,2010–present,Haiti cholera outbreak,"Cholera (strain serogroup O1, serotype Ogawa)",,,,,,,,,,,,,
"4,500+",Democratic Republic of the Congo,2010–2014,,Measles,,,,,,,,,,,,,
170,Vietnam,2011–present,,"Hand, foot and mouth disease",,,,,,,,,,,,,
350+,Pakistan,2011,2011 dengue outbreak in Pakistan,Dengue fever,,,,,,,,,,,,,
171,"Darfur, Sudan",2012,"2012 yellow fever outbreak in Darfur, Sudan",Yellow fever,,,,,,,,,,,,,
862 (as of 13 January 2020),Worldwide,2012–present,2012 Middle East respiratory syndrome coronavirus outbreak,Middle East respiratory syndrome (MERS),,,,,,,,,,,,,
142,Vietnam,2013–2014,,Measles,,,,,,,,,,,,,
"11,300+","Worldwide, primarily concentrated in Guinea, Liberia, Sierra Leone",2013–2016,Ebola virus epidemic in West Africa,Ebola virus disease,,Yes,No,Yes,0,1,New virus; no immunity,,,,,,
183,Americas,2013–2015,2013–14 chikungunya outbreak,Chikungunya,,,,,,,,,,,,,
292,Madagascar,2014–2017,2014 Madagascar plague outbreak,Bubonic plague,,,,,,,,,,,,,
36,India,2014–2015,2014 Odisha jaundice outbreak,"Primarily Hepatitis E, but also Hepatitis A",,,,,,,,,,,,,
2035,India,2015,2015 Indian swine flu outbreak,Influenza A virus subtype H1N1,,,,,,,,,,,,,
~53,Worldwide,2015–2016,2015–16 Zika virus epidemic,Zika virus,,,,,,,,,,,,,
100s (as of 1 April 2016),"Angola, DR Congo, China, Kenya",2016,2016 yellow fever outbreak in Angola,Yellow fever,,,,,,,,,,,,,
"3,886 (as of 30 November 2019)",Yemen,2016–present,2016–20 Yemen cholera outbreak,Cholera,,,,,,,,,,,,,
1317,India,2017,2017 Gorakhpur Japanese encephalitis outbreak,Japanese encephalitis,,,,,,,,,,,,,
"60,000–80,000+",United States,2017–2018,2017–18 United States flu season,Seasonal influenza,,Unclear,Yes,Yes,1,,,,,,,,
17,India,2018,2018 Nipah virus outbreak in Kerala,Nipah virus infection,,,,,,,,,,,,,
"2,271 (as of 26 April 2020)",Democratic Republic of the Congo & Uganda,2018–present,2018–20 Kivu Ebola epidemic,Ebola virus disease,,,,,,,,,,,,,
"6,400+ (as of April 2020)",Democratic Republic of the Congo,2019–present,2019 measles outbreak in the Democratic Republic of the Congo,Measles,,,,,,,,,,,,,
83,Samoa,2019–present,2019 Samoa measles outbreak,Measles,,,,,,,,,,,,,
"3,700+","Asia-Pacific, Latin America",2019–present,2019–20 dengue fever epidemic,Dengue fever,,,,,,,,,,,,,
"258,354 (As of 6 May 2020)",Worldwide,2019–present,COVID-19 pandemic,COVID-19 / SARS-CoV-2,,,,,,,,,,,,,

@ -0,0 +1,22 @@
if (document.domain === "twitter.com") {
  const styles = `
/* hide promoted tweets */
:has(meta[property="og:site_name"][content="Twitter"])
[data-testid="cellInnerDiv"]:has(svg + [dir="auto"]) {
display: none;
}
[data-testid^="placementTracking"] {
display: none;
}
/* hide what's happening section */
:has(meta[property="og:site_name"][content="Twitter"])
[aria-label="Timeline: Trending now"] {
display: none !important;
}
[data-testid^="sidebarColumn"] {
display: none;
}
  `
  // Apply the collected CSS by injecting a <style> element.
  // The file's final lines are elided in this diff; this injection
  // step is an assumed completion, not the original code.
  const styleElement = document.createElement("style");
  styleElement.textContent = styles;
  document.head.appendChild(styleElement);
}

@ -8,7 +8,7 @@
<link rel="shortcut icon" href="/favicon.ico" type="image/vnd.microsoft.icon">
% if(test -f $sitedir/_werc/pub/style.css)
% echo ' <link rel="stylesheet" href="/_werc/pub/style.css" type="text/css" media="screen" title="default">'
<link rel="alternate" type="application/rss+xml" title="RSS for Measure is Unceasing" href="/blog/index.rss" />
<meta charset="UTF-8">
% # Legacy charset declaration for backwards compatibility with non-html5 browsers.
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
@ -30,7 +30,7 @@
% cat $h
%($"extraHeaders%)
<script data-isso="//comments.nunosempere.com/" src="//comments.nunosempere.com/js/embed.min.js"></script>
<script data-isso="//comments.nunosempere.com/" data-isso-max-comments-top="inf" data-isso-max-comments-nested="inf" data-isso-postbox-text-text-en="On the Internet, nobody knows you are a dog" src="//comments.nunosempere.com/js/embed.min.js"></script>
% # To add math
% # <script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>

@ -1,7 +1,7 @@
## In 2019...
- [EA Mental Health Survey: Results and Analysis.](https://nunosempere.com/2019/06/13/ea-mental-health)
- [Why do social movements fail: Two concrete examples.](https://nunosempere.com/2019/10/04/social-movements)
- [Shapley values: Better than counterfactuals](https://nunosempere.com/2019/10/10/shapley-values-better-than-counterfactuals)
- [[Part 1] Amplifying generalist research via forecasting models of impact and challenges](https://nunosempere.com/2019/12/19/amplifying-general-research-via-forecasting-i)
- [[Part 2] Amplifying generalist research via forecasting results from a preliminary exploration](https://nunosempere.com/2019/12/20/amplifying-general-research-via-forecasting-ii)
- [EA Mental Health Survey: Results and Analysis.](https://nunosempere.com/blog/2019/06/13/ea-mental-health)
- [Why do social movements fail: Two concrete examples.](https://nunosempere.com/blog/2019/10/04/social-movements)
- [Shapley values: Better than counterfactuals](https://nunosempere.com/blog/2019/10/10/shapley-values-better-than-counterfactuals)
- [[Part 1] Amplifying generalist research via forecasting models of impact and challenges](https://nunosempere.com/blog/2019/12/19/amplifying-general-research-via-forecasting-i)
- [[Part 2] Amplifying generalist research via forecasting results from a preliminary exploration](https://nunosempere.com/blog/2019/12/20/amplifying-general-research-via-forecasting-ii)

@ -1,20 +1,20 @@
## In 2020...
- [A review of two free online MIT Global Poverty courses](https://nunosempere.com/2020/01/15/mit-edx-review)
- [A review of two books on survey-making](https://nunosempere.com/2020/03/01/survey-making)
- [Shapley Values II: Philanthropic Coordination Theory & other miscellanea.](https://nunosempere.com/2020/03/10/shapley-values-ii)
- [New Cause Proposal: International Supply Chain Accountability](https://nunosempere.com/2020/04/01/international-supply-chain-accountability)
- [Forecasting Newsletter: April 2020](https://nunosempere.com/2020/04/30/forecasting-newsletter-2020-04)
- [Forecasting Newsletter: May 2020.](https://nunosempere.com/2020/05/31/forecasting-newsletter-2020-05)
- [Forecasting Newsletter: June 2020.](https://nunosempere.com/2020/07/01/forecasting-newsletter-2020-06)
- [Forecasting Newsletter: July 2020.](https://nunosempere.com/2020/08/01/forecasting-newsletter-2020-07)
- [Forecasting Newsletter: August 2020. ](https://nunosempere.com/2020/09/01/forecasting-newsletter-august-2020)
- [Forecasting Newsletter: September 2020. ](https://nunosempere.com/2020/10/01/forecasting-newsletter-september-2020)
- [Forecasting Newsletter: October 2020.](https://nunosempere.com/2020/11/01/forecasting-newsletter-october-2020)
- [Incentive Problems With Current Forecasting Competitions.](https://nunosempere.com/2020/11/10/incentive-problems-with-current-forecasting-competitions)
- [Announcing the Forecasting Innovation Prize](https://nunosempere.com/2020/11/15/announcing-the-forecasting-innovation-prize)
- [Predicting the Value of Small Altruistic Projects: A Proof of Concept Experiment.](https://nunosempere.com/2020/11/22/predicting-the-value-of-small-altruistic-projects-a-proof-of)
- [An experiment to evaluate the value of one researcher's work](https://nunosempere.com/2020/12/01/an-experiment-to-evaluate-the-value-of-one-researcher-s-work)
- [Forecasting Newsletter: November 2020.](https://nunosempere.com/2020/12/01/forecasting-newsletter-november-2020)
- [What are good rubrics or rubric elements to evaluate and predict impact?](https://nunosempere.com/2020/12/03/what-are-good-rubrics-or-rubric-elements-to-evaluate-and)
- [Big List of Cause Candidates](https://nunosempere.com/2020/12/25/big-list-of-cause-candidates)
- [A review of two free online MIT Global Poverty courses](https://nunosempere.com/blog/2020/01/15/mit-edx-review)
- [A review of two books on survey-making](https://nunosempere.com/blog/2020/03/01/survey-making)
- [Shapley Values II: Philanthropic Coordination Theory & other miscellanea.](https://nunosempere.com/blog/2020/03/10/shapley-values-ii)
- [New Cause Proposal: International Supply Chain Accountability](https://nunosempere.com/blog/2020/04/01/international-supply-chain-accountability)
- [Forecasting Newsletter: April 2020](https://nunosempere.com/blog/2020/04/30/forecasting-newsletter-2020-04)
- [Forecasting Newsletter: May 2020.](https://nunosempere.com/blog/2020/05/31/forecasting-newsletter-2020-05)
- [Forecasting Newsletter: June 2020.](https://nunosempere.com/blog/2020/07/01/forecasting-newsletter-2020-06)
- [Forecasting Newsletter: July 2020.](https://nunosempere.com/blog/2020/08/01/forecasting-newsletter-2020-07)
- [Forecasting Newsletter: August 2020. ](https://nunosempere.com/blog/2020/09/01/forecasting-newsletter-august-2020)
- [Forecasting Newsletter: September 2020. ](https://nunosempere.com/blog/2020/10/01/forecasting-newsletter-september-2020)
- [Forecasting Newsletter: October 2020.](https://nunosempere.com/blog/2020/11/01/forecasting-newsletter-october-2020)
- [Incentive Problems With Current Forecasting Competitions.](https://nunosempere.com/blog/2020/11/10/incentive-problems-with-current-forecasting-competitions)
- [Announcing the Forecasting Innovation Prize](https://nunosempere.com/blog/2020/11/15/announcing-the-forecasting-innovation-prize)
- [Predicting the Value of Small Altruistic Projects: A Proof of Concept Experiment.](https://nunosempere.com/blog/2020/11/22/predicting-the-value-of-small-altruistic-projects-a-proof-of)
- [An experiment to evaluate the value of one researcher's work](https://nunosempere.com/blog/2020/12/01/an-experiment-to-evaluate-the-value-of-one-researcher-s-work)
- [Forecasting Newsletter: November 2020.](https://nunosempere.com/blog/2020/12/01/forecasting-newsletter-november-2020)
- [What are good rubrics or rubric elements to evaluate and predict impact?](https://nunosempere.com/blog/2020/12/03/what-are-good-rubrics-or-rubric-elements-to-evaluate-and)
- [Big List of Cause Candidates](https://nunosempere.com/blog/2020/12/25/big-list-of-cause-candidates)

@ -1,30 +1,30 @@
## In 2021...
- [Forecasting Newsletter: December 2020](https://nunosempere.com/2021/01/01/forecasting-newsletter-december-2020)
- [2020: Forecasting in Review](https://nunosempere.com/2021/01/10/2020-forecasting-in-review)
- [A Funnel for Cause Candidates](https://nunosempere.com/2021/01/13/a-funnel-for-cause-candidates)
- [Forecasting Newsletter: January 2021](https://nunosempere.com/2021/02/01/forecasting-newsletter-january-2021)
- [Forecasting Prize Results](https://nunosempere.com/2021/02/19/forecasting-prize-results)
- [Forecasting Newsletter: February 2021](https://nunosempere.com/2021/03/01/forecasting-newsletter-february-2021)
- [Introducing Metaforecast: A Forecast Aggregator and Search Tool](https://nunosempere.com/2021/03/07/introducing-metaforecast-a-forecast-aggregator-and-search)
- [Relative Impact of the First 10 EA Forum Prize Winners](https://nunosempere.com/2021/03/16/relative-impact-of-the-first-10-ea-forum-prize-winners)
- [Forecasting Newsletter: March 2021](https://nunosempere.com/2021/04/01/forecasting-newsletter-march-2021)
- [Forecasting Newsletter: April 2021](https://nunosempere.com/2021/05/01/forecasting-newsletter-april-2021)
- [Forecasting Newsletter: May 2021](https://nunosempere.com/2021/06/01/forecasting-newsletter-may-2021)
- [2018-2019 Long-Term Future Fund Grantees: How did they do?](https://nunosempere.com/2021/06/16/2018-2019-long-term-future-fund-grantees-how-did-they-do)
- [What should the norms around privacy and evaluation in the EA community be?](https://nunosempere.com/2021/06/16/what-should-the-norms-around-privacy-and-evaluation-in-the)
- [Shallow evaluations of longtermist organizations](https://nunosempere.com/2021/06/24/shallow-evaluations-of-longtermist-organizations)
- [Forecasting Newsletter: June 2021](https://nunosempere.com/2021/07/01/forecasting-newsletter-june-2021)
- [Forecasting Newsletter: July 2021](https://nunosempere.com/2021/08/01/forecasting-newsletter-july-2021)
- [Forecasting Newsletter: August 2021](https://nunosempere.com/2021/09/01/forecasting-newsletter-august-2021)
- [Frank Feedback Given To Very Junior Researchers](https://nunosempere.com/2021/09/01/frank-feedback-given-to-very-junior-researchers)
- [Building Blocks of Utility Maximization](https://nunosempere.com/2021/09/20/building-blocks-of-utility-maximization)
- [Forecasting Newsletter: September 2021.](https://nunosempere.com/2021/10/01/forecasting-newsletter-september-2021)
- [An estimate of the value of Metaculus questions](https://nunosempere.com/2021/10/22/an-estimate-of-the-value-of-metaculus-questions)
- [Forecasting Newsletter: October 2021.](https://nunosempere.com/2021/11/02/forecasting-newsletter-october-2021)
- [A Model of Patient Spending and Movement Building](https://nunosempere.com/2021/11/08/a-model-of-patient-spending-and-movement-building)
- [Simple comparison polling to create utility functions](https://nunosempere.com/2021/11/15/simple-comparison-polling-to-create-utility-functions)
- [Pathways to impact for forecasting and evaluation](https://nunosempere.com/2021/11/25/pathways-to-impact-for-forecasting-and-evaluation)
- [Forecasting Newsletter: November 2021](https://nunosempere.com/2021/12/02/forecasting-newsletter-november-2021)
- [External Evaluation of the EA Wiki](https://nunosempere.com/2021/12/13/external-evaluation-of-the-ea-wiki)
- [Prediction Markets in The Corporate Setting](https://nunosempere.com/2021/12/31/prediction-markets-in-the-corporate-setting)
- [Forecasting Newsletter: December 2020](https://nunosempere.com/blog/2021/01/01/forecasting-newsletter-december-2020)
- [2020: Forecasting in Review](https://nunosempere.com/blog/2021/01/10/2020-forecasting-in-review)
- [A Funnel for Cause Candidates](https://nunosempere.com/blog/2021/01/13/a-funnel-for-cause-candidates)
- [Forecasting Newsletter: January 2021](https://nunosempere.com/blog/2021/02/01/forecasting-newsletter-january-2021)
- [Forecasting Prize Results](https://nunosempere.com/blog/2021/02/19/forecasting-prize-results)
- [Forecasting Newsletter: February 2021](https://nunosempere.com/blog/2021/03/01/forecasting-newsletter-february-2021)
- [Introducing Metaforecast: A Forecast Aggregator and Search Tool](https://nunosempere.com/blog/2021/03/07/introducing-metaforecast-a-forecast-aggregator-and-search)
- [Relative Impact of the First 10 EA Forum Prize Winners](https://nunosempere.com/blog/2021/03/16/relative-impact-of-the-first-10-ea-forum-prize-winners)
- [Forecasting Newsletter: March 2021](https://nunosempere.com/blog/2021/04/01/forecasting-newsletter-march-2021)
- [Forecasting Newsletter: April 2021](https://nunosempere.com/blog/2021/05/01/forecasting-newsletter-april-2021)
- [Forecasting Newsletter: May 2021](https://nunosempere.com/blog/2021/06/01/forecasting-newsletter-may-2021)
- [2018-2019 Long-Term Future Fund Grantees: How did they do?](https://nunosempere.com/blog/2021/06/16/2018-2019-long-term-future-fund-grantees-how-did-they-do)
- [What should the norms around privacy and evaluation in the EA community be?](https://nunosempere.com/blog/2021/06/16/what-should-the-norms-around-privacy-and-evaluation-in-the)
- [Shallow evaluations of longtermist organizations](https://nunosempere.com/blog/2021/06/24/shallow-evaluations-of-longtermist-organizations)
- [Forecasting Newsletter: June 2021](https://nunosempere.com/blog/2021/07/01/forecasting-newsletter-june-2021)
- [Forecasting Newsletter: July 2021](https://nunosempere.com/blog/2021/08/01/forecasting-newsletter-july-2021)
- [Forecasting Newsletter: August 2021](https://nunosempere.com/blog/2021/09/01/forecasting-newsletter-august-2021)
- [Frank Feedback Given To Very Junior Researchers](https://nunosempere.com/blog/2021/09/01/frank-feedback-given-to-very-junior-researchers)
- [Building Blocks of Utility Maximization](https://nunosempere.com/blog/2021/09/20/building-blocks-of-utility-maximization)
- [Forecasting Newsletter: September 2021.](https://nunosempere.com/blog/2021/10/01/forecasting-newsletter-september-2021)
- [An estimate of the value of Metaculus questions](https://nunosempere.com/blog/2021/10/22/an-estimate-of-the-value-of-metaculus-questions)
- [Forecasting Newsletter: October 2021.](https://nunosempere.com/blog/2021/11/02/forecasting-newsletter-october-2021)
- [A Model of Patient Spending and Movement Building](https://nunosempere.com/blog/2021/11/08/a-model-of-patient-spending-and-movement-building)
- [Simple comparison polling to create utility functions](https://nunosempere.com/blog/2021/11/15/simple-comparison-polling-to-create-utility-functions)
- [Pathways to impact for forecasting and evaluation](https://nunosempere.com/blog/2021/11/25/pathways-to-impact-for-forecasting-and-evaluation)
- [Forecasting Newsletter: November 2021](https://nunosempere.com/blog/2021/12/02/forecasting-newsletter-november-2021)
- [External Evaluation of the EA Wiki](https://nunosempere.com/blog/2021/12/13/external-evaluation-of-the-ea-wiki)
- [Prediction Markets in The Corporate Setting](https://nunosempere.com/blog/2021/12/31/prediction-markets-in-the-corporate-setting)

@ -1,59 +1,59 @@
## In 2022...
- [Forecasting Newsletter: December 2021](https://nunosempere.com/2022/01/10/forecasting-newsletter-december-2021)
- [Forecasting Newsletter: Looking back at 2021.](https://nunosempere.com/2022/01/27/forecasting-newsletter-looking-back-at-2021)
- [Forecasting Newsletter: January 2022](https://nunosempere.com/2022/02/03/forecasting-newsletter-january-2022)
- [Splitting the timeline as an extinction risk intervention](https://nunosempere.com/2022/02/06/splitting-the-timeline-as-an-extinction-risk-intervention)
- [We are giving $10k as forecasting micro-grants](https://nunosempere.com/2022/02/08/we-are-giving-usd10k-as-forecasting-micro-grants)
- [Five steps for quantifying speculative interventions](https://nunosempere.com/2022/02/18/five-steps-for-quantifying-speculative-interventions)
- [Forecasting Newsletter: February 2022](https://nunosempere.com/2022/03/05/forecasting-newsletter-february-2022)
- [Samotsvety Nuclear Risk Forecasts — March 2022](https://nunosempere.com/2022/03/10/samotsvety-nuclear-risk-forecasts-march-2022)
- [Valuing research works by eliciting comparisons from EA researchers](https://nunosempere.com/2022/03/17/valuing-research-works-by-eliciting-comparisons-from-ea)
- [Forecasting Newsletter: April 2222](https://nunosempere.com/2022/04/01/forecasting-newsletter-april-2222)
- [Forecasting Newsletter: March 2022](https://nunosempere.com/2022/04/05/forecasting-newsletter-march-2022)
- [A quick note on the value of donations](https://nunosempere.com/2022/04/06/note-donations)
- [Open Philanthropy allocation by cause area](https://nunosempere.com/2022/04/07/openphil-allocation)
- [Better scoring rules](https://nunosempere.com/2022/04/16/optimal-scoring)
- [Simple Squiggle](https://nunosempere.com/2022/04/17/simple-squiggle)
- [EA Forum Lowdown: April 2022](https://nunosempere.com/2022/05/01/ea-forum-lowdown-april-2022)
- [Forecasting Newsletter: April 2022](https://nunosempere.com/2022/05/10/forecasting-newsletter-april-2022)
- [Infinite Ethics 101: Stochastic and Statewise Dominance as a Backup Decision Theory when Expected Values Fail](https://nunosempere.com/2022/05/20/infinite-ethics-101)
- [Forecasting Newsletter: May 2022](https://nunosempere.com/2022/06/03/forecasting-newsletter-may-2022)
- [The Tragedy of Calisto and Melibea](https://nunosempere.com/2022/06/14/the-tragedy-of-calisto-and-melibea)
- [A Critical Review of Open Philanthropy's Bet On Criminal Justice Reform](https://nunosempere.com/2022/06/16/criminal-justice)
- [Cancellation insurance](https://nunosempere.com/2022/07/04/cancellation-insurance)
- [I will bet on your success on Manifold Markets](https://nunosempere.com/2022/07/05/i-will-bet-on-your-success-or-failure)
- [The Maximum Vindictiveness Strategy](https://nunosempere.com/2022/07/09/maximum-vindictiveness-strategy)
- [Forecasting Newsletter: June 2022](https://nunosempere.com/2022/07/12/forecasting-newsletter-june-2022)
- [Some thoughts on Turing.jl](https://nunosempere.com/2022/07/23/thoughts-on-turing-julia)
- [How much would I have to run to lose 20 kilograms?](https://nunosempere.com/2022/07/27/how-much-to-run-to-lose-20-kilograms)
- [$1,000 Squiggle Experimentation Challenge](https://nunosempere.com/2022/08/04/usd1-000-squiggle-experimentation-challenge)
- [Forecasting Newsletter: July 2022](https://nunosempere.com/2022/08/08/forecasting-newsletter-july-2022)
- [A concern about the "evolutionary anchor" of Ajeya Cotra's report](https://nunosempere.com/2022/08/10/evolutionary-anchor)
- [What do Americans think 'cutlery' means?](https://nunosempere.com/2022/08/18/what-do-americans-mean-by-cutlery)
- [Introduction to Fermi estimates](https://nunosempere.com/2022/08/20/fermi-introduction)
- [A comment on Cox's theorem and probabilistic inductivism.](https://nunosempere.com/2022/08/31/on-cox-s-theorem-and-probabilistic-induction)
- [Simple estimation examples in Squiggle](https://nunosempere.com/2022/09/02/simple-estimation-examples-in-squiggle)
- [Forecasting Newsletter: August 2022.](https://nunosempere.com/2022/09/10/forecasting-newsletter-august-2022)
- [Distribution of salaries in Spain](https://nunosempere.com/2022/09/11/salary-ranges-spain)
- [An experiment eliciting relative estimates for Open Philanthropy's 2018 AI safety grants](https://nunosempere.com/2022/09/12/an-experiment-eliciting-relative-estimates-for-open)
- [Use distributions to more parsimoniously estimate impact](https://nunosempere.com/2022/09/15/use-distributions-to-more-parsimoniously-estimate-impact)
- [Utilitarianism: An Incomplete Approach](https://nunosempere.com/2022/09/19/utilitarianism-an-incomplete-approach)
- [$5k challenge to quantify the impact of 80,000 hours' top career paths](https://nunosempere.com/2022/09/23/usd5k-challenge-to-quantify-the-impact-of-80-000-hours-top)
- [Use a less coarse analysis of AMF beneficiary age and consider counterfactual deaths](https://nunosempere.com/2022/09/28/granular-AMF)
- [Samotsvety Nuclear Risk update October 2022](https://nunosempere.com/2022/10/03/samotsvety-nuclear-risk-update-october-2022)
- [Five slightly more hardcore Squiggle models.](https://nunosempere.com/2022/10/10/five-slightly-more-hardcore-squiggle-models)
- [Forecasting Newsletter: September 2022.](https://nunosempere.com/2022/10/12/forecasting-newsletter-september-2022)
- [Sometimes you give to the commons, and sometimes you take from the commons](https://nunosempere.com/2022/10/17/the-commons)
- [Brief evaluations of top-10 billionaires](https://nunosempere.com/2022/10/21/brief-evaluations-of-top-10-billionnaires)
- [Are flimsy evaluations worth it?](https://nunosempere.com/2022/10/27/are-flimsy-evaluations-worth-it)
- [Brief thoughts on my personal research strategy](https://nunosempere.com/2022/10/31/brief-thoughts-personal-strategy)
- [Metaforecast late 2022 update: GraphQL API, Charts, better infrastructure behind the scenes.](https://nunosempere.com/2022/11/04/metaforecast-late-2022-update)
- [Tracking the money flows in forecasting](https://nunosempere.com/2022/11/06/forecasting-money-flows)
- [Forecasting Newsletter for October 2022](https://nunosempere.com/2022/11/15/forecasting-newsletter-for-october-2022)
- [Some data on the stock of EA™ funding ](https://nunosempere.com/2022/11/20/brief-update-ea-funding)
- [List of past fraudsters similar to SBF](https://nunosempere.com/2022/11/28/list-of-past-fraudsters-similar-to-sbf)
- [Goodhart's law and aligning politics with human flourishing ](https://nunosempere.com/2022/12/05/goodhart-politics)
- [COVID-19 in rural Balochistan, Pakistan: Two interviews from May 2020](https://nunosempere.com/2022/12/16/covid-19-in-rural-balochistan-pakistan-two-interviews-from-1)
- [Hacking on rose](https://nunosempere.com/2022/12/20/hacking-on-rose)
- [A basic argument for AI risk](https://nunosempere.com/2022/12/23/ai-risk-rohin-shah)
- [Forecasting Newsletter: December 2021](https://nunosempere.com/blog/2022/01/10/forecasting-newsletter-december-2021)
- [Forecasting Newsletter: Looking back at 2021.](https://nunosempere.com/blog/2022/01/27/forecasting-newsletter-looking-back-at-2021)
- [Forecasting Newsletter: January 2022](https://nunosempere.com/blog/2022/02/03/forecasting-newsletter-january-2022)
- [Splitting the timeline as an extinction risk intervention](https://nunosempere.com/blog/2022/02/06/splitting-the-timeline-as-an-extinction-risk-intervention)
- [We are giving $10k as forecasting micro-grants](https://nunosempere.com/blog/2022/02/08/we-are-giving-usd10k-as-forecasting-micro-grants)
- [Five steps for quantifying speculative interventions](https://nunosempere.com/blog/2022/02/18/five-steps-for-quantifying-speculative-interventions)
- [Forecasting Newsletter: February 2022](https://nunosempere.com/blog/2022/03/05/forecasting-newsletter-february-2022)
- [Samotsvety Nuclear Risk Forecasts — March 2022](https://nunosempere.com/blog/2022/03/10/samotsvety-nuclear-risk-forecasts-march-2022)
- [Valuing research works by eliciting comparisons from EA researchers](https://nunosempere.com/blog/2022/03/17/valuing-research-works-by-eliciting-comparisons-from-ea)
- [Forecasting Newsletter: April 2222](https://nunosempere.com/blog/2022/04/01/forecasting-newsletter-april-2222)
- [Forecasting Newsletter: March 2022](https://nunosempere.com/blog/2022/04/05/forecasting-newsletter-march-2022)
- [A quick note on the value of donations](https://nunosempere.com/blog/2022/04/06/note-donations)
- [Open Philanthropy allocation by cause area](https://nunosempere.com/blog/2022/04/07/openphil-allocation)
- [Better scoring rules](https://nunosempere.com/blog/2022/04/16/optimal-scoring)
- [Simple Squiggle](https://nunosempere.com/blog/2022/04/17/simple-squiggle)
- [EA Forum Lowdown: April 2022](https://nunosempere.com/blog/2022/05/01/ea-forum-lowdown-april-2022)
- [Forecasting Newsletter: April 2022](https://nunosempere.com/blog/2022/05/10/forecasting-newsletter-april-2022)
- [Infinite Ethics 101: Stochastic and Statewise Dominance as a Backup Decision Theory when Expected Values Fail](https://nunosempere.com/blog/2022/05/20/infinite-ethics-101)
- [Forecasting Newsletter: May 2022](https://nunosempere.com/blog/2022/06/03/forecasting-newsletter-may-2022)
- [The Tragedy of Calisto and Melibea](https://nunosempere.com/blog/2022/06/14/the-tragedy-of-calisto-and-melibea)
- [A Critical Review of Open Philanthropy's Bet On Criminal Justice Reform](https://nunosempere.com/blog/2022/06/16/criminal-justice)
- [Cancellation insurance](https://nunosempere.com/blog/2022/07/04/cancellation-insurance)
- [I will bet on your success on Manifold Markets](https://nunosempere.com/blog/2022/07/05/i-will-bet-on-your-success-or-failure)
- [The Maximum Vindictiveness Strategy](https://nunosempere.com/blog/2022/07/09/maximum-vindictiveness-strategy)
- [Forecasting Newsletter: June 2022](https://nunosempere.com/blog/2022/07/12/forecasting-newsletter-june-2022)
- [Some thoughts on Turing.jl](https://nunosempere.com/blog/2022/07/23/thoughts-on-turing-julia)
- [How much would I have to run to lose 20 kilograms?](https://nunosempere.com/blog/2022/07/27/how-much-to-run-to-lose-20-kilograms)
- [$1,000 Squiggle Experimentation Challenge](https://nunosempere.com/blog/2022/08/04/usd1-000-squiggle-experimentation-challenge)
- [Forecasting Newsletter: July 2022](https://nunosempere.com/blog/2022/08/08/forecasting-newsletter-july-2022)
- [A concern about the "evolutionary anchor" of Ajeya Cotra's report](https://nunosempere.com/blog/2022/08/10/evolutionary-anchor)
- [What do Americans think 'cutlery' means?](https://nunosempere.com/blog/2022/08/18/what-do-americans-mean-by-cutlery)
- [Introduction to Fermi estimates](https://nunosempere.com/blog/2022/08/20/fermi-introduction)
- [A comment on Cox's theorem and probabilistic inductivism.](https://nunosempere.com/blog/2022/08/31/on-cox-s-theorem-and-probabilistic-induction)
- [Simple estimation examples in Squiggle](https://nunosempere.com/blog/2022/09/02/simple-estimation-examples-in-squiggle)
- [Forecasting Newsletter: August 2022.](https://nunosempere.com/blog/2022/09/10/forecasting-newsletter-august-2022)
- [Distribution of salaries in Spain](https://nunosempere.com/blog/2022/09/11/salary-ranges-spain)
- [An experiment eliciting relative estimates for Open Philanthropy's 2018 AI safety grants](https://nunosempere.com/blog/2022/09/12/an-experiment-eliciting-relative-estimates-for-open)
- [Use distributions to more parsimoniously estimate impact](https://nunosempere.com/blog/2022/09/15/use-distributions-to-more-parsimoniously-estimate-impact)
- [Utilitarianism: An Incomplete Approach](https://nunosempere.com/blog/2022/09/19/utilitarianism-an-incomplete-approach)
- [$5k challenge to quantify the impact of 80,000 hours' top career paths](https://nunosempere.com/blog/2022/09/23/usd5k-challenge-to-quantify-the-impact-of-80-000-hours-top)
- [Use a less coarse analysis of AMF beneficiary age and consider counterfactual deaths](https://nunosempere.com/blog/2022/09/28/granular-AMF)
- [Samotsvety Nuclear Risk update October 2022](https://nunosempere.com/blog/2022/10/03/samotsvety-nuclear-risk-update-october-2022)
- [Five slightly more hardcore Squiggle models.](https://nunosempere.com/blog/2022/10/10/five-slightly-more-hardcore-squiggle-models)
- [Forecasting Newsletter: September 2022.](https://nunosempere.com/blog/2022/10/12/forecasting-newsletter-september-2022)
- [Sometimes you give to the commons, and sometimes you take from the commons](https://nunosempere.com/blog/2022/10/17/the-commons)
- [Brief evaluations of top-10 billionaires](https://nunosempere.com/blog/2022/10/21/brief-evaluations-of-top-10-billionnaires)
- [Are flimsy evaluations worth it?](https://nunosempere.com/blog/2022/10/27/are-flimsy-evaluations-worth-it)
- [Brief thoughts on my personal research strategy](https://nunosempere.com/blog/2022/10/31/brief-thoughts-personal-strategy)
- [Metaforecast late 2022 update: GraphQL API, Charts, better infrastructure behind the scenes.](https://nunosempere.com/blog/2022/11/04/metaforecast-late-2022-update)
- [Tracking the money flows in forecasting](https://nunosempere.com/blog/2022/11/06/forecasting-money-flows)
- [Forecasting Newsletter for October 2022](https://nunosempere.com/blog/2022/11/15/forecasting-newsletter-for-october-2022)
- [Some data on the stock of EA™ funding ](https://nunosempere.com/blog/2022/11/20/brief-update-ea-funding)
- [List of past fraudsters similar to SBF](https://nunosempere.com/blog/2022/11/28/list-of-past-fraudsters-similar-to-sbf)
- [Goodhart's law and aligning politics with human flourishing ](https://nunosempere.com/blog/2022/12/05/goodhart-politics)
- [COVID-19 in rural Balochistan, Pakistan: Two interviews from May 2020](https://nunosempere.com/blog/2022/12/16/covid-19-in-rural-balochistan-pakistan-two-interviews-from-1)
- [Hacking on rose](https://nunosempere.com/blog/2022/12/20/hacking-on-rose)
- [A basic argument for AI risk](https://nunosempere.com/blog/2022/12/23/ai-risk-rohin-shah)

@ -78,9 +78,7 @@ Just-in-time Bayesianism would explain this as follows.
\end{cases}
\]
The second estimate is the estimate produced by [Laplace's law](https://en.wikipedia.org/wiki/Rule_of_succession)---an instance of Bayesian reasoning given an ignorance prior---given one "success" (a dog biting a human) and \(n\) "failures" (a dog not biting a human).
Now, because the first hypothesis assigns very low probability to what the man has experienced, a whole bunch of the probability goes to the second hypothesis. Note that the prior degree of credence to assign to this second hypothesis *isn't* governed by Bayes' law, and so one can't do a straightforward Bayesian update.
The second estimate is the estimate produced by [Laplace's law](https://en.wikipedia.org/wiki/Rule_of_succession)---an instance of Bayesian reasoning given an ignorance prior---given one "success" (a dog biting a human) and \(n\) "failures" (a dog not biting a human). <p>Now, because the first hypothesis assigns very low probability to what the man has experienced, a whole bunch of the probability goes to the second hypothesis. Note that the prior degree of credence to assign to this second hypothesis *isn't* governed by Bayes' law, and so one can't do a straightforward Bayesian update.</p>
But now, with more and more encounters, the probability that the second hypothesis assigns to a bite will go as \(\frac{2}{n+2}\), where \(n\) is the number of times the man interacts with a dog. But this goes down very slowly:
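To see just how slowly, here is a minimal self-contained sketch (mine, not from the original post) that prints the Laplace estimate after \(n\) interactions:

```c
#include <stdio.h>

/* Rule of succession with one observed bite among n total
   interactions: P(next bite) = (1 + 1) / (n + 2). */
double laplace_bite_probability(int n) {
    return 2.0 / (n + 2.0);
}

int main(void) {
    int checkpoints[] = {1, 10, 100, 1000, 10000};
    int num = sizeof(checkpoints) / sizeof(checkpoints[0]);
    for (int i = 0; i < num; i++) {
        /* The estimate decays only harmonically, so the fear fades slowly. */
        printf("n = %5d interactions: P(next bite) = %.5f\n",
               checkpoints[i], laplace_bite_probability(checkpoints[i]));
    }
    return 0;
}
```

Even after 10,000 peaceful interactions, the estimated bite probability is still about 0.02%, i.e., never quite zero.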

@ -19,21 +19,22 @@ Suppose that you order the ex-ante values of grants in different cause areas. Th
For simplicity, let us just pick the case where there are two cause areas:
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/values-1.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/values-1.png" class='.img-medium-center' style="width: 50%;" >
More undiluted shades represent more valuable grants (e.g., larger reductions per dollar of human suffering, animal suffering, or existential risk), and lighter shades represent less valuable grants. Due to diminishing marginal returns, I've drawn the most valuable grants as smaller, though this doesn't particularly matter.
Now, we can augment the picture by also considering the marginal grants which didn't get funded.
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/values-2.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/values-2.png" style="width: 70%;">
In particular, imagine that the marginal grant which didn't get funded for cause #1 has the same size as the marginal grant that did get funded for cause #2 (this doesn't affect the thrust of the argument, it just makes it more apparent):
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/values-3.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/values-3.png" style="width: 70%;">
Now, from this, we can deduce some bounds on relative values:
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-1.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-1.png" style="width: 70%;">
In words rather than in shades of colour, this would be:
@ -47,7 +48,7 @@ Or, dividing by L1 and L2,
In colors, this would correspond to all four squares having the same size:
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-2-black-border.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-2-black-border.png" class='.img-medium-center' style="width: 20%;" >
Giving some values, this could be:
@ -60,22 +61,25 @@ From this we could deduce that 6 reds > 10 greens > 2 reds, or that one green is
But the above was for one year. Now comes another year, with its own set of grants. But we are keeping the amount we allocate to each area constant.
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/year-1.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/year-1.png" style="width: 100%;">
It's been a less promising year for green, and a more promising year for red. So this means that some of the stuff that wasn't funded last year for green is funded now, and some of the stuff that was funded last year for red isn't funded now:
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/year-2.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/year-2.png" style="width: 100%;">
Now we can do the same comparisons as the last time:
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-year-2.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-year-2.png" style="width: 20%;"><br>
And when we compare them against the previous year,
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-both-years.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-both-years.png" style="width: 40%;">
we notice that there is an inconsistency.
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-contradiction.png" class='.img-medium-center'>
<img src="https://images.nunosempere.com/blog/2023/04/25/worldview-diversification/comparison-contradiction.png" style="width: 50%;">
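To put hypothetical numbers on the inconsistency (these are illustrative, not read off the images): suppose the year 1 comparisons imply 6 reds > 10 greens > 2 reds, i.e., that one green is worth between 0.2 and 0.6 reds. If the year 2 comparisons instead imply 10 greens > 8 reds > 5 greens, i.e., that one green is worth between 0.8 and 1.6 reds, then the two intervals don't overlap, and no single exchange rate between red and green value is consistent with the funding decisions of both years.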
### Why is the above a problem

@ -46,7 +46,7 @@ As for suggestions, my main one would be to add more analytical clarity, and to
The forecast is conditional on scaling laws continuing as they have, and on the various analytical assumptions not introducing too much error. And it's not a forecast of when models will be transformative, but of an upper bound, because as we mentioned at the beginning, indistinguishability is a sufficient but not a necessary condition for transformative AI. The authors point this out at the beginning, but I think this could be pointed out more obviously.
The text also generally needs an editor (e.g., use the first person plural, as there are two authors). As I was reading it, I felt the compulsion to rewrite it in better prose. But I didn't think that it was worth it for me to do that, or to point out style mistakes—besides my wish for greater clarity—because you can just hire an editor for this. And also, an alert reader should be able to extract the core of what you are saying even though prose could be improved. I did write down some impressions as I was reading in a different document, though.
The text also generally needs an editor (e.g., use the first person plural, as there are two authors). As I was reading it, I felt the compulsion to rewrite it in better prose. But I did not think that it was worth it for me to do that, or to point out style mistakes—besides my wish for greater clarity—because you can just hire an editor for this. And also, an alert reader should be able to extract the core of what you are saying even though prose could be improved. I did write down some impressions as I was reading in a different document, though.
Overall I liked it, and would recommend that it be published. It's the kind of thing that, even if one thinks that it is not enough for a forecast on its own, seems like it would be a valuable input into other forecasts.

@ -29,11 +29,11 @@ But one could also imagine a society of extremely capable humans that were all u
To some extent, classic science-fiction provides templates of lives lived more like the second paragraph, rather than like the first. My personal understanding of the concept of privilege, in the social justice sense, has an element of richer kids having access to better strategies to imitate: learning English, creating one's own business, making friends who are productive in a capitalist society, and so on. But this still falls very much short of peak awesomeness.
This might suggests that it could be of value to create and promote literature and media that gives sketches of strategies and life trajectories that could be fruitfully imitated, that are better than existing life templates. One such piece might be [A Message to Garcia](https://courses.csail.mit.edu/6.803/pdf/hubbard1899.pdf).
This might suggest that it could be of value to create and promote literature and media that gives sketches of strategies and life trajectories that could be fruitfully imitated, that are better than existing life templates. One such piece might be [A Message to Garcia](https://courses.csail.mit.edu/6.803/pdf/hubbard1899.pdf).
Possible next steps:
1. Anayyze Effective Altruism, its goals and actions, through the lense of Girard's theory of mimesis.
1. Analyze Effective Altruism, its goals and actions, through the lens of Girard's theory of mimesis.
2. Greatly increase the production of beneficial memes. Attempt to narrate a better humanity into existence.
3. Make bibliographies and life histories of great people more widely available.
4. Interview a few of the most formidable people you can get your hands on.
@ -42,11 +42,11 @@ Possible next steps:
### 1.2. Role models
I can pinpoint the people who had the most influence on my proffessional career. They are a bit older than me, and I had good enough models of their work that I could at times steal some of their strategies.
I can pinpoint the people who had the most influence on my professional career. They are a bit older than me, and I had good enough models of their work that I could at times steal some of their strategies.
Some of the strategies I've copied are: explore the rationality community, get into EA, become an independent researcher, go to a summer fellowship at the Future of Humanity Institute, work on forecasting research, or incorporate programming into research. And so at a conference earlier last year, it turned out that I was not the only Spanish effective altruist forecasting researcher/programmer with a beard. But I'm more disagreeable, less social, and less into AI than both Jacob and Jaime, and don't work on the same sub-field, which means that I can't implement quite the same strategies. And so as time goes on, it becomes harder for me to find mentors or role models, i.e., people I know well enough and are similar enough that I can mimic their strategies.
In general, seems plausible that people's role models could have a large influence on their life trajectories. So perhaps we could make people better, harder, faster, stronger, by bringing better role models to their attention, or by more directly matching them with better mentors.
In general, it seems plausible that people's role models could have a large influence on their life trajectories. So perhaps we could make people better, harder, faster, stronger, by bringing better role models to their attention, or by more directly matching them with better mentors.
I think that substantial efforts could be fruitfully spent in this area.
@ -56,7 +56,7 @@ Possible next steps:
### 1.3. A taboo against strong optimizers
There is a thread which sees extraordinary people described as alien and monstruous. The foremost Spanish playwright was described as a ["monster of nature"](https://en.wikipedia.org/wiki/Lope_de_Vega). The genius Hungarian mathematicians of the 20th century are referred to as [the Martians](<https://en.wikipedia.org/wiki/The_Martians_(scientists)>). Super-geniuses in fiction are often cast as super-villains. We also observe a bunch of social mechanisms that ensure that humans remain mostly mediocre: these have been satirized under the [Law of Jante](https://en.wikipedia.org/wiki/Law_of_Jante). Within EA, Owen Cotton-Barratt (since disgraced) and Holden Karnofsky have been writting about [the](https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous) [pitfalls](https://forum.effectivealtruism.org/posts/5o3vttALksQQQiqkv/consequentialists-in-society-should-self-modify-to-have-side) of [strong](https://forum.effectivealtruism.org/posts/yJBhuCjqvJyd7kW7a/perils-of-optimizing-in-social-contexts) [optimization](https://forum.effectivealtruism.org/posts/cDv9mP5Yoz4k4fSbW/don-t-over-optimize-things).
There is a thread which sees extraordinary people described as alien and monstrous. The foremost Spanish playwright was described as a ["monster of nature"](https://en.wikipedia.org/wiki/Lope_de_Vega). The genius Hungarian mathematicians of the 20th century are referred to as [the Martians](<https://en.wikipedia.org/wiki/The_Martians_(scientists)>). Super-geniuses in fiction are often cast as super-villains. We also observe a bunch of social mechanisms that ensure that humans remain mostly mediocre: these have been satirized under the [Law of Jante](https://en.wikipedia.org/wiki/Law_of_Jante). Within EA, Owen Cotton-Barratt (since disgraced) and Holden Karnofsky have been writing about [the](https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous) [pitfalls](https://forum.effectivealtruism.org/posts/5o3vttALksQQQiqkv/consequentialists-in-society-should-self-modify-to-have-side) of [strong](https://forum.effectivealtruism.org/posts/yJBhuCjqvJyd7kW7a/perils-of-optimizing-in-social-contexts) [optimization](https://forum.effectivealtruism.org/posts/cDv9mP5Yoz4k4fSbW/don-t-over-optimize-things).
My sense is that there is something about intensity, single-mindedness, agency and power that people at times find scary and at times deeply attractive in other humans. What they don't find as scary is a boy-scout in tights with super-force, and so we get a Superman who doesn't topple suboptimal nations and instead spends half his time live-action role-playing as a journalist. Oh the waste.
@ -64,19 +64,19 @@ I think that this fear of powerful humans is more a symptom than a cause. Still,
It's also possible that the fear of powerful humans was pretty much justified in the past, in which case I would instead want to see better ways to align powerful humans, or some thinking about whether powerful humans in our current environment do in expectation more harm than good.
One story one could tell is that in previous century powerful people mostly were so because they had power directly over their servants, slaves and serfs—but that in the current capitalist market economy, powerful humans mostly are so because they have produced great value. But looking at, for instance, Bill Gates, I actually don't know whether Microsoft has done more harm than good compared to what would otherwise have happened.
One story one could tell is that in previous centuries powerful people mostly were so because they had power directly over their servants, slaves and serfs—but that in the current capitalist market economy, powerful humans mostly are so because they have produced great value. But looking at, for instance, Bill Gates, I actually don't know whether Microsoft has done more harm than good compared to what would otherwise have happened.
There are other possible stories in this vein, e.g., maybe trying to become more formidable requires removing some type of load-bearing constraint, but then once that is removed, people fall into existential dread or into psychopathy. I mostly don't buy it though.
Possible next steps:
- Figure out whether powerful humans do more good or more harm in our current environment
- If they powerful humans do more good than harm, work to change the perceptions of power so that becoming greatly powerful seems more desirable, and work on becoming greatly more powerful ourselves.
- If powerful humans do more good than harm, work to change the perceptions of power so that becoming greatly powerful seems more desirable, and work on becoming greatly more powerful ourselves.
- If they do more harm than good in general, figure out whether we in particular want to be more powerful, or more meek.
## 2. Value tradeoffs
There are many things humans can want besides being really formidable, like niceness, welcomingness, humility, status, tranquility, stability, job security or comfort. This might a legitimate value difference. But sometimes humans do mypically go down the path of lower formidability in a way which will sabotage their future ambitions.
There are many things humans can want besides being really formidable, like niceness, welcomingness, humility, status, tranquility, stability, job security or comfort. This might be a legitimate value difference. But sometimes humans do myopically go down the path of lower formidability in a way which will sabotage their future ambitions.
### 2.1. For want of an asshole, shit accumulated: Frank feedback is probably an underprovided public good.
@ -101,22 +101,22 @@ Here are a few dynamics I've noticed around criticism, ordered as bullet points:
2. Recipients of criticism often don't care about its content.
3. Recipients of criticism often perceive it as an attack.
4. If the target of criticism can choose what standard of criticism is acceptable, they will tend to choose very high standards.
5. Targets of criticism can insulate themselves against criticism by pointing out that its form or shape—as oppposed to its content—is flawed
5. Targets of criticism can insulate themselves against criticism by pointing out that its form or shape—as opposed to its content—is flawed
6. Producing criticism which deals with the above is expensive.
7. In practice, producing criticism ends up being expensive.
8. People also fear appearing disagreeable.
9. In practice, "doing criticism well" has a strong component of acknowledging and appeasing power.
8. Overall, this results in a suboptimal amount of criticism being produced, in contrast with the socially optimal rate.
9. A solution to this is [Crocker's Rules](http://sl4rg/crocker.html): Publizicing one's willingness to receive more flawed criticism.
10. At an institutional level, Crocker's rules would have to be adopted by those ultimately resposible for a given thing.
11. Many times, there isn't someone who is ultimately responsable for things. Though sometimes there is a CEO or a board of directors.
9. A solution to this is [Crocker's Rules](http://sl4.org/crocker.html): Publicizing one's willingness to receive more flawed criticism.
10. At an institutional level, Crocker's rules would have to be adopted by those ultimately responsible for a given thing.
11. Many times, there isn't someone who is ultimately responsible for things. Though sometimes there is a CEO or a board of directors.
12. When Crocker's rules are adopted, malicious, or merely status-seeking actors could exploit them to tarnish reputations or to quickly raise their status at the expense of others.
13. In practice, my sense is that the balance of things still points towards Crocker's rules being beneficial
13. In practice, my sense is that the balance of things still points towards Crocker's rules being beneficial.
14. While people who care should adopt Crocker's rules, this isn't enough to deal with all the bullshit, and so more steps are needed.
### 2.2. The "hardcore optimizer" hypothesis
Here is something which I've been thinking about as the "hardocore optimizer hypothesis":
Here is something which I've been thinking about as the "hardcore optimizer hypothesis":
> An actor which is under no constraints is infinitely more powerful than one neutered by many restrictions
@ -124,11 +124,11 @@ Here are some constraints that I think that Open Philanthropy (a large foundatio
- Legality: They are choosing not to take actions which would be illegal under US law.
- The Overton Window: They are choosing for their actions to not be "beyond the pale" according to the 21st century American lefty milieu.
- Responsability and loyalty towards their own staff: They are choosing to keep their staff around from year to year, rather than aggressively restructuring.
- Responsability and loyalty towards past grantees: Open Philanthropy will choose to exit a cause area gracefully, rather than leaving it at once.
- Responsibility and loyalty towards their own staff: They are choosing to keep their staff around from year to year, rather than aggressively restructuring.
- Responsibility and loyalty towards past grantees: Open Philanthropy will choose to exit a cause area gracefully, rather than leaving it at once.
- The "optimization is dangerous" constraint: They are choosing to not go full-steam on any one perspective, but rather to proceed cautiously and hedge their bets.
- Bounded trust: Open Philanthropy takes a long time to trust people, which means that they have limited staff, with limited attention, who are often busy.
- Explainability: Open Philanthropy employees lower down the totem pole probably can't take actions or recommend grants that their superiors can't understand
- Explainability: Open Philanthropy employees lower down the totem pole probably can't take actions or recommend grants that their superiors can't understand.
- Do not look like assholes: Open Philanthropy generally wants to appear to be nice people.
And here are a few actions that I think are unavailable to Open Philanthropy because of the above constraints:
@ -137,10 +137,10 @@ And here are a few actions that I think are unavailable to Open Philanthropy bec
- The Overton Window: Peter Thiel supported Trump in the Republican primary, and thereafter got some amount of influence in the beginning of the Trump administration.
- Responsibility and loyalty towards their own staff: This cuts off the option to have a Bridgewater-style churn, where you rate people intensely and then fire the bottom ¿10%? of the population every year.
- Responsibility and loyalty towards past grantees: This increases the cost of entering a new area, and thus cuts off the possibility of exploring many areas at once with fewer strings attached.
- The "optimization is dangerous" constraint: Personally I would love to see investment in advanced forecasting and evaluation systems that would seek to equalize the values of marginal grants across cause ares. But this doesn't jibe with their "wordview diversification" approach, and so it isn't happening, or is happening slowly rather than having happened fast already.
- Bounded trust: Open Philanthropy isn't willing to have a regranting sytem, like that of the FTX Future Fund. Their forecasting grantmaking in the past has also been sluggish and improvable.
- The "optimization is dangerous" constraint: Personally I would love to see investment in advanced forecasting and evaluation systems that would seek to equalize the values of marginal grants across cause areas. But this doesn't jibe with their "worldview diversification" approach, and so it isn't happening, or is happening slowly rather than having happened fast already.
- Bounded trust: Open Philanthropy isn't willing to have a regranting system, like that of the FTX Future Fund. Their forecasting grantmaking in the past has also been sluggish and improvable.
- Explainability: Hard to give an example here, since I don't really have that much insight into their inner workings.
- Do not look like assholes: This restricts OpenPhilanthropy's ability to call out people on bullshit, offer large monetary bets, or generally disagree with the Democratic party.
- Do not look like assholes: This restricts Open Philanthropy's ability to call out people on bullshit, offer large monetary bets, or generally disagree with the Democratic party.
And so by the time you are bound by half of those constraints, you might end up moving slowly and suboptimally. Perhaps this explains how Open Philanthropy came to donate to Hypermind, a forecasting platform which I know to have a really terrible UX that didn't allow for wide enough distributions, and which used an [aggregation method which ignored probabilities below 5%](https://docs.google.com/document/d/1fRg7twB2RLAc-Ey8NUj5qFUJCg-dp3yb/edit) for [predicting the state of AI in 2030](https://prod.hypermind.com/ngdp/en/showcase2/showcase.html?sc=AI2030). Instead, they could have invested in the early iteration of Manifold Markets, an innovative venture by former Google engineers with a UX infinitely superior to that of Hypermind.
@ -150,7 +150,7 @@ Hypermind isn't 60% as valuable as Manifold, it's maybe 0.1% to 2%. I think that
So cultural templates discourage us from being more formidable, and then value tradeoffs and self-imposed constraints diminish our ability to influence the world. My primary suggestion is to not trade formidability and truth-seeking for other values, like comfort and social harmony—or at least to make that tradeoff consciously and sparingly.
I haven't yet considered leaders. Leaders can inspire, coordinate and direct a movement. And yet sometimes we don't get the leaders we need, but have to work with those we have right now. It's not clear to me how to discuss the topic tactfully, without character-assassinating anyone. So I'm leaving the topic alone for now.
In the meantime, the hypotheses that I've covered don't seem exhaustive. So I'm really curious about readers' own thoughts. Comments are open.

@ -21,21 +21,21 @@ squiggle.c
You can follow some example usage in the examples/ folder:
1. In the [1st example](https://git.nunosempere.com/personal/squiggle.c/src/branch/master/examples/01_one_sample/example.c), we define a small model, and draw one sample from it
2. In the [2nd example](https://git.nunosempere.com/personal/squiggle.c/src/branch/master/examples/02_many_samples/example.c), we define a small model, and return many samples
3. In the [3rd example](https://git.nunosempere.com/personal/squiggle.c/src/branch/master/examples/03_gcc_nested_function/example.c), we use a gcc extension—nested functions—to rewrite the code from point 2 in a more linear way.
4. In the [4th example](https://git.nunosempere.com/personal/squiggle.c/src/branch/master/examples/04_sample_from_cdf_simple/example.c), we define some simple cdfs, and we draw samples from those cdfs. We see that this approach is slower than using the built-in samplers, e.g., the normal sampler.
5. In the [5th example](https://git.nunosempere.com/personal/squiggle.c/src/branch/master/examples/05_sample_from_cdf_beta/example.c), we define the cdf for the beta distribution, and we draw samples from it.
6. In the [6th example](https://git.nunosempere.com/personal/squiggle.c/src/branch/master/examples/06_gamma_beta/example.c), we take samples from simple gamma and beta distributions, using the samplers provided by this library.
7. In the [7th example](https://git.nunosempere.com/personal/squiggle.c/src/branch/master/examples/07_ci_beta/example.c), we get the 90% confidence interval of a beta distribution
8. The [8th example](https://git.nunosempere.com/personal/squiggle.c/src/branch/master/examples/08_nuclear_war/example.c) translates the models from Eli and Nuño from [Samotsvety Nuclear Risk Forecasts — March 2022](https://forum.nunosempere.com/posts/KRFXjCqqfGQAYirm5/samotsvety-nuclear-risk-forecasts-march-2022#Nu_o_Sempere) into squiggle.c, then creates a mixture from both, and returns the mean probability of death per month and the 90% confidence interval.
9. The [9th example](https://git.nunosempere.com/personal/squiggle.c/src/branch/master/examples/09_burn_10kg_fat/example.c) estimates how many minutes per day I would have to jump rope in order to lose 10kg of fat in half a year.
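To give a flavor of the API without clicking through, here is a minimal sketch of a small model in the style of the examples above. The names sample_beta and sample_to, and the seed convention, are my best recollection of the library's interface rather than verbatim quotes, so defer to the linked source.

```
#include "squiggle.h"
#include <stdio.h>
#include <stdlib.h>

// A toy model: probability of an event times its impact.
// NOTE: sample_beta, sample_to and the seed convention are assumptions
// about the API; check squiggle.h for the real signatures.
double sample_model(uint64_t* seed)
{
    double p_event = sample_beta(1, 5, seed);  // made-up prior
    double impact = sample_to(1, 100, seed);   // lognormal fit to a 90% c.i. of [1, 100]
    return p_event * impact;
}

int main()
{
    uint64_t* seed = malloc(sizeof(uint64_t));
    *seed = 1000; // arbitrary starting seed

    int n = 1000 * 1000;
    double sum = 0;
    for (int i = 0; i < n; i++) {
        sum += sample_model(seed);
    }
    printf("mean: %f\n", sum / n);

    free(seed);
    return 0;
}
```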
## Commentary
### squiggle.c is short
[squiggle.c](https://git.nunosempere.com/personal/squiggle.c/src/branch/master/squiggle.c) is less than 500 lines of C. The reader could just read it and grasp its contents.
### Core strategy
@ -248,7 +248,7 @@ Std of lognormal(0.644931, 4.795860): 39976300.711166, vs expected std: 18577298
delta: -18537322405.459286, relative delta: -463.707799
```
What is happening in this case is that you are taking a normal, like normal(-0.195240, 4.883106), and you are exponentiating it to arrive at a lognormal. But normal(-0.195240, 4.883106) is going to have some non-negligible weight on, say, 18. And exp(18) ≈ 66 million, so points like it end up contributing a nontrivial amount to the analytical mean and standard deviation, even though they have little probability mass.
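For intuition, here is a small self-contained sketch (deliberately not using squiggle.c, so a plain Box-Muller sampler stands in) that compares the analytical mean of a lognormal, exp(mu + sigma^2/2), with an empirical average. With sigma near 4.88, the empirical mean converges very slowly, because it is dominated by rare huge draws.

```
// Why sample moments of a wide lognormal converge slowly:
// the analytical mean of lognormal(mu, sigma) is exp(mu + sigma^2/2),
// but most of it comes from rare, huge draws.
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

double sample_unit_normal(void)
{
    // Box-Muller transform; a stand-in for a proper sampler.
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * 3.14159265358979 * u2);
}

int main(void)
{
    double mu = -0.195240, sigma = 4.883106;
    double analytical_mean = exp(mu + sigma * sigma / 2);
    int n = 1000 * 1000;
    double sum = 0;
    for (int i = 0; i < n; i++)
        sum += exp(mu + sigma * sample_unit_normal());
    printf("analytical mean: %f, empirical mean: %f\n",
        analytical_mean, sum / n);
    return 0;
}
```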
The reader can also check that for more plausible real-world values, like those fitting a lognormal to a really wide 90% confidence interval from 10 to 10k, errors aren't egregious:

@ -0,0 +1,90 @@
Webpages I am making available to my corner of the internet
===========================================================
Here is a list of internet services that I make freely available to friends and allies, broadly defined—if you are reading this, you qualify. They are listed roughly in order of usefulness.
### search.nunosempere.com
[search.nunosempere.com](https://search.nunosempere.com/) is an instance of [Whoogle](https://github.com/benbusby/whoogle-search). It presents Google results as they were and as they should have been: without clutter and without advertisements.
Readers are welcome to make this their default search engine. The process is a bit involved and depends on the browser, but can be found with a Whoogle search. In past years, I've had technical difficulties around once every six months, but I tend to fix them quickly.
### forum.nunosempere.com
[forum.nunosempere.com](https://forum.nunosempere.com) is a frontend to the [Effective Altruism Forum](https://forum.effectivealtruism.org/) that I personally find soothing. It is *much* faster than the official frontend, more minimalistic, and offers an RSS endpoint for all posts [here](https://forum.nunosempere.com/feed).
```
$ time curl https://forum.effectivealtruism.org > /dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 847k 0 847k 0 0 439k 0 --:--:-- 0:00:01 --:--:-- 438k
real 0m1.945s
user 0m0.030s
sys 0m0.021s
$ time curl https://forum.nunosempere.com/frontpage > /dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 35091 100 35091 0 0 190k 0 --:--:-- --:--:-- --:--:-- 190k
real 0m0.195s
user 0m0.025s
sys 0m0.004s
```
If you use the EA Forum with some frequency, I'd recommend giving it and [ea.greaterwrong.com](https://ea.greaterwrong.com/) a spin.
### shapleyvalue.com
[shapleyvalue.com](http://shapleyvalue.com/) is an online calculator for [Shapley Values](https://wikiless.northboot.xyz/wiki/Shapley_value?lang=en). I wrote it for [this explainer](https://forum.effectivealtruism.org/s/XbCaYR3QfDaeuJ4By/p/XHZJ9i7QBtAJZ6byW) after realizing that no other quick calculators exist.
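To illustrate the computation the calculator performs, here is a minimal sketch for a toy 3-player game: each player's Shapley value is their marginal contribution averaged over all orderings of the players. The coalition values here are made up, and the calculator itself may organize the computation differently.

```
// Shapley values for a 3-player toy game, by enumerating all 3! orderings.
// v[S] is the value of coalition S, indexed as a bitmask over players 0..2.
#include <stdio.h>

int main(void)
{
    double v[8] = {0, 10, 20, 40, 30, 60, 70, 100}; // made-up coalition values
    int perms[6][3] = {{0,1,2}, {0,2,1}, {1,0,2}, {1,2,0}, {2,0,1}, {2,1,0}};
    double shapley[3] = {0, 0, 0};
    for (int p = 0; p < 6; p++) {
        int coalition = 0; // players seen so far in this ordering
        for (int i = 0; i < 3; i++) {
            int player = perms[p][i];
            // marginal contribution of this player to the growing coalition
            shapley[player] += v[coalition | (1 << player)] - v[coalition];
            coalition |= 1 << player;
        }
    }
    for (int i = 0; i < 3; i++)
        printf("player %d: %.2f\n", i, shapley[i] / 6.0); // average over 3! = 6 orderings
    return 0;
}
```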
### Find a beta distribution which fits a given confidence interval
[trastos.nunosempere.com/fit-beta](https://trastos.nunosempere.com/fit-beta) is a POST endpoint to find a beta distribution that fits a given confidence interval.
```
curl -X POST -H "Content-Type: application/json" \
-d '{"ci_lower": "0.2", "ci_upper":"0.8", "ci_length": "0.95"}' \
https://trastos.nunosempere.com/fit-beta
```
I also provide a widget [here](https://nunosempere.com/blog/2023/03/15/fit-beta/) and an npm package [here](https://www.npmjs.com/package/fit-beta), which is probably more convenient than the endpoint.
### nunosempere.com/misc/proportional-approval-voting-calculator/
Proportional approval voting is a bit tricky to generalize to choosing candidates for more than one position, which is why little software for it exists. [This page](https://nunosempere.com/misc/proportional-approval-voting-calculator/) provides a sample implementation. It was previously hosted [here](https://nunosempere.com/misc/proportional-approval-voting-calculator/).
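For the curious, here is a minimal sketch of the harmonic scoring rule behind proportional approval voting, on made-up ballots; the page itself may organize the computation differently. A voter with k approved candidates in a committee contributes 1 + 1/2 + ... + 1/k to that committee's score, and the committee with the highest total wins.

```
// Scoring one committee under proportional approval voting (PAV).
#include <stdio.h>

#define N_VOTERS 4
#define N_CANDS 4

int main(void)
{
    // approvals[v][c] = 1 if voter v approves candidate c (made-up ballots)
    int approvals[N_VOTERS][N_CANDS] = {
        {1, 1, 0, 0},
        {1, 0, 1, 0},
        {0, 1, 1, 0},
        {0, 0, 0, 1},
    };
    int committee[] = {0, 1}; // one candidate set under consideration
    int committee_size = 2;
    double score = 0;
    for (int v = 0; v < N_VOTERS; v++) {
        int k = 0; // approved candidates this voter sees in the committee
        for (int i = 0; i < committee_size; i++)
            k += approvals[v][committee[i]];
        for (int j = 1; j <= k; j++)
            score += 1.0 / j; // harmonic weighting
    }
    printf("PAV score of committee {0, 1}: %f\n", score);
    return 0;
}
```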
### git.nunosempere.com
[git.nunosempere.com](https://git.nunosempere.com/) is my personal git server. It hosts some of my personal projects, and occasional backups of some open source projects worth preserving.
### video.nunosempere.com
[video.nunosempere.com](https://video.nunosempere.com) is a [peertube](https://github.com/Chocobozzz/PeerTube/) instance with some videos worth preserving.
### royalroad.nunosempere.com
A frontend for [Royal Road](https://www.royalroad.com/), a site which hosts online fiction but which has grown pretty cluttered. Reuses a whole lot of the code from forum.nunosempere.com.
### wikiless.nunosempere.com (added 27/08/2023)
A [frontend](https://wikiless.nunosempere.com/) for Wikipedia.
### gatitos.nunosempere.com
Shows a photo of two cute cats:
<img src="https://gatitos.nunosempere.com/">
### Also on this topic
- [Soothing Software](https://nunosempere.com/blog/2023/03/27/soothing-software/)
- [Hacking on rose](https://nunosempere.com/blog/2022/12/20/hacking-on-rose/)—in particular, readers might be interested in [this code](https://git.nunosempere.com/open.source/rosenrot/src/branch/master/plugins/style/style.js#L62) to block advertisements on Reddit and Twitter. It could be adapted for Firefox with an extension like [Stylus](https://addons.mozilla.org/en-US/firefox/addon/styl-us/).
- [Metaforecast](https://metaforecast.org/), which I started, and which is now maintained by Slava Matyuhin of QURI and myself.
<p>
<section id='isso-thread'>
<noscript>javascript needs to be activated to view comments.</noscript>
</section>
</p>



@ -0,0 +1,28 @@
Twitter Improvement Proposal: Incorporate Prediction Markets, Give a Death Blow to Punditry
===========================================================================================
**tl;dr**: Incorporate prediction markets into Twitter, give a death blow to punditry.
## The core idea
A prediction market is...
## Why do this?
Because it will usher humanity into an era of epistemic greatness.
## Caveats and downsides
Play money, though maybe with goods and services
Give 1000 doubloons to all semi-active accounts on 22/02/2022
## How to go about this?
One possibility might be to acquihire [Manifold Markets](https://manifold.markets/) for something like $20-$50M. They are a team of competent engineers with a fair share of ex-Googlers, who have been doing a good job at building a prediction platform from scratch, and iterating on it. So one possible step might be to have the Manifold guys come up with demo functionality, and then pair them with a team who understands how one would go about doing this at Twitter-like scale.
However, I am not really cognizant of the technical challenges here, and it's possible that this might not be the best approach ¯\_(ツ)_/¯
## In conclusion



@ -0,0 +1,95 @@
Incorporate keeping track of accuracy into X (previously Twitter)
====
**tl;dr**: Incorporate keeping track of accuracy into X<sup id="fnref:1"><a href="#fn:1" rel="footnote">1</a></sup>. This contributes to the goal of making X the chief source of information, and strengthens humanity by providing better epistemic incentives and better mechanisms to separate the wheat from the chaff in terms of getting at the truth together.
## Why do this?
<p><img src="https://images.nunosempere.com/blog/2023/08/19/keeping-track-of-accuracy-on-twitter/michael-dragon.png" alt="St Michael Killing the Dragon - public domain, via Wikimedia commons" style="width: 30% !important"/></p>
- Because it can be done
- Because keeping track of accuracy allows people to separate the wheat from the chaff at scale, which would make humanity more powerful, more [formidable](https://nunosempere.com/blog/2023/07/19/better-harder-faster-stronger/).
- Because it is an asymmetric weapon, like community notes, that helps the good guys who are trying to get at what is true much more than the bad guys who are either not trying to do that or are bad at it.
- Because you can't get better at learning true things if you aren't trying, and current social media platforms are, for the most part, not incentivizing that trying.
- Because rival organizations---like the New York Times, Instagram Threads, or school textbooks---would be made more obsolete by this kind of functionality.
## Core functionality
I think that you can distill the core of keeping track of accuracy to three elements<sup id="fnref:2"><a href="#fn:2" rel="footnote">2</a></sup>: predict, resolve, and tally. You can see a minimal implementation of this functionality in <60 lines of bash [here](https://github.com/NunoSempere/PredictResolveTally/tree/master).
### predict
Make a prediction. This prediction could take the form of:
1. a yes/no sentence, like "By 2030, I say that Tesla will be worth $1T"
2. a probability, like "I say that there is a 70% chance that by 2030, Tesla will be worth $1T"
3. a range, like "I think that by 2030, Tesla's market cap will be worth between $800B and $5T"
4. a probability distribution, like "Here is my probability distribution over how likely each possible market cap of Tesla is by 2030"
5. more complicated options, e.g., a forecasting function that gives an estimate of market cap at every point in time.
I think that the sweet spot is on #2: asking for probabilities. #1 doesn't capture that we normally have uncertainty about events—e.g., in the recent superconductor debacle, we were not completely sure one way or the other until the end—and it is tricky to have a system which scores both #3-#5 and #2. Particularly at scale, I would lean towards recommending using probabilities rather than something more ambitious, at first.
Note that each example gave both a statement that was being predicted, and a date by which the prediction is resolved.
### resolve
Once the date of resolution has been reached, a prediction can be marked as true/false/ambiguous. Ambiguous resolutions are bad, because the people who have put effort into making a prediction feel like their time has been wasted, so it is good to minimize them.
You can have a few distinct methods of resolution. Here are a few:
- Every question has a question creator, who resolves it
- Each person creates and resolves their own predictions
- You have a community-notes style mechanism for resolving questions
- You have a jury of randomly chosen peers who resolves the prediction
- You have a jury of previously trusted members, who resolves the question
- You can use a [Keynesian Beauty Contest](https://en.wikipedia.org/wiki/Keynesian_beauty_contest), like Kleros or UMA, where judges are rewarded for agreeing with the majority opinion of other judges. This disincentivizes correct resolutions for unpopular-but-true questions, so I would hesitate before using it.
Note that you can have resolution methods that can be challenged, like the lower court/court of appeals/supreme court system in the US. For example, you could have a system where initially a question is resolved by a small number of randomly chosen jurors, but if someone gives a strong signal that they object to the resolution—e.g., if they pay for it, or if they spend one of a few "appeals" tokens—then the question is resolved by a larger pool of jurors.
Note that the resolution method will shape the flavour of your prediction functionality, and constrain the types of questions that people can forecast on. You can have a more anarchic system, where everyone can instantly create a question and predict on it. Then, people will create many more questions, but perhaps they will have a bias towards resolving questions in their own favour, and you will have slightly duplicate questions. Then you will get something closer to [Manifold Markets](https://manifold.markets/). Or you could have a mechanism where people propose questions and these are made robust to corner cases in their resolution criteria by volunteers, and then later resolved by a jury of volunteers. Then you will get something like [Metaculus](https://www.metaculus.com/), where you have fewer questions but these are of higher quality and have more reliable resolutions.
Ultimately, I'm not saying that the resolution method is unimportant. But I think there is a temptation to nerd out too much about the specifics, and having some resolution method that is transparently outlined and shipping it quickly seems much better than getting stuck at this step.
### tally
Lastly, present the information about what proportion of people's predictions come true. E.g., of the times I have predicted a 60% likelihood of something, how often has it come true? Ditto for other percentages. These are normally binned to produce a calibration chart, like the following:
![my calibration chart from Good Judgment Open](https://images.nunosempere.com/blog/2023/08/19/keeping-track-of-accuracy-on-twitter/calibrationChart2.png)
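A minimal sketch of the binning step behind such a chart, on made-up data:

```
// Binning resolved predictions into a calibration table.
#include <stdio.h>

int main(void)
{
    // (probability given, did it happen?) pairs -- made-up data
    double probs[] = {0.6, 0.6, 0.9, 0.3, 0.7, 0.9, 0.1, 0.6};
    int outcomes[] = {1, 0, 1, 0, 1, 1, 0, 1};
    int n = 8;
    int count[10] = {0}, hits[10] = {0};
    for (int i = 0; i < n; i++) {
        int bin = (int)(probs[i] * 10);
        if (bin == 10) bin = 9; // clamp p = 1.0 into the top bin
        count[bin]++;
        hits[bin] += outcomes[i];
    }
    for (int b = 0; b < 10; b++)
        if (count[b] > 0)
            printf("%d%%-%d%%: %d/%d came true\n",
                b * 10, (b + 1) * 10, hits[b], count[b]);
    return 0;
}
```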
On top of that starting point, you can also do more elaborate things:
- You can have a summary statistic—a proper scoring rule, like the Brier score or a log score—that summarizes how good you are at prediction "in general" (see the sketch after this list). Possibly this might involve comparing your performance to the performance of people who predicted on the same questions.
- Previously, you could have allowed people to bet against each other. Then, their profits would indicate how good they are. I think this might be too complicated at Twitter scale, at least at first.
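Here is a minimal sketch of the Brier score on made-up data; always answering 50% scores 0.25, and lower is better:

```
// Brier score: mean squared error between forecasts and outcomes.
#include <stdio.h>

int main(void)
{
    double probs[] = {0.6, 0.6, 0.9, 0.3, 0.7}; // made-up forecasts
    int outcomes[] = {1, 0, 1, 0, 1};           // what actually happened
    int n = 5;
    double brier = 0;
    for (int i = 0; i < n; i++) {
        double d = probs[i] - outcomes[i];
        brier += d * d / n;
    }
    printf("Brier score: %f\n", brier);
    return 0;
}
```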
[Here](https://arxiv.org/abs/2106.11248) is a review of some mistakes people have previously made when scoring these kinds of forecasts. For example, if you have some per-question accuracy reward, people will gravitate towards forecasting on easier rather than on more useful questions. These kinds of considerations are important, particularly since they will determine who will be at the top of some scoring leaderboard, if there is any such. Generally, [Goodhart's law](https://arxiv.org/abs/1803.04585) is going to be a problem here. But again, having *some* tallying mechanism seems way better than the current information environment.
Once you have some tallying—whether a calibration chart, a score from a proper scoring rule, or some profit in Musk-Bucks<sup id="fnref:3"><a href="#fn:3" rel="footnote">3</a></sup>—such a tally could:
- be semi-prominently displayed so that people can look to it when deciding how much to trust an account,
- be used by X's algorithm to show more accurate accounts a bit more at the margin,
- provide an incentive for people to be accurate,
- provide a way for people who want to become more accurate to track their performance
When dealing with catastrophes, wars, discoveries, and generally with events that challenge humanity's ability to figure out what is going on, having these mechanisms in place would help humanity make better decisions about who to listen to: to listen not to who is loudest but to who is most right.
## Conclusion
X can do this. It would help with its goal of outcompeting other sources of information, and it would do this fair and square by improving humanity's collective ability to get at the truth. I don't know what other challenges and plans Musk has in store for X, but I would strongly consider adding this functionality to it.
<div class="footnotes">
<hr/>
<ol>
<li id="fn:1">
previously Twitter<a href="#fnref:1" rev="footnote">&#8617;</a></li>
<li id="fn:2">
Ok, four, if we count question creation and prediction as distinct. But I like <a href="https://bw.vern.cc/worm/wiki/Parahuman_Response_Team">PRT</a> as an acronym.<a href="#fnref:2" rev="footnote">&#8617;</a></li>
<li id="fn:3">
Using real dollars is probably illegal/too regulated in America.<a href="#fnref:3" rev="footnote">&#8617;</a></li>
</ol>
</div>
<p>
<section id='isso-thread'>
<noscript>javascript needs to be activated to view comments.</noscript>
</section>
</p>

@ -0,0 +1,103 @@
OpenPhil Grant Application
==========================
- Created: August 4, 2023 8:06 PM
- Status: submitted
## Proposal Summary (<20 words)
Manifund: longtermist grantmaker experimenting with regranting, impact certs and other funding mechanisms.
## Project Description (<750 words)
Manifund started in Jan 2023 by building a website for impact certs for the [ACX Mini-Grants round](https://manifund.org/rounds/acx-mini-grants); we then hosted impact certs for the [OpenPhil AI Worldviews contest](https://manifund.org/rounds/ai-worldviews). Since May, we've been focused on [a regranting program](https://manifund.org/rounds/regrants). We'd like to scale up our regranting program, and experiment with other funding mechanisms and initiatives.
Some things that distinguish us from other longtermist funders:
- We're steeped in the tech startup mindset: we move quickly, ship often, automate our workflows, and talk to our users.
- We're really into transparency: all projects, writeups, and grant amounts are posted publicly on our site. We think this improves trust in the funding ecosystem, helps grantees understand what funders are interested in, and allows newer grantmakers to develop a legible track record.
- We care deeply about grantee experience: we've spent a lot of time on the other side of the table applying for grants, and are familiar with common pain points: long/unclear timelines, lack of funder feedback, confusing processes.
- We decentralize where it makes sense: regranting and impact certs are both funding models that play to the strengths of networks of experts. Central grantmaker time has been a bottleneck on nurturing impactful projects; we hope to fix this.
Overall, we hope to help OpenPhil decentralize funding to other decisionmakers in a way that rewards transparent, accountable, timely and cost-effective grants.
### Scaling up regranting
Regranting is a system where individuals are given small discretionary budgets to make grants from. This allows regrantors to find projects in their personal and professional networks, seed new projects based on their interests, and commit to grantees with little friction. Our current regrantors are drawn from Anthropic, OpenAI, Rethink Priorities, ARC Evals, CAIS, FAR AI, SERI MATS, 1DaySooner and others; see all regrantors [here](https://manifund.org/rounds/regrants?tab=regrants), and a recent payout report [here](https://manifund.substack.com/p/what-were-funding-weeks-2-4).
With further funding, we'd like to:

- Onboard new regrantors: we currently have a waitlist of qualified regrantors and get about one new strong application per week. We think we could find many more promising candidates over the next year, to sponsor with budgets of $50k to $400k.
- Increase budgets of high-performing regrantors: this incentivizes regrantors to make good grants, delegates authority to people who have performed well in the past, and quantifies their track record.
- Hire another team member: our team is currently just 2 people, both of us are wearing a lot of hats, and one (Austin) spends ~30% of his time on Manifold. We're interested in finding a full-time in-house grantmaker for grant evaluation, regrantor assessment, and public comms. Ideally, this person would take on a cofounder-like role and weigh in on Manifund strategy. Later, we may want to hire another engineer or ops role.
### Running a larger test of impact certs
Impact certs are a new mechanism for funding nonprofits. A philanthropic donor announces a large prize; projects compete for the prize and raise funding by selling “impact certs” (shares of prize winnings) to self-interested investors. This allows donors to pay directly for impact, while investors take on risk for return.
In the spring, we ran [a trial with Scott Alexander](https://manifund.org/rounds/acx-mini-grants) with a $40k prize pool; this round will conclude this September. Scott is still considering whether to run Round 2 of ACX Grants through impact certs; if so, we'd host that this fall. We also tried an impact cert round with the [OpenPhil AI Worldviews Contest](https://manifund.org/rounds/ai-worldviews), with somewhat less total engagement.
Our most ambitious goal would be an AI Safety impact cert round with a yearly, large (>$1m) prize pool. With less funding, we might experiment with, e.g., an EA Art prize to test the waters.
### Other possible initiatives & funding experiments
- Setting up a Common App for longtermist funding. We're coordinating with LTFF on this, and would love to include funders like OpenPhil, Lightspeed, SFF and independent donors. This could alleviate a key pain point for grantees (each app takes a long time to write!) and help funding orgs find apps in their area of interest.
- Start "EA peer bonuses", inspired by the Google peer bonus program. We'd let anyone nominate community members who have done valuable work, and pay out small (~$100) prizes.
- Run a 1-week “instant grants” program, where we make a really short application (eg 3 sentences and a link) for grants of $1000 and get back to applicants within 24 hours.
- Paying feedback prizes to users & regrantors for making helpful comments on projects, both to inform funding decisions and to give feedback to grantees.
## Why we're a good fit for this project (300 words)
We have the right background to pursue philanthropic experiments:
- Austin founded Manifold, which involves many of the same skills as making Manifund go well: designing a good product, managing people and setting culture, iterating and knowing when to switch directions. Also, Manifold ventures outside the boundaries of the EA community (eg having been cited by Paul Graham and the NYT podcast Hard Fork); Manifund is likewise interested in expanding beyond EA, especially among its donor base. Austin is in a good position to make this happen, as he's spent most of his professional career in non-EA tech companies (Google) & startups (tech lead at Streamlit), has an impressive background by their lights, and shares many of their sensibilities.
- Rachel is early in her career, but has a strong understanding of EA ideas and community, having founded EA Tufts and worked on lots of EA events including EAGx Berkeley, Future Forum, Harvard and MIT AI safety retreats, and GCP workshops. She started web dev 8 months ago, but has gotten top-tier mentorship from Austin, and most importantly, built almost the entire Manifund site herself; people have been impressed by its design and usability.
## Approximate budget
We're seeking $5 million in unrestricted funding from OpenPhil. Our intended breakdown:
- $3m: Regrantor budgets. Raise budgets of current regrantors every few months according to prior performance; onboard new regrantors.
- We currently have 15 regrantors with an average budget of $120k; we'd like to sponsor 25 regrantors with an average budget of $200k.
- Our existing regrantors already have a large wishlist of projects they think they can allocate to, eg above the current LTFF bar; we can provide examples on request.
- $1.5m: Impact certs and other funding experiments, as listed in project description.
- Our regranting program itself is an example of something we funded out of our discretionary experimental budget.
- $0.5m: General operational costs, including salaries for a team of 2-3 FTE, software, cloud costs, legal consultations, events we may want to run, etc.
In total, Manifund has raised ~$2.4m since inception from an anonymous donor ($1.5m), Future Fund ($0.5m) & SFF ($0.4m) and committed ~$0.8m of it. We intend to further fundraise up to a budget of $10m for the next year, seeking funding from groups such as SFF/Jaan Tallinn, Schmidt Ventures, YCombinator, and small to medium individual donors.
_Thanks to Joel, Gavriel, Marcus & Renan for feedback on this application._
## Appendix
### Cut
- This allows us to structure the site like a forum, with comments and votes. One vision of what we could be is “like the EA forum but oriented around projects”.
- We aim to model transparency ourselves, with our code, finances, and vast majority of meeting notes and internal docs visible to the public.
- Our general evaluation model is that regrantors screen for effectiveness, and Manifund screens for legality and safety, but in the case of projects with COIs or with a large portion of funding coming directly from donors, we should really be screening for effectiveness and would like to have someone on board who's better suited to do that
### Draft notes
- in the past did impact certs for ACX and OP AI Worldviews
- currently focused on regranting
- generally: would be good if EA funding were more diverse. Also faster, more transparent, possibly with better incentives.
- interested in scaling up regranting program:
- offer more budgets, and raise budgets according to past performance (S-process or something)
- looking for funding for regrantors from other sources, particularly interested in getting outside-of-EA funding, think we're better suited to do that than e.g. LTFF because of Austin's background and our branding, maybe OpenPhil could just cover ops.
- kind of want to hire another person, maybe someone with more grantmaking experience who can act like a reviewer and help us with strategy. Useful especially in cases where a regrantor wants to give to something with a COI or where a large portion of funds are coming from random users/donors instead of regrantors and we want to evaluate grants for real. Currently only ~1.75 people on our team.
- and doing more with impact certs: possibly will host Scott's next ACX round, ultimately interested in something like a yearly big prize for AI safety where projects are initially funded through impact certs, and we might do medium things on the way to test whether this is a good idea and refine our approach.
- other experiments were considering:
- CommonApp with LTFF, maybe include lightspeed or other funders
- generally work on schemes for better coordination between funders
- start EA peer bonus program [EA peer bonus](https://www.notion.so/EA-peer-bonus-2f268e716a5e4f81acb3e9f642f6842f?pvs=21)
- do ~week long “instant” grants program [Instant grants](https://www.notion.so/Instant-grants-bf88f0b2ecb142fd890c462fad115037?pvs=21)
- paying users/regrantors retroactively for making really helpful comments on projects
- some things were doing differently from other funders that we think are promising:
- being really transparent: some people said in advance that this would be too limiting, and we worried about it, but regrantors haven't really complained about it so far. And we think it has a bunch of positive externalities: shows people what types of projects grantmakers are interested in funding and why, generates hype/publicity for cool projects (e.g. cavities, shrimp welfare), allows regrantors to generate a public track record, generates trust
- relatedly, generally using software: makes it easy to be transparent, also makes it easy to facilitate conversations between grantees and grantmakers and other community members. Makes things faster and smoother, allows us to set defaults. Looks different, might appeal to e.g. rich tech ppl more.
- general attitude with Austin's background also may be different/advantageous: move fast, do lots of user interviews and generally focus a lot on user experience
- giving small budgets to somewhat less well-known people so they can build up a track record
- some things were doing less well than other funders:
- generally being really careful/optimizing really hard about where the money goes or something. We heavily outsource donation decisions to regrantors, and ultimately just screen for legality/non-harmfulness. An analogy we use a lot is to the FDA's approval process: Manifund covers safety, regrantors cover efficacy. Currently there aren't that many regrantors and just having them in a discord channel together facilitates lots of the good/necessary coordination, so not that worried about unilateralist's curse rn, but could be a problem later. We aren't screening $50k regrantors that rigorously in advance: we take applications, do an interview, ask the community health team, talk about it…but ultimately we're pretty down to take bets. This means we probably want to up budgets more carefully.
- some ops stuff. Don't have a good way of sending money internationally.

@ -0,0 +1,26 @@
Quick thoughts on Manifund's application to Open Philanthropy
=============================================================
[Manifund](https://manifund.org/) is a new effort to improve, speed up and decentralize funding mechanisms in the broader Effective Altruism community, by some of the same people previously responsible for [Manifold](https://manifold.markets/home). Due to Manifold's policy of making a bunch of their internal documents public, you can see their application to Open Philanthropy [here](https://manifoldmarkets.notion.site/OpenPhil-Grant-Application-3c226068c3ae45eaaf4e6afd7d1763bc) (also a markdown backup [here](https://nunosempere.com/blog/2023/09/05/manifund-open-philanthropy/.src/application)).
Here is my perspective on this:
- They have given me a $50k regranting budget. It seems plausible that this colors my thinking.
- Manifold is highly technologically competent.
- [Effective Altruism Funds](https://funds.effectivealtruism.org/), which could be the closest point of comparison to Manifund, is not highly technologically competent. In particular, they have historically been tied to Salesforce, a den of mediocrity that slows them down, makes interacting with their systems annoying, and isn't that great along any one dimension.
- Previously, Manifold blew [Hypermind](https://predict.hypermind.com/hypermind/app.html), a previous play-money prediction market, completely out of the water. Try browsing markets, searching markets, making a prediction on Hypermind, and then try the same thing in Manifold.
- It seems very plausible to me that Manifund could do the same thing to CEA's Effective Altruism Funds: Create a product that is incomparably better by having a much higher technical and operational competence.
- One way to think about the cost and value of Manifund would be &Delta;(value of grant recipients) - &Delta;(costs of counterfactual funding method).
- The cost is pretty high, because Austin's counterfactual use of his capable engineering labour is pretty valuable.
- Value is still to be determined. One way to measure it might be to compare the value of grants made in 2023 by Manifund, EA Funds, SFF, Open Philanthropy, etc., and see if there are any clear conclusions.
- Framing this as "improving EA Funds" would slow everything down and make it more mediocre, and would make Manifund less motivated by reducing their sense of ownership, so it doesn't make sense as a framework.
- Instead, it's worth keeping in mind that Manifund has the option to incorporate aspects of EA Funds if it so chooses—like some of its grantmakers, its questions to prospective grantees, its public reports, etc.
- Manifund also has the option of identifying and then unblocking historical bottlenecks that EA Funds has had, like slow response speed or reliance on grantmakers who are already extremely busy.
A funny thing is that Manifund itself can't say, and probably doesn't think of their pathway to impact as: do things much better than EA funds by being absurdly more competent than them. It would look arrogant if they said it. But I can say it!
<p>
<section id='isso-thread'>
<noscript>javascript needs to be activated to view comments.</noscript>
</section>
</p>

@ -0,0 +1,124 @@
Count words in <50 lines of C
===
The Unix utility wc counts words. You can make a simple, non-POSIX-compatible version of it that solely counts words in [159 words and 42 lines of C](https://git.nunosempere.com/personal/wc/src/branch/master/src/wc.c). Or you can be like GNU and take 3615 words and 1034 lines to do something more complex.
## Desiderata
- Simple: Just count words as delimited by spaces, tabs, newlines.
- Allow: reading files, piping to the utility, and reading from stdin—concluded by pressing Ctrl+D.
- Separate utilities for counting different things, like lines and characters, into their own tools.
- Avoid off-by-one errors.
- Linux only.
- Small.
## Comparison with other versions of wc
The [version of wc.c](https://git.nunosempere.com/personal/wc/src/branch/master/src/wc.c) in this repository sits at 44 lines. It decides to read from stdin if the number of arguments fed to it is zero, and uses the standard C function getc to read character by character. It doesn't have flags; instead, there are further utilities in the src/extra/ folder for counting characters and lines, sitting at 32 and 35 lines of code, respectively. This version also has little error checking.
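For illustration, here is a minimal sketch of that kind of counting loop. This is my own sketch in the spirit of the repository's wc.c, not a copy of it; structure and names are my own:
```
#include <stdio.h>

/* Count words delimited by spaces, tabs and newlines.
   A sketch in the spirit of the repository's wc.c, not a copy of it. */
int count_words(FILE *fp)
{
	int c, words = 0, in_word = 0;
	while ((c = getc(fp)) != EOF) {
		if (c == ' ' || c == '\t' || c == '\n')
			in_word = 0;
		else if (!in_word) {
			in_word = 1;
			words++;
		}
	}
	return words;
}

int main(int argc, char **argv)
{
	/* No arguments: read from stdin, as the utilities above do. */
	if (argc < 2) {
		printf("%d\n", count_words(stdin));
		return 0;
	}
	for (int i = 1; i < argc; i++) {
		FILE *fp = fopen(argv[i], "r");
		if (fp == NULL) {
			perror(argv[i]);
			return 1;
		}
		printf("%d\n", count_words(fp));
		fclose(fp);
	}
	return 0;
}
```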
[Here](https://github.com/dspinellis/unix-history-repo/blob/Research-V7-Snapshot-Development/usr/src/cmd/wc.c) is a version of wc from UNIX V7, at 86 lines. It allows for counting characters, words and lines. I couldn't find a version in UNIX V6, so I'm guessing this is one of the earliest versions of this program. It decides to read from stdin if the number of arguments fed to it is zero, and reads character by character using the standard C getc function.
The busybox version ([git.busybox.net](https://git.busybox.net/busybox/tree/coreutils/wc.c)) of wc sits at 257 lines (162 with comments stripped), while striving to be [POSIX-compliant](https://pubs.opengroup.org/onlinepubs/9699919799/), meaning it has a fair number of flags and a bit of complexity. It reads character by character using the standard getc function, and decides whether to read from stdin using its own fopen_or_warn_stdin function. It uses two GOTOs to get around, and has some incomplete Unicode support.
The [plan9](https://9p.io/sources/plan9/sys/src/cmd/wc.c) version implements some sort of table method in 331 lines. It uses plan9 rather than Unix libraries and methods, and seems to read from stdin if the number of args is 0.
The plan9port version of wc ([github](https://github.com/9fans/plan9port/blob/master/src/cmd/wc.c)) also implements some sort of table method, in 352 lines. It reads from stdin if the number of args is 0, and uses the Linux read function to read character by character.
The [OpenBSD](https://github.com/openbsd/src/blob/master/usr.bin/wc/wc.c) version is just *nice*. It reads from stdin by default, and does a bit of buffering with read to speed things up. It defaults to using fstat when counting characters. It is generally understandable and nice to read; I'm actually surprised at how pleasant it is.
The [FreeBSD version](https://cgit.freebsd.org/src/tree/usr.bin/wc/wc.c) sits at 367 lines. It has enough new things that I can't parse all that it's doing (in lines 137-143: what is capabilities mode? what is casper?), but otherwise it decides whether to read from stdin by the number of arguments, in line 157. It uses a combination of fstat and read, depending on the type of file.
Finally, the GNU utils version ([github](https://github.com/coreutils/coreutils/tree/master/src/wc.c), [savannah](http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f=src/wc.c;hb=HEAD)) is a bit over 1K lines of C. It does many things and checks many possible failure modes. I think it detects whether it should be reading from stdin using some heavily wrapped fstat, and it reads character by character using its own custom function.
So this utility started out reasonably small, then started getting more and more complex. [The POSIX committee](https://pubs.opengroup.org/onlinepubs/9699919799/) ended up codifying that complexity, and now we are stuck with it because even implementations like busybox which strive to be quite small try to keep to POSIX.
## Installation
```
git clone https://git.nunosempere.com/personal/wc
cd wc
make
sudo make install
## ^ installs to /bin/ww if there isn't a /bin/ww already
```
## Usage examples
```
echo "En un lugar de la Mancha" | ww
cat README.md | ww
ww README.md
ww # write something, then exit with Ctrl+D
```
## Relationship with cat-v
Does one really need to spend 1k lines of C code to count characters, words and lines? There are many versions of this rant one could give, but the best and probably best known is [this one](https://harmful.cat-v.org/cat-v/unix_prog_design.pdf) on cat -v. Busybox itself has given up here, and its [version of cat](https://git.busybox.net/busybox/tree/coreutils/cat.c) has the following comment:
> Rob had "cat -v" implemented as a separate applet, catv.
> See "cat -v considered harmful" at
> http://cm.bell-labs.com/cm/cs/doc/84/kp.ps.gz
> From USENIX Summer Conference Proceedings, 1983
>
> """
>
> The talk reviews reasons for UNIX's popularity and shows, using UCB cat
> as a primary example, how UNIX has grown fat. cat isn't for printing
> files with line numbers, it isn't for compressing multiple blank lines,
> it's not for looking at non-printing ASCII characters, it's for
> concatenating files.
> We are reminded that ls isn't the place for code to break a single column
> into multiple ones, and that mailnews shouldn't have its own more
> processing or joke encryption code.
>
> """
>
> I agree with the argument. Unfortunately, this ship has sailed (1983...).
> There are dozens of Linux distros and each of them has "cat" which supports -v.
> It's unrealistic for us to "reeducate" them to use our, incompatible way
> to achieve "cat -v" effect. The actual effect would be "users pissed off
> by gratuitous incompatibility".
I'm not sure that gratuitous incompatibility is so bad if it leads to utilities that are much simpler and easier to understand and inspect. That said, other projects aiming in this direction that I could find, like [tiny-core](https://github.com/keiranrowan/tiny-core/tree/master/src) or [zig-coreutils](https://github.com/leecannon/zig-coreutils) don't seem to be making much headway.
## To do
- [ ] Possible follow-up: Write simple versions for other coreutils. Would be a really nice project.
- [ ] Get some simple version of this working on a DuskOS/CollapseOS machine?
- [ ] Or, generally find a minimalistic kernel that could use some simple coreutils.
- [ ] Add man pages?
- [ ] Pitch to lwn.net as an article?
- [ ] Come back to writing these in zig
- [ ] ...
## Done or discarded
- [x] Look into how C utilities both read from stdin and from files.
- [x] Program first version of the utility
- [x] Compare with other implementations, see how they do it, after I've created my own version
- [x] Compare with gnu utils.
- [x] Compare with busybox implementation
- [x] Compare with other versions
- [x] Compare with other projects: <https://github.com/leecannon/zig-coreutils>, <https://github.com/keiranrowan/tiny-core/tree/master>.
- [x] Install to ww, but check that ww is empty (installing to wc2 or smth would mean that you don't save that many keypresses vs wc -w)
- [x] Look specifically at how other versions do stuff.
- [x] Distinguish between reading from stdin and reading from a file
- [x] If it doesn't have arguments, read from stdin.
- [x] Open files, read characters.
- [x] Write version that counts lines (lc)
- [x] Take into account what happens if file doesn't end in newline.
- [ ] ~~Count EOF as word & line separator~~
- [x] Document it
- [x] Document reading from user-inputted stdin (end with Ctrl+D)
- [x] add chc, or charcounter (cc is "c compiler")
- [x] Add licenses to historical versions before redistributing.
- [ ] ~~Could use zig? => Not for now~~
- [ ] ~~Maybe make some pull requests, if I'm doing something better? => doesn't seem like it~~
- [ ] ~~Write man files?~~
<p>
<section id='isso-thread'>
<noscript>javascript needs to be activated to view comments.</noscript>
</section>
</p>

[Six binary image files added (screenshots, 188–336 KiB); not shown.]

@ -0,0 +1,86 @@
*Epistemic status*: This post is blunt. Please see the extended disclaimer about negative feedback [here](https://forum.effectivealtruism.org/users/negativenuno). Consider not reading it if you work on the EA forum and don't have thick skin.
*tl;dr*: Once, the EA forum was a lean, mean machine. But it has become more bloated over time, and I don't like it. Separately, I don't think it's worth the roughly $2M/year[^twomillion] it costs, although I haven't modelled this in depth.
[^twomillion]: This is not a great amount in the grand scheme of things. Still, I am interested in it for two reasons: a) I'm working on a different piece, and this is a small, concrete case study that I can later reference, and b) I used to cherish the EA forum, and wrote over 100k words in it, only to see it become hostile to the type of disagreeable person that I am.
### The EA forum frontpage through time.
In [2018-2019](https://web.archive.org/web/20181115134712/https://forum.effectivealtruism.org/), the EA forum was a lean and mean machine:
![](https://images.nunosempere.com/blog/2023/10/02/ea-forum-2018-2019.png)
In 2020, there was a small redesign:
![](https://images.nunosempere.com/blog/2023/10/02/ea-forum-2020.png)
In 2021, the sidebar expands:
![](https://images.nunosempere.com/blog/2023/10/02/ea-forum-2021.png)
In 2022, the sidebar expands further, and pinned and curated posts take up more space:
![](https://images.nunosempere.com/blog/2023/10/02/ea-forum-2022.png)
In 2023, the sidebar splits in two. Pinned and curated posts acquire shiny symbols. More recently, you can add [reactions](https://forum.effectivealtruism.org/posts/fyCnfiL49T5HvMjvL/forum-update-10-new-features-oct-2023).
![](https://images.nunosempere.com/blog/2023/10/02/ea-forum-2023-bis.png)
### EA forum costs
Per [this comment](https://forum.effectivealtruism.org/posts/auhi3JoiqGhi5PqnQ/ama-we-re-the-forum-team-and-we-re-hiring-ask-us-anything?commentId=tjTkjLpBD59ybtcuX), the EA forum was spending circa $2M/year and employing 8 people as of July 2023. Per the [website](https://www.centreforeffectivealtruism.org/team#online-team) of the Center for Effective Altruism, the online team now has 6 members, including ¿one designer?
### EA forum moderation
In the beginning, when the EA forum was smaller, there was one moderator, Aaron Gertler, and all was well. Now, as the EA forum has grown, there is a larger pool of moderators, who protect the forum from spam and ban malicious users.
At the same time, the moderation team has acted [against](https://forum.effectivealtruism.org/posts/myp9Y9qJnpEEWhJF9/linch-s-shortform?commentId=DvPcdhnN7wcXpcB7Z) [disagreeable](https://forum.effectivealtruism.org/posts/Pfayu5Bf2apKreueD/?commentId=7cHvfzMLw2Jua9JPh) [people](https://forum.effectivealtruism.org/posts/FZFzqPYpTpGGRhyrj/does-ea-get-the-best-people-hypotheses-call-for-discussion?commentId=o3mahDSh4wuHTvsXh) [that](https://forum.effectivealtruism.org/posts/CfEAggjzSDrado6ZC/forecasting-our-world-in-data-the-next-100-years?commentId=upkHDudfh8c9FpM8u) [I](https://forum.effectivealtruism.org/posts/DB9ggzc5u9RMBosoz/wrong-lessons-from-the-ftx-catastrophe?commentId=cp6ngfKrqyjsuAQoo) [liked](https://forum.effectivealtruism.org/posts/4zjnFxGWYkEF4nqMi/how-could-we-have-avoided-this?commentId=Q7BQJFyEwk96Q6g95).
**Counterpoint**: When I review the [moderation comments](https://forum.effectivealtruism.org/moderatorComments) log, moderation actions seem infrequent. I guess that the banning or warning of disagreeable people whom I like was just memorable to me.
### EA forum culture evolution
My impression is that the EA forum has been catering more to the [marginal user](https://nothinghuman.substack.com/p/the-tyranny-of-the-marginal-user); creating more introductory content, signposts, accessibility features, decreasing barriers to entry, etc. As the audience has increased, the marginal user is mostly a newbie. To me, the forum has been becoming something more like Reddit over time, which I dislike.
In stark contrast, consider [Hackernews](https://news.ycombinator.com/). Hackernews is an influential tech forum with [5M monthly users and 10M views/day](https://news.ycombinator.com/item?id=33454140). It has been able to retain its slim design through the years. Its moderation team has three people, and they [*correspond with users via email*](https://news.ycombinator.com/item?id=34920400).
### Brief thoughts on cost-effectiveness.
The EA forum's existence is valuable. It is still a place for high-quality discussion, and it helps the EA community collaborate on research, coordinate, identify opportunities, make sense of incoming challenges. But on top of the EA forum's existence, are changes made in recent years positive at all, and worth $2M/year if so?
My individual perspective, my inside view, my personal guess is that a lean and mean version of the EA forum, in the style of Hackernews, would have done a better job for less money. From that perspective, the cost-effectiveness of the marginal $1.5M would be negative. Making a [marginal donation](https://forum.effectivealtruism.org/posts/PAco5oG579k2qzrh9/ltff-and-eaif-are-unusually-funding-constrained-right-now) to the EA Infrastructure or Long-term Future Fund would have been a better choice.
A different perspective, which I don't quite know how to inhabit, might be to argue that a small improvement in user experience leads to an increased chance that a person will become more committed to EA than they otherwise would, and that this is valuable. For example:
1. if the EA forum had 500k unique yearly visitors, and improvements to the forum in recent years mean that 1% of them continue interacting with the EA movement, that would lead to 5k counterfactual EAs. If we think that creating more EAs is valuable, and we value this at $10k per EA, this would be worth $50M.
2. if the forum influenced five to a hundred decisions a day, each worth $1k to $100k, and improved them by 1% to 20%, this would be worth ~$20M a year (e.g., 50 decisions a day, each worth $10k and improved by 10%, comes to ~$18M a year).
The problem with those two hypothetical examples is that I don't buy the numbers. I think it's easy to greatly overestimate small percentages: when one is inclined to model something as having an influence of 1%, it's often 0.01% instead. Less importantly, I think one should use Shapley values instead of counterfactual values in order to avoid double-counting and over-spending[^shapley].
[^shapley]: E.g., I think that if four agents (80,000 hours; a local EA group; a personal friend; the EA forum) are needed to make someone significantly more altruistic, each organization should get 1/4th of the credit. Otherwise the credit would sum up to more than 100%, and this hinders comparisons between opportunities. For a longer treatment of this topic, see [this post](https://forum.effectivealtruism.org/posts/XHZJ9i7QBtAJZ6byW/shapley-values-better-than-counterfactuals).
### Suggestions
If you are a user of the forum...
- Consider that the EA forum is currently pushing content on you. Make use of it if you are a newbie, but maybe actively filter it out once you are not.
- Consider using faster and more minimal frontends, like [ea.greaterwrong.com](https://ea.greaterwrong.com/) or my own opinionated [forum.nunosempere.com](https://forum.nunosempere.com).
- Consider interacting with the EA forum frontpage through [RSS](https://forum.effectivealtruism.org/feed.xml?view=community-rss&karmaThreshold=30) or the [all posts](https://forum.effectivealtruism.org/allPosts) page, not the frontpage.
- Host your own content on independent platforms, like substack or your own blog, and build your own audience, rather than relying on a platform you don't control. You can always cross-post it to the EA forum, but having an independent place to build your own audience and as a hedge costs you little.
If you are a CEA director or middle manager, you might have thought about this more than I have. Still, you might want to:
- Consider going back to ~1 developer and ~1 content person; save &gt;$1M/year of your and your donors' money. My sense is that you are probably going to have to do this anyways, since you will probably not get enough money from donors[^donors] to continue your current course.[^course]
- Consider characterizing the EA forum's team role to be one of lightly shepherding discussion, not leading it or defining it.
- Consider reflecting on which incentives led to the creation of a larger EA Forum team. For example, Google has well-known incentives around managers being rewarded for leading larger teams to develop new products, and doesn't value maintenance, leading to a continuous churn and sunsetting of Google products. Might something similar, though at a lower scale, have happened here?
- As a distant fourth point, consider opening up authentication mechanisms so that users can make comments and posts using open-source frontends. This was previously doable through the greaterwrong frontend, but is no longer possible. It might not be doable with your current software stack, or it might be too difficult, though.
[^donors]: Realistically, this is going to be mainly Open Philanthropy, as other donors can't support $2M/year.
[^course]: You could check this by creating a market on Manifold!
If you are working on the EA forum...
- I am probably missing a bunch of factors in this analysis. If you think that spending $2M/year, or having 6 to 8 people full-time on the EA forum, is meaningful, you might want to post a BOTEC outlining why.
- I think that this post probably sounds very harsh, sorry. Note that these three things can be true at the same time: a) a more minimalistic forum would have been better, b) CEA leadership made a bad judgment call expanding the EA forum during the FTX days and will now have to downsize, c) given your work description, you did good work.
- It is possible that your current position is precarious, e.g., that you might be fired, or transferred to a different project within CEA.

@ -0,0 +1,5 @@
<p>
<section id='isso-thread'>
<noscript>javascript needs to be activated to view comments.</noscript>
</section>
</p>

@ -0,0 +1,5 @@
MARKDOWN=/usr/bin/markdown -f fencedcode -f ext -f footnote -f latex
build:
$(MARKDOWN) index.md > temp
cat title.md temp isso-snippet.txt > ../index.md
rm temp

@ -0,0 +1,3 @@
Brief thoughts on CEA's stewardship of the EA Forum
====================================

@ -0,0 +1,2 @@
Add much kinder stuff
Address Misha's feedback

@ -0,0 +1,110 @@
Brief thoughts on CEA's stewardship of the EA Forum
====================================
<p><em>Epistemic status</em>: This post is blunt. Please see the extended disclaimer about negative feedback <a href="https://forum.effectivealtruism.org/users/negativenuno">here</a>. Consider not reading it if you work on the EA forum and don&rsquo;t have thick skin.</p>
<p><em>tl;dr</em>: Once, the EA forum was a lean, mean machine. But it has become more bloated over time, and I don&rsquo;t like it. Separately, I don&rsquo;t think it&rsquo;s worth the roughly $2M/year<sup id="fnref:1"><a href="#fn:1" rel="footnote">1</a></sup> it costs, although I haven&rsquo;t modelled this in depth.</p>
<h3>The EA forum frontpage through time.</h3> <p>In <a href="https://web.archive.org/web/20181115134712/https://forum.effectivealtruism.org/">2018-2019</a>, the EA forum was a lean and mean machine:</p>
<p><img src="https://images.nunosempere.com/blog/2023/10/02/ea-forum-2018-2019.png" alt="" /></p>
<p>In 2020, there was a small redesign:</p>
<p><img src="https://images.nunosempere.com/blog/2023/10/02/ea-forum-2020.png" alt="" /></p>
<p>In 2021, the sidebar expands:</p>
<p><img src="https://images.nunosempere.com/blog/2023/10/02/ea-forum-2021.png" alt="" /></p>
<p>In 2022, the sidebar expands further, and pinned and curated posts take up more space:</p>
<p><img src="https://images.nunosempere.com/blog/2023/10/02/ea-forum-2022.png" alt="" /></p>
<p>In 2023, the sidebar splits in two. Pinned and curated posts acquire shiny symbols. More recently, you can add <a href="https://forum.effectivealtruism.org/posts/fyCnfiL49T5HvMjvL/forum-update-10-new-features-oct-2023">reactions</a>.</p>
<p><img src="https://images.nunosempere.com/blog/2023/10/02/ea-forum-2023-bis.png" alt="" /></p>
<h3>EA forum costs</h3>
<p>Per <a href="https://forum.effectivealtruism.org/posts/auhi3JoiqGhi5PqnQ/ama-we-re-the-forum-team-and-we-re-hiring-ask-us-anything?commentId=tjTkjLpBD59ybtcuX">this comment</a>, the EA forum was spending circa $2M/year and employing 8 people as of July 2023. Per the <a href="https://www.centreforeffectivealtruism.org/team#online-team">website</a> of the Center for Effective Altruism, the online team now has 6 members, including ¿one designer?</p>
<h3>EA forum moderation</h3>
<p>In the beginning, when the EA forum was smaller, there was one moderator, Aaron Gertler, and all was well. Now, as the EA forum has grown, there is a larger pool of moderators, who protect the forum from spam and ban malicious users.</p>
<p>At the same time, the moderation team has acted <a href="https://forum.effectivealtruism.org/posts/myp9Y9qJnpEEWhJF9/linch-s-shortform?commentId=DvPcdhnN7wcXpcB7Z">against</a> <a href="https://forum.effectivealtruism.org/posts/Pfayu5Bf2apKreueD/?commentId=7cHvfzMLw2Jua9JPh">disagreeable</a> <a href="https://forum.effectivealtruism.org/posts/FZFzqPYpTpGGRhyrj/does-ea-get-the-best-people-hypotheses-call-for-discussion?commentId=o3mahDSh4wuHTvsXh">people</a> <a href="https://forum.effectivealtruism.org/posts/CfEAggjzSDrado6ZC/forecasting-our-world-in-data-the-next-100-years?commentId=upkHDudfh8c9FpM8u">that</a> <a href="https://forum.effectivealtruism.org/posts/DB9ggzc5u9RMBosoz/wrong-lessons-from-the-ftx-catastrophe?commentId=cp6ngfKrqyjsuAQoo">I</a> <a href="https://forum.effectivealtruism.org/posts/4zjnFxGWYkEF4nqMi/how-could-we-have-avoided-this?commentId=Q7BQJFyEwk96Q6g95">liked</a>.</p>
<p><strong>Counterpoint</strong>: When I review the <a href="https://forum.effectivealtruism.org/moderatorComments">moderation comments</a> log, moderation actions seem infrequent. I guess that the banning or warning of disagreeable people whom I like was just memorable to me.</p>
<h3>EA forum culture evolution</h3>
<p>My impression is that the EA forum has been catering more to the <a href="https://nothinghuman.substack.com/p/the-tyranny-of-the-marginal-user">marginal user</a>; creating more introductory content, signposts, accessibility features, decreasing barriers to entry, etc. As the audience has increased, the marginal user is mostly a newbie. To me, the forum has been becoming something more like Reddit over time, which I dislike.</p>
<p>In stark contrast, consider <a href="https://news.ycombinator.com/">Hackernews</a>. Hackernews is an influential tech forum with <a href="https://news.ycombinator.com/item?id=33454140">5M monthly users and 10M views/day</a>. It has been able to retain its slim design through the years. Its moderation team has three people, and they <a href="https://news.ycombinator.com/item?id=34920400"><em>correspond with users via email</em></a>.</p>
<h3>Brief thoughts on cost-effectiveness.</h3>
<p>The EA forum&rsquo;s existence is valuable. It is still a place for high-quality discussion, and it helps the EA community collaborate on research, coordinate, identify opportunities, make sense of incoming challenges. But on top of the EA forum&rsquo;s existence, are changes made in recent years positive at all, and worth $2M/year if so?</p>
<p>My individual perspective, my inside view, my personal guess is that a lean and mean version of the EA forum, in the style of Hackernews, would have done a better job for less money. From that perspective, the cost-effectiveness of the marginal $1.5M would be negative. Making a <a href="https://forum.effectivealtruism.org/posts/PAco5oG579k2qzrh9/ltff-and-eaif-are-unusually-funding-constrained-right-now">marginal donation</a> to the EA Infrastructure or Long-term Future Fund would have been a better choice.</p>
<p>A different perspective, which I don&rsquo;t quite know how to inhabit, might be to argue that a small improvement in user experience leads to an increased chance that a person will become more committed to EA than they otherwise would, and that this is valuable. For example:</p>
<ol>
<li>if the EA forum had 500k unique yearly visitors, and improvements to the forum in recent years mean that 1% of them continue interacting with the EA movement, that would lead to 5k counterfactual EAs. If we think that creating more EAs is valuable, and we value this at $10k per EA, this would be worth $50M.</li>
<li>if the forum influenced five to a hundred decisions a day, each worth $1k to $100k, and improved them by 1% to 20%, this would be worth ~$20M a year (e.g., 50 decisions a day, each worth $10k and improved by 10%, comes to ~$18M a year).</li>
</ol>
<p>The problem with those two hypothetical examples is that I don&rsquo;t buy the numbers. I think it&rsquo;s easy to greatly overestimate small percentages: when one is inclined to model something as having an influence of 1%, it&rsquo;s often 0.01% instead. Less importantly, I think one should use Shapley values instead of counterfactual values in order to avoid double-counting and over-spending<sup id="fnref:2"><a href="#fn:2" rel="footnote">2</a></sup>.</p>
<h3>Suggestions</h3>
<p>If you are a user of the forum&hellip;</p>
<ul>
<li>Consider that the EA forum is currently pushing content on you. Make use of it if you are a newbie, but maybe actively filter it out once you are not.</li>
<li>Consider using faster and more minimal frontends, like <a href="https://ea.greaterwrong.com/">ea.greaterwrong.com</a> or my own opinionated <a href="https://forum.nunosempere.com">forum.nunosempere.com</a>.</li>
<li>Consider interacting with the EA forum frontpage through <a href="https://forum.effectivealtruism.org/feed.xml?view=community-rss&amp;karmaThreshold=30">RSS</a> or the <a href="https://forum.effectivealtruism.org/allPosts">all posts</a> page, not the frontpage.</li>
<li>Host your own content on independent platforms, like substack or your own blog, and build your own audience, rather than relying on a platform you don&rsquo;t control. You can always cross-post it to the EA forum, but having an independent place to build your own audience and as a hedge costs you little.</li>
</ul>
<p>If you are a CEA director or middle manager, you might have thought about this more than I have. Still, you might want to:</p>
<ul>
<li>Consider going back to ~1 developer and ~1 content person; save &gt;$1M/year of your and your donors' money. My sense is that you are probably going to have to do this anyways, since you will probably not get enough money from donors<sup id="fnref:3"><a href="#fn:3" rel="footnote">3</a></sup> to continue your current course.<sup id="fnref:4"><a href="#fn:4" rel="footnote">4</a></sup></li>
<li>Consider characterizing the EA forum&rsquo;s team role to be one of lightly shepherding discussion, not leading it or defining it.</li>
<li>Consider reflecting on which incentives led to the creation of a larger EA Forum team. For example, Google has well-known incentives around managers being rewarded for leading larger teams to develop new products, and doesn&rsquo;t value maintenance, leading to a continuous churn and sunsetting of Google products. Might something similar, though at a lower scale, have happened here?</li>
<li>As a distant fourth point, consider opening up authentication mechanisms so that users can make comments and posts using open-source frontends. This was previously doable through the greaterwrong frontend, but is no longer possible. It might not be doable with your current software stack, or it might be too difficult, though.</li>
</ul>
<p>If you are working on the EA forum&hellip;</p>
<ul>
<li>I am probably missing a bunch of factors in this analysis. If you think that spending $2M/year, or having 6 to 8 people full-time on the EA forum, is meaningful, you might want to post a BOTEC outlining why.</li>
<li>I think that this post probably sounds very harsh, sorry. Note that these three things can be true at the same time: a) a more minimalistic forum would have been better, b) CEA leadership made a bad judgment call expanding the EA forum during the FTX days and will now have to downsize, c) given your work description, you did good work.</li>
<li>It is possible that your current position is precarious, e.g., that you might be fired, or transferred to a different project within CEA.</li>
</ul>
<div class="footnotes">
<hr/>
<ol>
<li id="fn:1">
This is not a great amount in the grand scheme of things. Still, I am interested in it for two reasons: a) I&rsquo;m working on a different piece, and this is a small, concrete case study that I can later reference, and b) I used to cherish the EA forum, and wrote over 100k words in it, only to see it become hostile to the type of disagreeable person that I am.<a href="#fnref:1" rev="footnote">&#8617;</a></li>
<li id="fn:2">
E.g., I think that if four agents (80,000 hours; a local EA group; a personal friend; the EA forum) are needed to make someone significantly more altruistic, each organization should get &frac14;th of the credit. Otherwise the credit would sum up to more than 100%, and this hinders comparisons between opportunities. For a longer treatment of this topic, see <a href="https://forum.effectivealtruism.org/posts/XHZJ9i7QBtAJZ6byW/shapley-values-better-than-counterfactuals">this post</a>.<a href="#fnref:2" rev="footnote">&#8617;</a></li>
<li id="fn:3">
Realistically, this is going to be mainly Open Philanthropy, as other donors can&rsquo;t support $2M/year.<a href="#fnref:3" rev="footnote">&#8617;</a></li>
<li id="fn:4">
You could check this by creating a market on Manifold!<a href="#fnref:4" rev="footnote">&#8617;</a></li>
</ol>
</div>
<p>
<section id='isso-thread'>
<noscript>javascript needs to be activated to view comments.</noscript>
</section>
</p>

@ -0,0 +1,183 @@
### Introduction
In recent years there have been various attempts at using forecasting to discern the shape of the future development of artificial intelligence, like the [AI progress Metaculus tournament](https://www.metaculus.com/tournament/ai-progress/), the Forecasting Research Institute's [existential risk forecasting tournament/experiment](https://forum.effectivealtruism.org/posts/un42vaZgyX7ch2kaj/announcing-forecasting-existential-risks-evidence-from-a), [Samotsvety forecasts](https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts) on the topic of AI progress and dangers, or various questions on [INFER](https://www.infer-pub.com) about short-term technological progress.
Here is a list of reasons, written with early input from Misha Yagudin, on why using forecasting to make sense of AI developments can be tricky, as well as some casual suggestions of ways forward.
### Excellent forecasters and Superforecasters™ are an imperfect fit for long-term questions
Here are some reasons why we might expect longer-term predictions to be more difficult:
1. No fast feedback loops for long-term questions. You can't get that many predict/check/improve cycles, because questions many years into the future, tautologically, take many years to resolve. There are shortcuts, like this [past-casting](https://www.quantifiedintuitions.org/pastcasting) app, but they are imperfect.
2. It's possible that short-term forecasters might acquire habits and intuitions that are good for forecasting short-term events, but bad for forecasting longer-term outcomes. For example, "things will change more slowly than you think" is a good heuristic to acquire for short-term predictions, but might be a bad heuristic for longer-term predictions, in the same sense that "people overestimate what they can do in a week, but underestimate what they can do in ten years". This might be particularly insidious to the extent that forecasters acquire intuitions which they can see are useful, but can't tell where they come from. In general, it seems unclear to what extent short-term forecasting skills would generalize to skill at longer-term predictions.
3. "Predict no change" in particular might do well, until it doesn't. Consider a world which has a 2% probability of seeing a worldwide pandemic, or some other large catastrophe. Then on average it will take 50 years for one to occur. But at that point, those predicting a 2% will have a poorer track record compared to those who are predicting a ~0%.
4. In general, we have been in a period of comparative technological stagnation, and forecasters might be adapted to that, in the same way that e.g., startups adapted to low interest rates.
5. Sub-sampling artifacts within good short-term forecasters are tricky. For example, my forecasting group Samotsvety is relatively bullish on transformative technological change from AI, whereas the Forecasting Research Institute's pick of forecasters for their existential risk survey was more bearish.
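To illustrate point 3, here is a toy calculation of my own (the numbers are made up; the Brier score is the squared difference between forecast and outcome, lower is better):
```
#include <stdio.h>

/* Toy illustration: a yearly event with true probability 2%. One forecaster
   honestly predicts 2% every year; another predicts no change (0%). The event
   happens in year 50. Before then, the no-change forecaster looks strictly
   better; afterwards, slightly worse. */
int main(void)
{
	double honest = 0.02, no_change = 0.0;
	double sum_honest = 0, sum_no_change = 0;
	int years = 50;

	for (int year = 1; year <= years; year++) {
		int outcome = (year == years); /* the event hits in year 50 */
		sum_honest += (honest - outcome) * (honest - outcome);
		sum_no_change += (no_change - outcome) * (no_change - outcome);
		if (year >= years - 1)
			printf("avg Brier after year %d: honest %.4f, no-change %.4f\n",
			    year, sum_honest / year, sum_no_change / year);
	}
	return 0;
}
```
This prints an average Brier score of 0.0000 for the no-change forecaster vs 0.0004 for the honest one after year 49, and only after the event hits in year 50 does the honest forecaster pull ahead, 0.0196 vs 0.0200.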
### Forecasting loses value when decontextualized, and current forecasting seems pretty decontextualized
Forecasting seems more valuable when it is commissioned to inform a specific decision. For instance, suppose that you were thinking of starting a new startup. Then it would be interesting to look at:
- The base rate of success for startups
- The base rate of success for all new businesses
- The base rate of success for startups that your friends and wider social circle have started
- Your personal rate of success at things in life
- The inside view: decomposing the space between now and potential success into steps and giving explicit probabilities to each step (see the worked product below)
- etc.
With this in mind, you could estimate the distribution of monetary returns to starting a startup, vs e.g., remaining an employee somewhere, and make the decision about what to do next with that estimate as an important factor.
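As a worked version of that inside-view step, with illustrative numbers of my own, the per-step probabilities might multiply out as:
$$P(\text{success}) \approx \prod_i P(\text{step}_i \mid \text{earlier steps}) = 0.8 \times 0.5 \times 0.3 = 0.12$$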
But our impression is that AI forecasting hasn't been tied to specific decisions like that. Instead, it has tended to ask questions that might contribute to a "holistic understanding" of the field. For example, look at [Metaculus' AI progress tournament](https://www.metaculus.com/tournament/ai-progress/). The first few questions are:
- [How many Natural Language Processing e-prints will be published on arXiv over the 2021-01-14 to 2030-01-14 period?](https://www.metaculus.com/questions/6299/nlo-e-prints-2021-01-14-to-2030-01-14/)
- [What percent will software and information services contribute to US GDP in Q4 of 2030?](https://www.metaculus.com/questions/5958/it-as--of-gdp-in-q4-2030/)
- [What will be the average top price performance (in G3D Mark /$) of the best available GPU on the following dates?](https://www.metaculus.com/questions/11241/top-price-performance-of-gpus/)
My impression is that these questions don't have the immediacy of the previous example about starting a startup; they aren't closely connected to impending decisions. You could draft questions which are more connected to impending decisions, like asking about whether specific AI safety research agendas would succeed, whether AI safety organizations that were previously funded would be funded again, or about how Open Philanthropy would evaluate its own AI safety grant-making in the future. However, these might be worse qua forecasting questions, or at least less Metaculus-like.
Overall, my impression is that forecasting questions about AI haven't been tied to specific decisions in a way that would make them incredibly valuable. This is curious, because if we look at the recent intellectual history of forecasting, its original raison d'être was to make US intelligence reports more useful, and those reports were directly tied to decisions. But now forecasts are presented separately. In our experience, it has often been more meaningful for forecasters to look in depth at a topic, and then produce a report which contains predictions, rather than producing predictions alone. But this doesn't happen often.
### The phenomena of interest are really imprecise
Misha Yagudin recalls that he knows of at least five different operationalizations of "human-level AGI". "Existential risk" is also ambiguous: does it refer to human extinction? or to losing a large fraction of possible human potential? if so, how is "human potential" specified?
To deal with this problem, one can:
- Not spend much time on operationalization, and accept that different forecasters will be talking about slightly different concepts.
- Try to specify concepts as precisely as possible, which involves a large amount of effort.
Neither of those options is great. Although some platforms like Manifold Markets and Polymarket are experimenting with under-specified questions, forecasting seems to work best when working with clear definitions. And the fact that this is expensive to do makes the topic of AI a bit of a bad fit for forecasting.
CSET had a great report trying to address this difficulty: [Future Indices](https://search.nunosempere.com/search?q=Future%20Indices). By having a few somewhat overlapping questions on a topic, e.g., a few distinct operationalizations of AGI, or a few proxies that capture different aspects of a domain of interest, we can have a summary index that better captures the fuzzy concept that we are trying to reason about than any one imperfect question.
That approach does make dealing with imprecise phenomena easier. But it increases costs, and a bundle of very similar questions can sometimes be dull to forecast on. It also doesn't solve this problem completely—some concepts, like "disempowering humanity", still remain very ambiguous.
Here are some high-level examples for which operationalization might still be a concern:
- You might want to ask about whether "AI will go well". The answer depends whether you compare this against "humanity's maximum potential" or with human extinction.
- You might want to ask whether any AI startup will "have powers akin to that of a world government".
- You might want to ask about whether measures taken by AI labs are "competent".
- You might want to ask about whether some AI system is "human-level", and find that there are wildly different operationalizations available for this.
Here are some lower-level but more specific examples:
- Asking about FLOPs/$ seems like a tempting abstraction at first, because then you can estimate the FLOPs if the largest experiment is willing to spend $100M, $1B, $10B, etc. However, the abstraction ends up breaking down a bit when you look at specifics.
    - Dollars are unspecified: For example, consider a group like [Inflection](https://www.reuters.com/technology/inflection-ai-raises-13-bln-funding-microsoft-others-2023-06-29/), which raises $1B from NVIDIA and Microsoft, and pays NVIDIA and Microsoft $1B to buy the chips and build the datacenters. Then the FLOPs/$ is very under-defined. OpenAI's deal with Microsoft also makes their FLOPs/$ ambiguous. If China becomes involved, its ability to restrict emigration and the pre-eminent role of its government in the economy also make FLOPs/$ ambiguous.
    - FLOPs are under-specified. Do you mean 64-bit precision? 16-bit precision? 8-bit precision? Do you count a [multiply-accumulate](https://wikiless.nunosempere.com/wiki/Multiply%E2%80%93accumulate_operation?lang=en) operation as one FLOP or two FLOPs?
- Asking about what percentage of labor is automated gets tricky when, instead of automating exactly past labor, you automate a complement. For example, instead of automating a restaurant as is, you design the menu and experience that is most amenable to being automated. Portable music devices don't automate concert halls, they provide a different experience. These differences matter when asking short-term resolvable questions about automation.
- You might have some notion of a "leading lab". But operationalizing this is tricky, and simply enumerating current "leading labs" risks them being sidelined by an upstart, or that list not including important Chinese labs, etc. In our case, we have operationalized "leading lab" as "a lab that has performed a training run within 2 OOM of the largest ever at the time of the training run, within the last 2 years", which leans on the inclusive side, but requires keeping good data on what the largest training run is at each point in time, like [here](https://epochai.org/research/ml-trends), which might not be available in the future. A small sketch of this predicate follows below.
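As an illustration of that "leading lab" operationalization, here is a sketch of my own; the labs, numbers, and struct names are all made up for the example:
```
#include <stdio.h>

/* "Leading lab": has performed a training run within 2 OOM of the largest
   training run ever at the time of that run, within the last 2 years.
   The data points below are hypothetical. */
struct run {
	const char *lab;
	double log10_flops; /* log10 of training compute */
	int year;
};

int main(void)
{
	struct run runs[] = { /* assumed sorted by date */
		{ "LabA", 24.5, 2022 },
		{ "LabB", 23.0, 2023 },
		{ "LabC", 21.9, 2023 },
	};
	int n = sizeof runs / sizeof runs[0], current_year = 2023;
	double largest_so_far = 0;

	for (int i = 0; i < n; i++) {
		if (runs[i].log10_flops > largest_so_far)
			largest_so_far = runs[i].log10_flops;
		int recent = (current_year - runs[i].year) <= 2;
		int within_2_oom = (largest_so_far - runs[i].log10_flops) <= 2.0;
		if (recent && within_2_oom)
			printf("%s counts as a leading lab\n", runs[i].lab);
	}
	return 0;
}
```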
### Many questions don't resolve until it's already too late
Some of the questions we are most interested in, like "will AI permanently disempower humanity", "will there be a catastrophe caused by an AI system that kills >5%, or >95% of the human population", or "over the long-term, will humanity manage to harness AI to bring forth a flourishing future & achieve humanity's potential?" don't resolve until it's already too late.
This adds complications, because:
- Using short-term proxies rather than long-term outcomes brings its own problems
- Question resolution after transformative AI poses incentive problems. E.g., the answer incentivized by "will we get unimaginable wealth?" is "no", because if we do get unimaginable wealth, the reward is worth less.
- You may have ["prevention paradox"](https://en.wikipedia.org/wiki/Prevention_paradox) and fixed-point problems, where asking for a probability reveals that some risk is high, after which you take measures to reduce that risk. You could have asked about the probability conditional on taking no measures, but then you can't resolve the forecasting question.
- You can chain forecasts, e.g., ask "what will [another group] predict that the probability of [some future outcome] is, in one year". But this adds layers of indirection and increases operational burdens.
Another way to frame this is that some stances about how the future of AI will go are unfalsifiable until a hypothesized treacherous turn in which humanity dies, but don't imply strong enough views on short-term developments for their holders to be willing to bet on short-term events. That seems to be the takeaway from the [late 2021 MIRI conversations](https://www.lesswrong.com/s/n945eovrA3oDueqtq), which didn't result in a string of $100k bets. While this is a disappointing position to be in, I'm not sure that forecasting can do much here beyond pointing it out.
### More dataset gathering is needed
A pillar of Tetlock-style forecasting is looking at historical frequencies and extrapolating trends. For the topic of AI, it might be interesting to do some systematic data gathering, in the style of Our World In Data-type work, on measures like:
- Algorithmic improvement for [chess/image classification/weather prediction/...]: how much compute do you need for equivalent performance? what performance can you get for equivalent compute?
- Price of FLOPs
- Size of models
- Valuation of AI companies, number of AI companies through time
- Number of organizations which have trained a model within 1, 2 OOM of the largest model
- Performance on various capability benchmarks
- Very noisy proxies: Machine learning papers uploaded to arXiv, mentions in political speeches, mentions in American legislation, Google n-gram frequency, mentions in major newspaper headlines, patents, number of PhD students, number of Sino-American collaborations, etc.
- Answers to AI Impacts' survey of ML researchers through time
- Funding directed to AI safety through time
Note that datasets for some of these exist, but systematic data collection and presentation in the style of [Our World In Data](https://ourworldindata.org/) would greatly simplify creating forecasting pipelines about these questions, and also produce an additional tool for figuring out "what is going on" at a high level with AI. As an example, there is a difference between "Katja Grace polls ML researchers every few years", and "there are pipelines in place to make sure that that survey happens regularly, and forecasting questions are automatically created five years in advance and included in forecasting tournaments with well-known rewards". [Epoch](https://epochai.org/) is doing some good work in this domain.
### Forecasting AI hits the limits of Bayesianism in general
One could answer worries about Tetlock-style forecasting by saying: sure, that particular brand of forecasting isn't known to work on long-term predictions. But we have good theoretical reasons to think that Bayesianism is a good model of a perfect reasoner: see for example the review of [Cox's theorem](https://en.wikipedia.org/wiki/Cox%27s_theorem) in the first few chapters of [Probability Theory: The Logic of Science](https://annas-archive.org/md5/ddec0cf1982afa288d61db3e1f7d9323). So the thing that we should be doing is some version of subjective Bayesianism: keeping track of evidence and expressing and sharpening our beliefs with further evidence. See [here](https://nunosempere.com/blog/2022/08/31/on-cox-s-theorem-and-probabilistic-induction/) for a blog post making this argument at more length, though still informally.
But Bayesianism is a good model of a perfect reasoner with *infinite compute* and *infinite memory*, and in particular with access to a bag of hypotheses which contains the true hypothesis. Humans don't have infinite compute, and sometimes don't have the correct hypothesis in mind. [Knightian uncertainty](https://en.wikipedia.org/wiki/Knightian_uncertainty), [Kuhnian revolutions](https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions)[^kuhn], [black swans](https://en.wikipedia.org/wiki/Black_swan_theory) and [ambiguity aversion](https://en.wikipedia.org/wiki/Ambiguity_aversion) can be understood as consequences of normally getting by with approximate Bayesianism, but sometimes getting bitten by the fact that the approximation is bounded and limited.
[^kuhn]: To spell this out more clearly, Kuhn was looking at the structure of scientific revolutions, and he noticed that you have these "paradigm changes" every now and then. To a naïve Bayesian, those paradigm changes are kind of confusing, and shouldn't have any special status: you should just have hypotheses, and they should rise and fall in likelihood according to Bayes' rule. But as a Bayesian who knows he has finite compute/memory, you can think of Kuhnian revolutions as encountering a true hypothesis which was outside your previous hypothesis space, and having to recalculate. On this topic, see [Just-in-time Bayesianism](https://nunosempere.com/blog/2023/02/04/just-in-time-bayesianism/) or [A computable version of Solomonoff induction](https://nunosempere.com/blog/2023/03/01/computable-solomonoff/).
So there are some domains where we can get along by being approximately Bayesian, like coin flips and blackjack tables; domains where we pull our hair out and accept that we don't have infinite compute, like maybe some turbulent and chaotic physical systems, or trying to predict dreams; and domains in which our ability to predict is meaningfully improving with time, like weather forecasts, where we can throw supercomputers and PhD students at the problem, because we care.
Now the question is where AI in particular falls within that spectrum. Personally, I suspect that it is a domain in which we are likely to not have the correct hypothesis in our prior set of hypotheses. For example, observers in general, but also the [Machine Intelligence Research Institute](https://intelligence.org/) in particular, failed to predict the rise of LLMs and to orient their efforts toward making such systems safer, or toward preventing such systems from coming into existence. I think this tweet, though maybe meant to be hurtful, is also informative about how tricky a domain predicting AI progress is:
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">eliezer has IMO done more to accelerate AGI than anyone else.<br><br>certainly he got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc.</p>&mdash; Sam Altman (@sama) <a href="https://twitter.com/sama/status/1621621724507938816?ref_src=twsrc%5Etfw">February 3, 2023</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
However, consider the following caveat: imagine that instead of being interested in AI progress, we were interested in social science, and concerned that the field couldn't arrive at the correct conclusion in cases where that conclusion was Republican-flavored. Then one could notice that moving from p-values to likelihood ratios and Bayesian calculations wouldn't particularly help, since Bayesianism doesn't work unless your prior assigns a sufficiently high probability to the correct hypothesis. In this case, I think one easy mistake to make might be to just shrug and keep using p-values.
Similarly, for AI progress, one could notice that there is this subtle critique of forecasting and Bayesianism, and move to using, I don't know, scenario planning, which, arguendo, could be even worse: it could assume even more strongly that you know the shape of events to come, or fail to provide mechanisms for noticing that none of your hypotheses are worth much. I think that would be a mistake.
### Forecasting also has a bunch of other limitations as a genre
You can see forecasting as a genre. In it, someone writes a forecasting question, that question is deemed sufficiently robust, and then forecasters produce probabilities on it. As a genre, it has some limitations. For instance, when curious about a topic, not all roads lead to forecasting questions, and working in a project such that you *have* to produce forecasting questions could be oddly limiting.
The conventions of the forecasting genre also dictate that forecasters will spend a fairly short amount of time researching before making a prediction. Partly this is a result of, for example, the scoring rule in Metaculus, which incentivizes forecasting on many questions. Partly this is because forecasting platforms don't generally pay their forecasters, and even those that are [well funded](https://www.openphilanthropy.org/grants/applied-research-laboratory-for-intelligence-and-security-forecasting-platforms/) pay their forecasters badly, which leads to forecasting being a hobby rather than a full-time occupation. If one thinks that some questions require one to dig deep, and that one will otherwise easily produce shitty forecasts, this might be a particularly worrying feature of the genre.
Perhaps also as a result of its unprofitability, the forecasting community has also tended to see a large amount of churn, as hobbyist forecasters rise up in their regular careers and it becomes more expensive for them in terms of income lost to forecast on online platforms. You also see this churn in terms of employees of these forecasting platforms, where maybe someone creates some new project—e.g., Replication Markets, Metaculus' AI Progress Tournament, Ought's Elicit, etc.—but then that project dies as its principal person moves on to other topics.
Forecasting also makes use of scoring rules, which aim to reward forecasters such that they will be incentivized to input their true probabilities. Sadly, these often have the effect of incentivizing people to not collaborate and share information. This can be fixed by using more capital-intensive scoring rules that incentivize collaboration, like [these ones](https://github.com/SamotsvetyForecasting/optimal-scoring) or by grouping forecasters into teams such that they will be incentivized to share information within a team.
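As a check on the claim that such scoring rules elicit true probabilities, here is the standard one-line derivation for the quadratic (Brier) rule; this is textbook material rather than something from the original discussion. If a forecaster believes the event has probability p and reports q, their expected penalty is
$$E[S(q)] = p(1-q)^2 + (1-p)q^2, \qquad \frac{dE[S(q)]}{dq} = 2(q - p),$$
which is minimized exactly at q = p, so reporting one's true belief is optimal.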
### As an aside, here is a casual review of the track record of long-term predictions
If we review the track record of superforecasters on longer-term questions, we find that... there isn't that much evidence here—remember that the [ACE program](https://wikiless.nunosempere.com/wiki/Aggregative_Contingent_Estimation_Program?lang=en) started in 2010. In *Superforecasting* (2015), Tetlock wrote:
> Taleb, Kahneman, and I agree there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious—“there will be conflicts”—and the odd lucky hits that are inevitable whenever lots of forecasters make lots of forecasts. These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. In my EPJ research, the accuracy of expert predictions declined toward chance five years out. And yet, this sort of forecasting is common, even within institutions that should know better.
However, on p. 33 of [Long-Range Subjective-Probability Forecasts of Slow-Motion Variables in World Politics: Exploring Limits on Expert Judgment](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4377599) (2023), we see that experts predicting "slow-motion variables" 25 years into the future attain a Brier score of 0.07, which isn't terrible.
Karnofsky, the erstwhile head-honcho of Open Philanthropy, [spins](https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/) some research by Arb and others as saying that the track record of futurists is "fine". [Here](https://danluu.com/futurist-predictions/) is a more thorough post by Dan Luu which concludes that:
> ...people who were into "big ideas" who use a few big hammers on every prediction combined with a cocktail party idea level of understanding of the particular subject to explain why a prediction about the subject would fall to the big hammer generally fared poorly, whether or not their favored big ideas were correct. Some examples of "big ideas" would be "environmental doomsday is coming and hyperconservation will pervade everything", "economic growth will create near-infinite wealth (soon)", "Moore's law is supremely important", "quantum mechanics is supremely important", etc. Another common trait of poor predictors is lack of anything resembling serious evaluation of past predictive errors, making improving their intuition or methods impossible (unless they do so in secret). Instead, poor predictors often pick a few predictions that were accurate or at least vaguely sounded similar to an accurate prediction and use those to sell their next generation of predictions to others.
>
> By contrast, people who had (relatively) accurate predictions had a deep understanding of the problem and also tended to have a record of learning lessons from past predictive errors. Due to the differences in the data sets between this post and Tetlock's work, the details are quite different here. The predictors that I found to be relatively accurate had deep domain knowledge and, implicitly, had access to a huge amount of information that they filtered effectively in order to make good predictions. Tetlock was studying people who made predictions about a wide variety of areas that were, in general, outside of their areas of expertise, so what Tetlock found was that people really dug into the data and deeply understood the limitations of the data, which allowed them to make relatively accurate predictions. But, although the details of how people operated are different, at a high-level, the approach of really digging into specific knowledge was the same.
### In comparison with other mechanisms for making sense of future AI developments, forecasting does OK.
Here are some mechanisms that the Effective Altruism community has historically used to try to make sense of possible dangers stemming from future AI developments:
- Books, like Bostrom's *Superintelligence*, which focused on the abstract properties of highly intelligent and capable agents in the limit.
- [Reports](https://www.openphilanthropy.org/research/?q=&focus-area%5B%5D=potential-risks-advanced-ai&content-type%5B%5D=research-reports) by Open Philanthropy. They either try to model AI progress in some detail, like [example 1](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines), or look at priors on technological development, like [example 2](https://www.openphilanthropy.org/research/semi-informative-priors-over-ai-timelines/).
- Mini think tanks, like Rethink Priorities, Epoch or AI Impacts, which produce their own research and reports.
- Larger think tanks, like CSET, which produce reports like [this one](https://cset.georgetown.edu/publication/future-indices/) on Future Indices.
- Online discussion on lesswrong.com, which typically assumes things like: that intelligence gains would be fast and explosive, that we should aim to design a mathematical construction that guarantees safety, that iteration would not be advisable in the face of fast intelligence gains, etc.
- Occasionally, theoretical or mathematical arguments or models of risk.
- One-off projects, like Drexler's [Comprehensive AI systems](https://www.fhi.ox.ac.uk/reframing/)
- Questions on forecasting platforms, like Metaculus, that try to solidly operationalize possible AI developments and dangers, and ask their forecasters to anticipate when and whether they will happen.
- Writeups from forecasting groups, like [Samotsvety](https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts)
- More recently, the Forecasting Research Institute's [existential risk tournament/experiment writeup](https://forecastingresearch.org/xpt), which has tried to translate geopolitical forecasting mechanisms to predicting AI progress, with mixed success.
- Deferring to intellectuals, ideologues, and cheerleaders, like Toby Ord, Yudkowsky or MacAskill.
None of these options, as they currently exist, seem great. Forecasting has the hurdles discussed above, but maybe other mechanisms have even worse downsides, particularly the more pundit-like ones. Conversely, forecasting will be worse than deferring to a brilliant theoretical mind that is able to grasp the dynamics and subtleties of future AI development, like perhaps Drexler's on a good day.
Anyways, you might think that this forecasting thing shows potential. Were you a billionaire, money would not be a limitation for you, so...
### In this situation, here are some strategies of which you might avail yourself
#### A. Accept the Faustian bargain
1. Make a bunch of short-term and long-term forecasting questions on AI progress
2. Wait for the short-term forecasting questions to resolve
3. Weight the forecasts for the long-term questions according to accuracy on the short-term questions
This is a Faustian bargain because of the reasons reviewed above, chiefly that short-term forecasting performance is not a guarantee of longer-term forecasting performance. A cheap version of this would be to look at the best short-term forecasters on the AI categories on Metaculus, and report their probabilities on a few AI and existential risk questions, which would be more interpretable than the current opaque "Metaculus prediction".
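As a minimal sketch of how that weighting could go, with invented Brier scores and probabilities, and inverse-Brier weighting as one arbitrary choice among many:

```
# Weight each forecaster's long-term probability by their accuracy
# (here, inverse Brier score) on resolved short-term questions.
short_term_brier = {"forecaster_a": 0.10, "forecaster_b": 0.25}  # made up
long_term_p = {"forecaster_a": 0.15, "forecaster_b": 0.40}       # made up

weights = {f: 1 / b for f, b in short_term_brier.items()}
aggregate = sum(weights[f] * long_term_p[f] for f in weights) / sum(weights.values())
print(round(aggregate, 3))  # ~0.221: the better short-term track record dominates,
                            # which is exactly the Faustian part of the bargain
```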
If you think that your other methods of making sense of what's going on are sufficiently bad, you could choose this and hope for the best? Or, alternatively, you could anchor your beliefs on a weighted aggregate of the best short-term forecasters and the most convincing theoretical views. Maybe things will be fine?
#### B. Attempt to do a Bayesianism
Go to the effort of rigorously formulating hypotheses, then keep track of incoming evidence for each hypothesis. If a new hypothesis comes in, try to do some version of [just-in-time Bayesianism](https://nunosempere.com/blog/2023/02/04/just-in-time-bayesianism/), i.e., monkey-patch it after the fact. Once you are specifying your beliefs numerically, you can deploy some cute incentive mechanisms and [reward people who change your mind](https://github.com/SamotsvetyForecasting/optimal-scoring/blob/master/3-amplify-bayesian/amplify-bayesian.pdf).
Hope that keeping track of hypotheses about the development of AI at least gives you some discipline, and enables you to shed untrue hypotheses or frames a bit earlier than you otherwise would have. Have the discipline to translate the worldviews of various pundits into specific probabilities[^tetlock], and listen to them less when their predictions fail to come true. And hope that going to the trouble of doing things that way allows you to anticipate stuff 6 months to 2 years sooner than you would have otherwise, and that it is worth the cost.
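Here is a toy version of that bookkeeping; the hypotheses, likelihoods and numbers are all invented for the example:

```
# Track a handful of hypotheses numerically and update on incoming evidence.
beliefs = {"slow, continuous progress": 0.5,
           "fast, discontinuous progress": 0.3,
           "imminent plateau": 0.2}

def update(beliefs, likelihoods):
    # One Bayes step: multiply each hypothesis by the likelihood it assigns
    # to the new observation, then renormalize.
    posterior = {h: beliefs[h] * likelihoods[h] for h in beliefs}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

# Observation: say, compute for the largest training run doubled this year.
beliefs = update(beliefs, {"slow, continuous progress": 0.4,
                           "fast, discontinuous progress": 0.7,
                           "imminent plateau": 0.1})

# Just-in-time step: a hypothesis you hadn't considered arrives; carve out
# some probability mass for it and renormalize, i.e., monkey-patch it in.
beliefs = {h: p * 0.9 for h, p in beliefs.items()}
beliefs["something not yet imagined"] = 0.1
print(beliefs)
```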
[^tetlock]: Back in the day, Tetlock received a [grant](https://www.openphilanthropy.org/grants/university-of-pennsylvania-philip-tetlock-on-forecasting/#2-about-the-grant) to "systematically convert vague predictions made by prominent pundits into explicit numerical forecasts", but I haven't been able to track what happened to it, and I suspect it never happened.
#### C. Invest in better prediction pipelines as a whole
Try to build up some more speculative and [formidable](https://nunosempere.com/blog/2023/07/19/better-harder-faster-stronger/) type of forecasting that can deal with the hurdles above. Be more explicit about the types of decisions that you want better foresight for, realize that you don't have the tools you need, and build someone up to be that for you.

@ -0,0 +1,5 @@
<p>
<section id='isso-thread'>
<noscript>JavaScript needs to be activated to view comments.</noscript>
</section>
</p>

@ -0,0 +1,5 @@
MARKDOWN=/usr/bin/markdown -f fencedcode -f ext -f footnote -f latex
build:
$(MARKDOWN) index.md > temp
cat title.md temp isso-snippet.txt > ../index.md
rm temp

@ -0,0 +1,3 @@
Hurdles of using forecasting as a tool for making sense of AI progress
======================================================================

@ -0,0 +1,234 @@
Hurdles of using forecasting as a tool for making sense of AI progress
======================================================================
<h3>Introduction</h3>
<p>In recent years there have been various attempts at using forecasting to discern the shape of the future development of artificial intelligence, like the <a href="https://www.metaculus.com/tournament/ai-progress/">AI progress Metaculus tournament</a>, the Forecasting Research Institute&rsquo;s <a href="https://forum.effectivealtruism.org/posts/un42vaZgyX7ch2kaj/announcing-forecasting-existential-risks-evidence-from-a">existential risk forecasting tournament/experiment</a>, <a href="https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts">Samotsvety forecasts</a> on the topic of AI progress and dangers, or various questions on <a href="https://www.infer-pub.com">INFER</a> on short-term technological progress.</p>
<p>Here is a list of reasons, written with early input from Misha Yagudin, on why using forecasting to make sense of AI developments can be tricky, as well as some casual suggestions of ways forward.</p>
<h3>Excellent forecasters and Superforecasters™ have an imperfect fit for long-term questions</h3>
<p>Here are some reasons why we might expect longer-term predictions to be more difficult:</p>
<ol>
<li>No fast feedback loops for long-term questions. You can&rsquo;t get that many predict/check/improve cycles, because questions many years into the future, tautologically, take many years to resolve. There are shortcuts, like this <a href="https://www.quantifiedintuitions.org/pastcasting">past-casting</a> app, but they are imperfect.</li>
<li>It&rsquo;s possible that short-term forecasters might acquire habits and intuitions that are good for forecasting short-term events, but bad for forecasting longer-term outcomes. For example, &ldquo;things will change more slowly than you think&rdquo; is a good heuristic to acquire for short-term predictions, but might be a bad heuristic for longer-term predictions, in the same sense that &ldquo;people overestimate what they can do in a week, but underestimate what they can do in ten years&rdquo;. This might be particularly insidious to the extent that forecasters acquire intuitions which they can see are useful, but can&rsquo;t tell where they come from. In general, it seems unclear to what extent short-term forecasting skills would generalize to skill at longer-term predictions.</li>
<li>&ldquo;Predict no change&rdquo; in particular might do well, until it doesn&rsquo;t. Consider a world which has a 2% yearly probability of seeing a worldwide pandemic, or some other large catastrophe. Then on average it will take 50 years for one to occur. But at that point, those predicting 2% will have a poorer track record compared to those who were predicting ~0% (see the toy calculation after this list).</li>
<li>In general, we have been in a period of comparative technological stagnation, and forecasters might be adapted to that, in the same way that e.g., startups adapted to low interest rates.</li>
<li>Sub-sampling artifacts within good short-term forecasters are tricky. For example, my forecasting group Samotsvety is relatively bullish on transformative technological change from AI, whereas the Forecasting Research Institute&rsquo;s pick of forecasters for their existential risk survey was more bearish.</li>
</ol>
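<p>To spell out the third point with a toy calculation, where the 2% yearly probability is just an assumed base rate: if a catastrophe truly has a 2% chance per year, it arrives after 50 years on average, and until it does, the forecaster who says ~0% looks better:</p>
<pre><code># Toy Brier comparison: 49 quiet years, then a catastrophe in year 50.
def brier(p, outcome):
    return (p - outcome) ** 2

years = [0] * 49 + [1]
honest, complacent = 0.02, 0.001
print(sum(brier(honest, y) for y in years) / 50)      # ~0.0196
print(sum(brier(complacent, y) for y in years) / 50)  # ~0.0200
# Over the full 50 years honesty (barely) wins, but over the first 49 years
# the complacent forecaster's average penalty is ~400x smaller.
</code></pre>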
<h3>Forecasting loses value when decontextualized, and current forecasting seems pretty decontextualized</h3>
<p>Forecasting seems more valuable when it is commissioned to inform a specific decision. For instance, suppose that you were thinking of starting a new startup. Then it would be interesting to look at:</p>
<ul>
<li>The base rate of success for startups</li>
<li>The base rate of success for all new businesses</li>
<li>The base rate of success for startups that your friends and wider social circle have started</li>
<li>Your personal rate of success at things in life</li>
<li>The inside view: decomposing the space between now and potential success into steps and giving explicit probabilities to each step</li>
<li>etc.</li>
</ul>
<p>With this in mind, you could estimate the distribution of monetary returns to starting a startup, vs e.g., remaining an employee somewhere, and make the decision about what to do next with that estimate as an important factor.</p>
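<p>A minimal sketch of that kind of estimate, with all numbers invented for illustration:</p>
<pre><code># Monte Carlo estimate of the monetary returns to founding a startup,
# relative to five years of foregone salary. All inputs are made up.
import random

outcomes = [0, 2_000_000, 50_000_000]  # failure, modest exit, large exit
weights = [0.90, 0.09, 0.01]
foregone_salary = 5 * 100_000          # five years of salary not earned

samples = [random.choices(outcomes, weights)[0] - foregone_salary
           for _ in range(100_000)]
samples.sort()
print(sum(samples) / len(samples))  # mean is positive, around +180,000...
print(samples[len(samples) // 2])   # ...but the median founder is down 500,000
</code></pre>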
<p>But our impression is that AI forecasting hasn&rsquo;t been tied to specific decisions like that. Instead, it has tended to ask questions that might contribute to a &ldquo;holistic understanding&rdquo; of the field. For example, look at <a href="https://www.metaculus.com/tournament/ai-progress/">Metaculus' AI progress tournament</a>. The first few questions are:</p>
<ul>
<li><a href="https://www.metaculus.com/questions/6299/nlo-e-prints-2021-01-14-to-2030-01-14/">How many Natural Language Processing e-prints will be published on arXiv over the 2021-01-14 to 2030-01-14 period?</a></li>
<li><a href="https://www.metaculus.com/questions/5958/it-as--of-gdp-in-q4-2030/">What percent will software and information services contribute to US GDP in Q4 of 2030?</a></li>
<li><a href="https://www.metaculus.com/questions/11241/top-price-performance-of-gpus/">What will be the average top price performance (in G3D Mark /$) of the best available GPU on the following dates?</a></li>
</ul>
<p>My impression is that these questions don&rsquo;t have the immediacy of the previous example about startups failing; they aren&rsquo;t incredibly connected to impending decisions. You could draft questions which are more connected to impending decisions, like asking about whether specific AI safety research agendas would succeed, whether AI safety organizations that were previously funded would be funded again, or about how Open Philanthropy would evaluate its own AI safety grant-making in the future. However, these might be worse qua forecasting questions, or at least less Metaculus-like.</p>
<p>Overall, my impression is that forecasting questions about AI haven&rsquo;t been tied to specific decisions in a way that would make them incredibly valuable. This is curious, because if we look at the recent intellectual history of forecasting, its original raison d'être was to make US intelligence reports more useful, and those reports were directly tied to decisions. But now forecasts are presented separately. In our experience, it has often been more meaningful for forecasters to look in depth at a topic, and then produce a report which contains predictions, rather than producing predictions alone. But this doesn&rsquo;t happen often.</p>
<h3>The phenomena of interest are really imprecise</h3>
<p>Misha Yagudin recalls that he knows of at least five different operationalizations of &ldquo;human-level AGI&rdquo;. &ldquo;Existential risk&rdquo; is also ambiguous: does it refer to human extinction? or to losing a large fraction of possible human potential? if so, how is &ldquo;human potential&rdquo; specified?</p>
<p>To deal with this problem, one can:</p>
<ul>
<li>Not spend much time on operationalization, and accept that different forecasters will be talking about slightly different concepts.</li>
<li>Try to specify concepts as precisely as possible, which involves a large amount of effort.</li>
</ul>
<p>Neither of those options is great. Although some platforms like Manifold Markets and Polymarket are experimenting with under-specified questions, forecasting seems to work best when working with clear definitions. And the fact that this is expensive to do makes the topic of AI a bit of a bad fit for forecasting.</p>
<p>CSET had a great report trying to address this difficulty: <a href="https://search.nunosempere.com/search?q=Future%20Indices">Future Indices</a>. By having a few somewhat overlapping questions on a topic, e.g., a few distinct operationalizations of AGI, or a few proxies that capture different aspects of a domain of interest, we can have a summary index that better captures the fuzzy concept that we are trying to reason about than any one imperfect question.</p>
<p>That approach does make dealing with imprecise phenomena easier. But it increases costs, and a bundle of very similar questions can sometimes be dull to forecast on. It also doesn&rsquo;t solve this problem completely—some concepts, like &ldquo;disempowering humanity&rdquo;, still remain very ambiguous.</p>
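<p>As a toy version of that index idea, with the questions and probabilities invented, and a plain average standing in for whatever aggregation one prefers:</p>
<pre><code># Several overlapping operationalizations of one fuzzy concept,
# summarized as a single index. All questions and numbers are invented.
forecasts = {
    "passes a strong adversarial Turing test by 2040": 0.55,
    "automates 90 percent of 2020-era remote work by 2040": 0.35,
    "beats human experts on every major benchmark by 2040": 0.70,
}
index = sum(forecasts.values()) / len(forecasts)
print(round(index, 2))  # 0.53: blunter than any one question, but less
                        # hostage to a single imperfect operationalization
</code></pre>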
<p>Here are some high-level examples for which operationalization might still be a concern:</p>
<ul>
<li>You might want to ask about whether &ldquo;AI will go well&rdquo;. The answer depends on whether you compare this against &ldquo;humanity&rsquo;s maximum potential&rdquo; or against human extinction.</li>
<li>You might want to ask whether any AI startup will &ldquo;have powers akin to that of a world government&rdquo;.</li>
<li>You might want to ask about whether measures taken by AI labs are &ldquo;competent&rdquo;.</li>
<li>You might want to ask about whether some AI system is &ldquo;human-level&rdquo;, and find that there are wildly different operationalizations available for this.</li>
</ul>
<p>Here are some lower-level but more specific examples:</p>
<ul>
<li>Asking about FLOPs/$ seems like a tempting abstraction at first, because then you can estimate the FLOPs used if the largest experiment is willing to spend $100M, $1B, $10B, etc. However, the abstraction ends up breaking down a bit when you look at specifics; see the toy calculation after this list.
<ul>
<li>Dollars are unspecified: for example, consider a group like <a href="https://www.reuters.com/technology/inflection-ai-raises-13-bln-funding-microsoft-others-2023-06-29/">Inflection</a>, which raises $1B from NVIDIA and Microsoft, and pays NVIDIA and Microsoft $1B to buy the chips and build the datacenters. Then the FLOPs/$ is very under-defined. OpenAI&rsquo;s deal with Microsoft also makes their FLOPs/$ ambiguous. If China becomes involved, their ability to restrict emigration and the pre-eminent role of their government in the economy also make FLOPs/$ ambiguous.</li>
<li>FLOPs are under-specified. Do you mean 64-bit precision? 16-bit precision? 8-bit precision? Do you count a <a href="https://wikiless.nunosempere.com/wiki/Multiply%E2%80%93accumulate_operation?lang=en">multiply-accumulate</a> operation as one FLOP or two FLOPs?</li>
</ul>
</li>
<li>Asking about what percentage of labor is automated gets tricky when, instead of automating exactly the labor of the past, you automate a complement of it. For example, instead of automating a restaurant as is, you design the menu and experience that are most amenable to being automated. Portable music devices don&rsquo;t automate concert halls, they provide a different experience. These differences matter when asking short-term resolvable questions about automation.</li>
<li>You might have some notion of a &ldquo;leading lab&rdquo;. But operationalizing this is tricky, and simply enumerating current &ldquo;leading labs&rdquo; risks them being sidelined by an upstart, or that list not including important Chinese labs, etc. In our case, we have operationalized &ldquo;leading lab&rdquo; as &ldquo;a lab that has performed a training run within 2 OOM of the largest ever at the time of the training run, within the last 2 years&rdquo;, which leans on the inclusive side, but requires keeping good data on what the largest training run is at each point in time, like <a href="https://epochai.org/research/ml-trends">here</a>, which might not be available in the future.</li>
</ul>
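<p>As a toy calculation of how much the bookkeeping conventions alone can move a FLOPs/$ figure, with the hardware numbers invented:</p>
<pre><code># The same purchase yields two headline FLOPs/$ figures, depending only on
# whether a multiply-accumulate counts as one FLOP or two.
dollars = 1e9                  # assumed cluster cost
macs_per_second = 1e18         # assumed sustained multiply-accumulates per second
seconds = 100 * 24 * 3600      # a 100-day training run

flops_mac_as_two = macs_per_second * 2 * seconds
flops_mac_as_one = macs_per_second * 1 * seconds
print(flops_mac_as_two / dollars)  # 1.728e16 FLOPs/$
print(flops_mac_as_one / dollars)  # 8.64e15 FLOPs/$: a clean 2x gap, before even
# considering numeric precision, utilization, or who actually paid for the chips
</code></pre>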
<h3>Many questions don&rsquo;t resolve until it&rsquo;s already too late</h3>
<p>Some of the questions we are most interested in, like &ldquo;will AI permanently disempower humanity&rdquo;, &ldquo;will there be a catastrophe caused by an AI system that kills >5%, or >95% of the human population&rdquo;, or &ldquo;over the long-term, will humanity manage to harness AI to bring forth a flourishing future &amp; achieve humanity&rsquo;s potential?&rdquo; don&rsquo;t resolve until it&rsquo;s already too late.</p>
<p>This adds complications, because:</p>
<ul>
<li>Using short-term proxies rather than long-term outcomes brings its own problems</li>
<li>Question resolution after transformative AI poses incentive problems. E.g., the answer incentivized by &ldquo;will we get unimaginable wealth?&rdquo; is &ldquo;no&rdquo;, because if we do get unimaginable wealth, the reward is worth less.</li>
<li>You may have <a href="https://en.wikipedia.org/wiki/Prevention_paradox">&ldquo;prevention paradox&rdquo;</a> and fixed-point problems, where asking a probability reveals that some risk is high, after which you take measures to reduce that risk. You could have asked about the probability conditional on taking no measures, but then you can&rsquo;t resolve the forecasting question.</li>
<li>You can chain forecasts, e.g., ask &ldquo;what will [another group] predict that the probability of [some future outcome] is, in one year&rdquo;. But this adds layers of indirection and increases operational burdens.</li>
</ul>
<p>Another way to frame this is that some stances about how the future of AI will go are unfalsifiable until a hypothesized treacherous turn in which humanity dies, and their proponents otherwise don&rsquo;t hold strong enough views on short-term developments to be willing to bet on short-term events. That seems to be the takeaway from the <a href="https://www.lesswrong.com/s/n945eovrA3oDueqtq">late 2021 MIRI conversations</a>, which didn&rsquo;t result in a string of $100k bets. While this is a disappointing position to be in, I am not sure that forecasting can do much here beyond pointing it out.</p>
<h3>More dataset gathering is needed</h3>
<p>A pillar of Tetlock-style forecasting is looking at historical frequencies and extrapolating trends. For the topic of AI, it might be interesting to do some systematic data gathering, in the style of Our World In Data-type work, on measures like:</p>
<ul>
<li>Algorithmic improvement for [chess/image classification/weather prediction/&hellip;]: how much compute do you need for equivalent performance? what performance can you get for equivalent compute?</li>
<li>Price of FLOPs</li>
<li>Size of models</li>
<li>Valuation of AI companies, number of AI companies through time</li>
<li>Number of organizations which have trained a model within 1, 2 OOM of the largest model</li>
<li>Performance on various capability benchmarks</li>
<li>Very noisy proxies: Machine learning papers uploaded to arXiv, mentions in political speeches, mentions in American legislation, Google n-gram frequency, mentions in major newspaper headlines, patents, number of PhD students, number of Sino-American collaborations, etc.</li>
<li>Answers to AI Impacts' survey of ML researchers through time</li>
<li>Funding directed to AI safety through time</li>
</ul>
<p>Note that datasets for some of these exist, but systematic data collection and presentation in the style of <a href="https://ourworldindata.org/">Our World In Data</a> would greatly simplify creating forecasting pipelines about these questions, and also produce an additional tool for figuring out &ldquo;what is going on&rdquo; at a high level with AI. As an example, there is a difference between &ldquo;Katja Grace polls ML researchers every few years&rdquo;, and &ldquo;there are pipelines in place to make sure that that survey happens regularly, and forecasting questions are automatically created five years in advance and included in forecasting tournaments with well-known rewards&rdquo;. <a href="https://epochai.org/">Epoch</a> is doing some good work in this domain.</p>
<h3>Forecasting AI hits the limits of Bayesianism in general</h3>
<p>One could answer worries about Tetlock-style forecasting by saying: sure, that particular brand of forecasting isn&rsquo;t known to work on long-term predictions. But we have good theoretical reasons to think that Bayesianism is a good model of a perfect reasoner: see for example the review of <a href="https://en.wikipedia.org/wiki/Cox%27s_theorem">Cox&rsquo;s theorem</a> in the first few chapters of <a href="https://annas-archive.org/md5/ddec0cf1982afa288d61db3e1f7d9323">Probability Theory: The Logic of Science</a>. So the thing that we should be doing is some version of subjective Bayesianism: keeping track of evidence and expressing and sharpening our beliefs with further evidence. See <a href="https://nunosempere.com/blog/2022/08/31/on-cox-s-theorem-and-probabilistic-induction/">here</a> for a blog post making this argument at more length, though still informally.</p>
<p>But Bayesianism is a good model of a perfect reasoner with <em>infinite compute</em> and <em>infinite memory</em>, and in particular access to a bag of hypotheses which contains the true hypothesis. However, humans don&rsquo;t have infinite compute, and sometimes don&rsquo;t have the correct hypothesis in mind. <a href="https://en.wikipedia.org/wiki/Knightian_uncertainty">Knightian uncertainty</a> and <a href="https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions">Kuhnian revolutions</a><sup id="fnref:1"><a href="#fn:1" rel="footnote">1</a></sup>, <a href="https://en.wikipedia.org/wiki/Black_swan_theory">Black swans</a> or <a href="https://en.wikipedia.org/wiki/Ambiguity_aversion">ambiguity aversion</a> can be understood as consequences of usually being able to get by with approximate Bayesianism, but sometimes getting bitten by the boundedness of that approximation.</p>
<p>So there are some domains where we can get along by being approximately Bayesian, like coin flips and blackjack tables; domains where we pull our hair out and accept that we don&rsquo;t have infinite compute, like maybe some turbulent and chaotic physical systems, or trying to predict dreams; and then some domains in which our ability to predict is meaningfully improving with time, like weather forecasts, where we can throw supercomputers and PhD students at the problem, because we care.</p>
<p>Now the question is where AI in particular falls within that spectrum. Personally, I suspect that it is a domain in which we are likely to not have the correct hypothesis in our prior set of hypotheses. For example, observers in general, but also the <a href="https://intelligence.org/">Machine Intelligence Research Institute</a> in particular, failed to predict the rise of LLMs and to orient their efforts toward making such systems safer, or toward preventing such systems from coming into existence. I think this tweet, though maybe meant to be hurtful, is also informative about how tricky a domain predicting AI progress is:</p>
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">eliezer has IMO done more to accelerate AGI than anyone else.<br><br>certainly he got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc.</p>&mdash; Sam Altman (@sama) <a href="https://twitter.com/sama/status/1621621724507938816?ref_src=twsrc%5Etfw">February 3, 2023</a></blockquote>
<p> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>However, consider the following caveat: imagine that instead of being interested in AI progress, we were interested in social science, and concerned that it couldn&rsquo;t arrive at the correct conclusion in cases where that conclusion was Republican-flavored. Then one could notice that moving from p-values to likelihood ratios and Bayesian calculations wouldn&rsquo;t particularly help, since Bayesianism doesn&rsquo;t work unless your prior assigns a sufficiently high probability to the correct hypothesis. In this case, I think one easy mistake to make might be to just shrug and keep using p-values.</p>
<p>Similarly, for AI progress, one could notice that there is this subtle critique of forecasting and Bayesianism, and move to using, I don&rsquo;t know, scenario planning, which, arguendo, could be even worse: it could assume even more strongly that you know the shape of events to come, or fail to provide mechanisms for noticing that none of your hypotheses are worth much. I think that would be a mistake.</p>
<h3>Forecasting also has a bunch of other limitations as a genre</h3>
<p>You can see forecasting as a type of genre. In it, someone writes a forecasting question, that question is deemed sufficiently robust, and then forecasters produce probabilities on it. As a genre, it has some limitations. For instance, when curious about a topic, not all roads lead to forecasting questions, and working on a project such that you <em>have</em> to produce forecasting questions could be oddly limiting.</p>
<p>The conventions of the forecasting genre also dictate that forecasters will spend a fairly short amount of time researching before making a prediction. Partly this is a result of, for example, the scoring rule in Metaculus, which incentivizes forecasting on many questions. Partly this is because forecasting platforms don&rsquo;t generally pay their forecasters, and even those that are <a href="https://www.openphilanthropy.org/grants/applied-research-laboratory-for-intelligence-and-security-forecasting-platforms/">well funded</a> pay their forecasters badly, which leads to forecasting being a hobby, rather than a full-time occupation. If one thinks that some questions require one to dig deep, and that one will otherwise easily produce shitty forecasts, this might be a particularly worrying feature of the genre.</p>
<p>Perhaps also as a result of its unprofitability, the forecasting community has tended to see a large amount of churn, as hobbyist forecasters rise in their regular careers and the income they forgo by forecasting on online platforms grows. You also see this churn among employees of these forecasting platforms, where maybe someone creates some new project—e.g., Replication Markets, Metaculus' AI Progress Tournament, Ought&rsquo;s Elicit, etc.—but then that project dies as its principal person moves on to other topics.</p>
<p>Forecasting also makes use of scoring rules, which aim to reward forecasters such that they will be incentivized to input their true probabilities. Sadly, these often have the effect of incentivizing people to not collaborate and share information. This can be fixed by using more capital-intensive scoring rules that incentivize collaboration, like <a href="https://github.com/SamotsvetyForecasting/optimal-scoring">these ones</a> or by grouping forecasters into teams such that they will be incentivized to share information within a team.</p>
<h3>As an aside, here is a casual review of the track record of long-term predictions</h3>
<p>If we review the track record of superforecasters on longer term questions, we find that&hellip; there isn&rsquo;t that much evidence here—remember that the <a href="https://wikiless.nunosempere.com/wiki/Aggregative_Contingent_Estimation_Program?lang=en">ACE program</a> started in 2010. In <em>Superforecasting</em> (2015), Tetlock wrote:</p>
<blockquote><p>Taleb, Kahneman, and I agree there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious—“there will be conflicts”—and the odd lucky hits that are inevitable whenever lots of forecasters make lots of forecasts. These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. In my EPJ research, the accuracy of expert predictions declined toward chance five years out. And yet, this sort of forecasting is common, even within institutions that should know better.</p></blockquote>
<p>However, in p. 33 of <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4377599">Long-Range Subjective-Probability Forecasts of Slow-Motion Variables in World Politics: Exploring Limits on Expert Judgment</a> (2023), we see that the experts predicting &ldquo;slow-motion variables&rdquo; 25 years into the future attain a Brier score of 0.07, which isn&rsquo;t terrible.</p>
<p>Karnofsky, the erstwhile head-honcho of Open Philanthropy, <a href="https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/">spins</a> some research by Arb and others as saying that the track record of futurists is &ldquo;fine&rdquo;. <a href="https://danluu.com/futurist-predictions/">Here</a> is a more thorough post by Dan Luu which concludes that:</p>
<blockquote><p>&hellip;people who were into &ldquo;big ideas&rdquo; who use a few big hammers on every prediction combined with a cocktail party idea level of understanding of the particular subject to explain why a prediction about the subject would fall to the big hammer generally fared poorly, whether or not their favored big ideas were correct. Some examples of &ldquo;big ideas&rdquo; would be &ldquo;environmental doomsday is coming and hyperconservation will pervade everything&rdquo;, &ldquo;economic growth will create near-infinite wealth (soon)&rdquo;, &ldquo;Moore&rsquo;s law is supremely important&rdquo;, &ldquo;quantum mechanics is supremely important&rdquo;, etc. Another common trait of poor predictors is lack of anything resembling serious evaluation of past predictive errors, making improving their intuition or methods impossible (unless they do so in secret). Instead, poor predictors often pick a few predictions that were accurate or at least vaguely sounded similar to an accurate prediction and use those to sell their next generation of predictions to others.</p>
<p>By contrast, people who had (relatively) accurate predictions had a deep understanding of the problem and also tended to have a record of learning lessons from past predictive errors. Due to the differences in the data sets between this post and Tetlock&rsquo;s work, the details are quite different here. The predictors that I found to be relatively accurate had deep domain knowledge and, implicitly, had access to a huge amount of information that they filtered effectively in order to make good predictions. Tetlock was studying people who made predictions about a wide variety of areas that were, in general, outside of their areas of expertise, so what Tetlock found was that people really dug into the data and deeply understood the limitations of the data, which allowed them to make relatively accurate predictions. But, although the details of how people operated are different, at a high-level, the approach of really digging into specific knowledge was the same.</p></blockquote>
<h3>In comparison with other mechanisms for making sense of future AI developments, forecasting does OK.</h3>
<p>Here are some mechanisms that the Effective Altruism community has historically used to try to make sense of possible dangers stemming from future AI developments:</p>
<ul>
<li>Books, like Bostrom&rsquo;s <em>Superintelligence</em>, which focused on the abstract properties of highly intelligent and capable agents in the limit.</li>
<li><a href="https://www.openphilanthropy.org/research/?q=&amp;focus-area%5B%5D=potential-risks-advanced-ai&amp;content-type%5B%5D=research-reports">Reports</a> by Open Philanthropy. They either try to model AI progress in some detail, like <a href="https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines">example 1</a>, or look at priors on technological development, like <a href="https://www.openphilanthropy.org/research/semi-informative-priors-over-ai-timelines/">example 2</a>.</li>
<li>Mini think tanks, like Rethink Priorities, Epoch or AI Impacts, which produce their own research and reports.</li>
<li>Larger think tanks, like CSET, which produce reports like <a href="https://cset.georgetown.edu/publication/future-indices/">this one</a> on Future Indices.</li>
<li>Online discussion on lesswrong.com, which typically assumes things like: that intelligence gains would be fast and explosive, that we should aim to design a mathematical construction that guarantees safety, that iteration would not be advisable in the face of fast intelligence gains, etc.</li>
<li>Occasionally, theoretical or mathematical arguments or models of risk.</li>
<li>One-off projects, like Drexler&rsquo;s <a href="https://www.fhi.ox.ac.uk/reframing/">Comprehensive AI systems</a></li>
<li>Questions on forecasting platforms, like Metaculus, that try to solidly operationalize possible AI developments and dangers, and ask their forecasters to anticipate when and whether they will happen.</li>
<li>Writeups from forecasting groups, like <a href="https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts">Samotsvety</a></li>
<li>More recently, the Forecasting Research Institute&rsquo;s <a href="https://forecastingresearch.org/xpt">existential risk tournament/experiment writeup</a>, which has tried to translate geopolitical forecasting mechanisms to predicting AI progress, with mixed success.</li>
<li>Deferring to intellectuals, ideologues, and cheerleaders, like Toby Ord, Yudkowsky or MacAskill.</li>
</ul>
<p>None of these options, as they currently exist, seem great. Forecasting has the hurdles discussed above, but maybe other mechanisms have even worse downsides, particularly the more pundit-like ones. Conversely, forecasting will be worse than deferring to a brilliant theoretical mind that is able to grasp the dynamics and subtleties of future AI development, like perhaps Drexler&rsquo;s on a good day.</p>
<p>Anyways, you might think that this forecasting thing shows potential. Were you a billionaire, money would not be a limitation for you, so&hellip;</p>
<h3>In this situation, here are some strategies of which you might avail yourself</h3>
<h4>A. Accept the Faustian bargain</h4>
<ol>
<li>Make a bunch of short-term and long-term forecasting questions on AI progress</li>
<li>Wait for the short-term forecasting questions to resolve</li>
<li>Weight the forecasts for the long-term questions according to accuracy on the short-term questions</li>
</ol>
<p>This is a Faustian bargain because of the reasons reviewed above, chiefly that short-term forecasting performance is not a guarantee of longer-term forecasting performance. A cheap version of this would be to look at the best short-term forecasters on the AI categories on Metaculus, and report their probabilities on a few AI and existential risk questions, which would be more interpretable than the current opaque &ldquo;Metaculus prediction&rdquo;.</p>
<p>If you think that your other methods of making sense of what&rsquo;s going on are sufficiently bad, you could choose this and hope for the best? Or, alternatively, you could anchor your beliefs on a weighted aggregate of the best short-term forecasters and the most convincing theoretical views. Maybe things will be fine?</p>
<h4>B. Attempt to do a Bayesianism</h4>
<p>Go to the effort of rigorously formulating hypotheses, then keep track of incoming evidence for each hypothesis. If a new hypothesis comes in, try to do some version of <a href="https://nunosempere.com/blog/2023/02/04/just-in-time-bayesianism/">just-in-time Bayesianism</a>, i.e., monkey-patch it after the fact. Once you are specifying your beliefs numerically, you can deploy some cute incentive mechanisms and <a href="https://github.com/SamotsvetyForecasting/optimal-scoring/blob/master/3-amplify-bayesian/amplify-bayesian.pdf">reward people who change your mind</a>.</p>
<p>Hope that keeping track of hypotheses about the development of AI at least gives you some discipline, and enables you to shed untrue hypotheses or frames a bit earlier than you otherwise would have. Have the discipline to translate the worldviews of various pundits into specific probabilities<sup id="fnref:2"><a href="#fn:2" rel="footnote">2</a></sup>, and listen to them less when their predictions fail to come true. And hope that going to the trouble of doing things that way allows you to anticipate stuff 6 months to 2 years sooner than you would have otherwise, and that it is worth the cost.</p>
<h4>C. Invest in better prediction pipelines as a whole</h4>
<p>Try to build up some more speculative and <a href="https://nunosempere.com/blog/2023/07/19/better-harder-faster-stronger/">formidable</a> type of forecasting that can deal with the hurdles above. Be more explicit about the types of decisions that you want better foresight for, realize that you don&rsquo;t have the tools you need, and build someone up to be that for you.</p>
<div class="footnotes">
<hr/>
<ol>
<li id="fn:1">
To spell this out more clearly, Kuhn was looking at the structure of scientific revolutions, and he noticed that you have these &ldquo;paradigm changes&rdquo; every once in a while. To a naïve Bayesian, those paradigm changes are kinda confusing, and shouldn&rsquo;t have any special status. You should just have hypotheses, and they should just rise and fall in likelihood according to Bayes&rsquo; rule. But as a Bayesian who knows he has finite compute/memory, you can think of Kuhnian revolutions as encountering a true hypothesis which was outside your previous hypothesis space, and having to recalculate. On this topic, see <a href="https://nunosempere.com/blog/2023/02/04/just-in-time-bayesianism/">Just-in-time Bayesianism</a> or <a href="https://nunosempere.com/blog/2023/03/01/computable-solomonoff/">A computable version of Solomonoff induction</a>.<a href="#fnref:1" rev="footnote">&#8617;</a></li>
<li id="fn:2">
Back in the day, Tetlock received a <a href="https://www.openphilanthropy.org/grants/university-of-pennsylvania-philip-tetlock-on-forecasting/#2-about-the-grant">grant</a> to &ldquo;systematically convert vague predictions made by prominent pundits into explicit numerical forecasts&rdquo;, but I haven&rsquo;t been able to track what happened to it, and I suspect it never happened.<a href="#fnref:2" rev="footnote">&#8617;</a></li>
</ol>
</div>
<p>
<section id='isso-thread'>
<noscript>JavaScript needs to be activated to view comments.</noscript>
</section>
</p>

@ -1,31 +1,40 @@
## In 2023...
- [Forecasting Newsletter for November and December 2022](https://nunosempere.com/2023/01/07/forecasting-newsletter-november-december-2022)
- [Can GPT-3 produce new ideas? Partially automating Robin Hanson and others](https://nunosempere.com/2023/01/11/can-gpt-produce-ideas)
- [Prevalence of belief in "human biodiversity" amongst self-reported EA respondents in the 2020 SlateStarCodex Survey](https://nunosempere.com/2023/01/16/hbd-ea)
- [Interim Update on QURI's Work on EA Cause Area Candidates](https://nunosempere.com/2023/01/19/interim-update-cause-candidates)
- [There will always be a Voigt-Kampff test](https://nunosempere.com/2023/01/21/there-will-always-be-a-voigt-kampff-test)
- [My highly personal skepticism braindump on existential risk from artificial intelligence.](https://nunosempere.com/2023/01/23/my-highly-personal-skepticism-braindump-on-existential-risk)
- [An in-progress experiment to test how Laplace's rule of succession performs in practice.](https://nunosempere.com/2023/01/30/an-in-progress-experiment-to-test-how-laplace-s-rule-of)
- [Effective Altruism No Longer an Expanding Empire.](https://nunosempere.com/2023/01/30/ea-no-longer-expanding-empire)
- [no matter where you stand](https://nunosempere.com/2023/02/03/no-matter-where-you-stand)
- [Just-in-time Bayesianism](https://nunosempere.com/2023/02/04/just-in-time-bayesianism)
- [Impact markets as a mechanism for not losing your edge](https://nunosempere.com/2023/02/07/impact-markets-sharpen-your-edge)
- [Straightforwardly eliciting probabilities from GPT-3](https://nunosempere.com/2023/02/09/straightforwardly-eliciting-probabilities-from-gpt-3)
- [Inflation-proof assets](https://nunosempere.com/2023/02/11/inflation-proof-assets)
- [A Bayesian Adjustment to Rethink Priorities' Welfare Range Estimates](https://nunosempere.com/2023/02/19/bayesian-adjustment-to-rethink-priorities-welfare-range-estimates)
- [A computable version of Solomonoff induction](https://nunosempere.com/2023/03/01/computable-solomonoff)
- [Use of &ldquo;I'd bet&rdquo; on the EA Forum is mostly metaphorical](https://nunosempere.com/2023/03/02/metaphorical-bets)
- [Winners of the Squiggle Experimentation and 80,000 Hours Quantification Challenges](https://nunosempere.com/2023/03/08/winners-of-the-squiggle-experimentation-and-80-000-hours)
- [What happens in Aaron Sorkin's *The Newsroom*](https://nunosempere.com/2023/03/10/aaron-sorkins-newsroom)
- [Estimation for sanity checks](https://nunosempere.com/2023/03/10/estimation-sanity-checks)
- [Find a beta distribution that fits your desired confidence interval](https://nunosempere.com/2023/03/15/fit-beta)
- [Some estimation work in the horizon](https://nunosempere.com/2023/03/20/estimation-in-the-horizon)
- [Soothing software](https://nunosempere.com/2023/03/27/soothing-software)
- [What is forecasting?](https://nunosempere.com/2023/04/03/what-is-forecasting)
- [Things you should buy, quantified](https://nunosempere.com/2023/04/06/things-you-should-buy-quantified)
- [General discussion thread](https://nunosempere.com/2023/04/08/general-discussion-april)
- [A Soothing Frontend for the Effective Altruism Forum ](https://nunosempere.com/2023/04/18/forum-frontend)
- [A flaw in a simple version of worldview diversification](https://nunosempere.com/2023/04/25/worldview-diversification)
- [Review of Epoch's *Scaling transformative autoregressive models*](https://nunosempere.com/2023/04/28/expert-review-epoch-direct-approach)
- [Updating in the face of anthropic effects is possible](https://nunosempere.com/2023/05/11/updating-under-anthropic-effects)
- [Forecasting Newsletter for November and December 2022](https://nunosempere.com/blog/2023/01/07/forecasting-newsletter-november-december-2022)
- [Can GPT-3 produce new ideas? Partially automating Robin Hanson and others](https://nunosempere.com/blog/2023/01/11/can-gpt-produce-ideas)
- [Prevalence of belief in "human biodiversity" amongst self-reported EA respondents in the 2020 SlateStarCodex Survey](https://nunosempere.com/blog/2023/01/16/hbd-ea)
- [Interim Update on QURI's Work on EA Cause Area Candidates](https://nunosempere.com/blog/2023/01/19/interim-update-cause-candidates)
- [There will always be a Voigt-Kampff test](https://nunosempere.com/blog/2023/01/21/there-will-always-be-a-voigt-kampff-test)
- [My highly personal skepticism braindump on existential risk from artificial intelligence.](https://nunosempere.com/blog/2023/01/23/my-highly-personal-skepticism-braindump-on-existential-risk)
- [An in-progress experiment to test how Laplace's rule of succession performs in practice.](https://nunosempere.com/blog/2023/01/30/an-in-progress-experiment-to-test-how-laplace-s-rule-of)
- [Effective Altruism No Longer an Expanding Empire.](https://nunosempere.com/blog/2023/01/30/ea-no-longer-expanding-empire)
- [no matter where you stand](https://nunosempere.com/blog/2023/02/03/no-matter-where-you-stand)
- [Just-in-time Bayesianism](https://nunosempere.com/blog/2023/02/04/just-in-time-bayesianism)
- [Impact markets as a mechanism for not losing your edge](https://nunosempere.com/blog/2023/02/07/impact-markets-sharpen-your-edge)
- [Straightforwardly eliciting probabilities from GPT-3](https://nunosempere.com/blog/2023/02/09/straightforwardly-eliciting-probabilities-from-gpt-3)
- [Inflation-proof assets](https://nunosempere.com/blog/2023/02/11/inflation-proof-assets)
- [A Bayesian Adjustment to Rethink Priorities' Welfare Range Estimates](https://nunosempere.com/blog/2023/02/19/bayesian-adjustment-to-rethink-priorities-welfare-range-estimates)
- [A computable version of Solomonoff induction](https://nunosempere.com/blog/2023/03/01/computable-solomonoff)
- [Use of &ldquo;I'd bet&rdquo; on the EA Forum is mostly metaphorical](https://nunosempere.com/blog/2023/03/02/metaphorical-bets)
- [Winners of the Squiggle Experimentation and 80,000 Hours Quantification Challenges](https://nunosempere.com/blog/2023/03/08/winners-of-the-squiggle-experimentation-and-80-000-hours)
- [What happens in Aaron Sorkin's *The Newsroom*](https://nunosempere.com/blog/2023/03/10/aaron-sorkins-newsroom)
- [Estimation for sanity checks](https://nunosempere.com/blog/2023/03/10/estimation-sanity-checks)
- [Find a beta distribution that fits your desired confidence interval](https://nunosempere.com/blog/2023/03/15/fit-beta)
- [Some estimation work in the horizon](https://nunosempere.com/blog/2023/03/20/estimation-in-the-horizon)
- [Soothing software](https://nunosempere.com/blog/2023/03/27/soothing-software)
- [What is forecasting?](https://nunosempere.com/blog/2023/04/03/what-is-forecasting)
- [Things you should buy, quantified](https://nunosempere.com/blog/2023/04/06/things-you-should-buy-quantified)
- [General discussion thread](https://nunosempere.com/blog/2023/04/08/general-discussion-april)
- [A Soothing Frontend for the Effective Altruism Forum ](https://nunosempere.com/blog/2023/04/18/forum-frontend)
- [A flaw in a simple version of worldview diversification](https://nunosempere.com/blog/2023/04/25/worldview-diversification)
- [Review of Epoch's *Scaling transformative autoregressive models*](https://nunosempere.com/blog/2023/04/28/expert-review-epoch-direct-approach)
- [Updating in the face of anthropic effects is possible](https://nunosempere.com/blog/2023/05/11/updating-under-anthropic-effects)
- [Relative values for animal suffering and ACE Top Charities](https://nunosempere.com/blog/2023/05/29/relative-value-animals)
- [People's choices determine a partial ordering over people's desirability](https://nunosempere.com/blog/2023/06/17/ordering-romance)
- [Betting and consent](https://nunosempere.com/blog/2023/06/26/betting-consent)
- [Some melancholy about the value of my work depending on decisions by others beyond my control](https://nunosempere.com/blog/2023/07/13/melancholy)
- [Why are we not harder, better, faster, stronger?](https://nunosempere.com/blog/2023/07/19/better-harder-faster-stronger)
- [squiggle.c](https://nunosempere.com/blog/2023/08/01/squiggle.c)
- [Webpages I am making available to my corner of the internet](https://nunosempere.com/blog/2023/08/14/software-i-am-hosting)
- [Incorporate keeping track of accuracy into X (previously Twitter)](https://nunosempere.com/blog/2023/08/19/keep-track-of-accuracy-on-twitter)
- [Quick thoughts on Manifund's application to Open Philanthropy](https://nunosempere.com/blog/2023/09/05/manifund-open-philanthropy)

@ -7,7 +7,7 @@ for dir in */*/*
do
index_path="$(pwd)/$dir/index.md"
title="$(cat $index_path | head -n 1)"
url="https://nunosempere.com/$year/$dir"
url="https://nunosempere.com/blog/$year/$dir"
# echo $dir
# echo $index_path
# echo $title

@ -1,13 +1,14 @@
Consulting
==========
<img src="https://images.nunosempere.com/consulting/shapley.png" class="img-frontpage-center">
This page presents my core competencies, my consulting rates, my description of my ideal client, two testimonials, and a few further thoughts. I can be reached at nuno.semperelh@protonmail.com.
Shapley Maximizers is a niche estimation, evaluation and impact auditing consultancy run by myself, Nuño Sempere, but increasingly also with support from collaborators. This page presents our core competencies, consulting rates, description of ideal clients, testimonials, and a few further thoughts.
In short, we are looking to support people and smallish organizations who are already producing value, and who want to add clarity and improve their prioritization through estimation, measurement and good judgment. You can reach out to nuno.semperelh@protonmail.com.
### Core competencies
#### Researching:
#### Research
Some past research outputs that I am proud of are
Some past research outputs:
- [Incentive Problems With Current Forecasting Competitions.](https://forum.effectivealtruism.org/posts/ztmBA8v6KvGChxw92/incentive-problems-with-current-forecasting-competitions)
- [Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems)](https://www.lesswrong.com/posts/6bSjRezJDxR2omHKE/real-life-examples-of-prediction-systems-interfering-with)
@ -42,16 +43,21 @@ For a smaller example, in the past I've really enjoyed doing subjective estimate
I am happy to host workshops, or advise on tournament or forecasting platform design. If you are looking for specific forecasts, you probably want to hire [Samotsvety](https://samotsvety.org/) instead, which can be reached out at info@samotsvety.org.
### Advice to individuals
On occasion, I've enjoyed talking with entrepreneurial individuals about how they could make better career decisions, or prioritize better within the work they are already doing. In some cases this has led to ongoing engagements. If you have a need for incisive advice on your career, feel free to reach out.
### Rates
I deeply value getting hired for more hours, because each engagement has some overhead cost. Therefore, I am deeply discounting buying a larger number of hours.
I value getting hired for more hours, because each engagement has some negotiation, preparation and administrative burden. Therefore, I offer discounts for buying a larger number of hours.
| # of hours | Cost | Example |
|------------|-------|------------------------------------------------------------------------------------------------|
| 1 hour | ~$200 | Talk to me for an hour about a project you want my input on, organize a forecasting workshop |
| 10 hours | ~$1.5k | Research that draws on my topics of expertise, where I have already thought about the topic, and just have to write it down. For example, [this Bayesian adjustment to Rethink Priorities](https://nunosempere.com/blog/2023/02/19/bayesian-adjustment-to-rethink-priorities-welfare-range-estimates/) |
| 100 hours | ~$10k | An [evaluation of an organization](https://forum.effectivealtruism.org/posts/kTLR23dFRB5pJryvZ/external-evaluation-of-the-ea-wiki), an early version of [metaforecast](https://metaforecast.org), two editions of the [forecasting newsletter](https://forecasting.substack.com/) |
| 1000 hours | reach out | Large research project, an ambitious report on a novel topic, the current iteration of [metaforecast](https://metaforecast.org) |
| Size of project | Cost | ~hours | Example |
| ------- | ---------- | ----- | --- |
| One-off (discounted) | $100 | 1h | You, an early career person, talk with me for an hour about a career decision you are about to make, about a project you want my input on, etc. |
| Small | $500 | 2h | You, a titan of industry, pick my brain about a project you want my input on, before which I spend a few hours of preparation. Or, you have me organize a two-hour forecasting workshop and a one-hour chat with your underlings. |
| Focused | $2k | 10h | Research that draws on my topics of expertise, where I have already thought about the topic, and just have to write it down. For example, [this Bayesian adjustment to Rethink Priorities](https://nunosempere.com/blog/2023/02/19/bayesian-adjustment-to-rethink-priorities-welfare-range-estimates/) |
| Major | $15k | 100h | An [evaluation of an organization](https://forum.effectivealtruism.org/posts/kTLR23dFRB5pJryvZ/external-evaluation-of-the-ea-wiki), many [shallow evaluations](https://forum.nunosempere.com/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations), or an early version of [metaforecast](https://metaforecast.org), two editions of the [forecasting newsletter](https://forecasting.substack.com/) |
| Custom | reach out | 1000h | Large research project, an ambitious report on a novel topic, the current iteration of [metaforecast](https://metaforecast.org) |
### Description of client
@ -72,9 +78,15 @@ My anti-client on the other hand would be someone who has already made up their
>
> &mdash;Jaime Sevilla
<br>
> Nuño took on an extremely underspecified and wicked problem and delivered to everyone's satisfaction. If you're reading this page you probably already know about his relentless clarity, but you might not be aware of how much he cares about the work and how good at reading people he is.
>
> &mdash;Gavin Leech
### Further details
- I am very amenable to taking on projects that would require more than one person, because I am able to bring in collaborators. I would discuss this with the potential client beforehand.
- Operationally, payouts may go either to the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/) (a 501(c)(3) charity in the US), or to my own consultancy, Shapley Maximizers ÖU, to be decided.
[^1]: Note that I don't think all crypto projects are grifty, and I in fact view crypto as a pretty promising area of innovation. It's just that for the last couple of years, if you wanted to grift, crypto was a good place to do so. And in fact a crypto project that wanted to figure out how to produce more value in the world and capture some fraction of it could be a great client.

@ -1,10 +1,16 @@
## Forecasting
> Magic ball tell us all.
>
> — Misha Yagudin
### Samotsvety
My forecasting group is known as Samotsvety. You can read more about it [here](https://samotsvety.org/).
### Newsletter
I was reasonably well-known for my monthly forecasting _newsletter_. It can be found both [on Substack](https://forecasting.substack.com/) and [on the EA Forum](https://forum.effectivealtruism.org/s/HXtZvHqsKwtAYP6Y7). Besides its monthly issues, I've also written:
- [Tracking the money flows in forecasting](https://nunosempere.com/blog/2022/11/06/forecasting-money-flows/)
- [Looking back at 2021](https://forecasting.substack.com/p/looking-back-at-2021)
@ -13,7 +19,7 @@ I am perhaps most well-known for my monthly forecasting _newsletter_. It can be
### Research
I have a few _in-depth pieces_ on forecasting, many written during my time at the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/) and some earlier as an independent researcher:
- [Alignment Problems With Current Forecasting Platforms](https://arxiv.org/abs/2106.11248)
- [Amplifying generalist research via forecasting models of impact and challenges](https://forum.effectivealtruism.org/posts/ZCZZvhYbsKCRRDTct/part-1-amplifying-generalist-research-via-forecasting-models) and [part 2](https://forum.effectivealtruism.org/posts/ZTXKHayPexA6uSZqE/part-2-amplifying-generalist-research-via-forecasting).
@ -22,9 +28,13 @@ As part of my research at the [Quantified Uncertainty Research Institute](https:
- [Introducing Metaforecast: A Forecast Aggregator and Search Tool](https://forum.effectivealtruism.org/posts/tEo5oXeSNcB3sYr8m/introducing-metaforecast-a-forecast-aggregator-and-search)
- [Introduction to Fermi estimates](https://nunosempere.com/blog/2022/08/20/fermi-introduction/)
- [Pathways to impact for forecasting and evaluation](https://forum.effectivealtruism.org/posts/oXrTQpZyXkEbTBfB6/pathways-to-impact-for-forecasting-and-evaluation)
- [Hurdles of using forecasting as a tool for making sense of AI progress](https://nunosempere.com/blog/2023/11/07/hurdles-forecasting-ai/)
I also have a few _minor pieces_:
- [Betting and consent](https://nunosempere.com/blog/2023/06/26/betting-consent/)
- [Incorporate keeping track of accuracy into X (previously Twitter)](https://nunosempere.com/blog/2023/08/19/keep-track-of-accuracy-on-twitter/)
- [Use of "I'd bet" on the EA Forum is mostly metaphorical](https://nunosempere.com/blog/2023/03/02/metaphorical-bets/)
- [Metaforecast update: Better search, capture functionality, more platforms.](https://www.lesswrong.com/posts/5hugQzRhdGYc6ParJ/metaforecast-update-better-search-capture-functionality-more)
- [Incentive Problems With Current Forecasting Competitions](https://forum.effectivealtruism.org/posts/ztmBA8v6KvGChxw92/incentive-problems-with-current-forecasting-competitions)
- [Impact markets as a mechanism for not losing your edge](https://nunosempere.com/blog/2023/02/07/impact-markets-sharpen-your-edge/)
@ -44,18 +54,19 @@ I also maintain [this database](https://docs.google.com/spreadsheets/d/1XB1GHfizN
- [Just-in-time Bayesianism](https://nunosempere.com/blog/2023/02/04/just-in-time-bayesianism/)
- [A computable version of Solomonoff induction](https://nunosempere.com/blog/2023/03/01/computable-solomonoff/)
I also discussed some of these considerations in [Hurdles of using forecasting as a tool for making sense of AI progress](https://nunosempere.com/blog/2023/11/07/hurdles-forecasting-ai/). They turn out to be important.
### Funding
I have occasionally advised philanthropic funders, mostly from the effective altruism community, on forecasting-related topics and projects.
I have run a few contests:
- [Announcing the Forecasting Innovation Prize](https://forum.effectivealtruism.org/posts/8Nwy3tX2WnDDSTRoi/announcing-the-forecasting-innovation-prize)
- [We are giving $10k as forecasting micro-grants](https://forum.effectivealtruism.org/posts/oqFa8obfyEmvD79Jn/we-are-giving-usd10k-as-forecasting-micro-grants)
- [$1,000 Squiggle Experimentation Challenge](https://forum.effectivealtruism.org/posts/ZrWuy2oAxa6Yh3eAw/usd1-000-squiggle-experimentation-challenge)
- [$5k challenge to quantify the impact of 80,000 hours' top career paths](https://forum.effectivealtruism.org/posts/noDYmqoDxYk5TXoNm/usd5k-challenge-to-quantify-the-impact-of-80-000-hours-top)
- I'm in the process of regranting $50k on [Manifund](https://manifund.org/NunoSempere)
### Squiggle
I did a bunch of work around Squiggle, a language for creating quick probability estimates. You can read about this [here](https://forum.effectivealtruism.org/topics/squiggle). Unsatisfied with it, I [tried out many different languages](https://git.nunosempere.com/personal/time-to-botec) and [wrote my own version in C](https://git.nunosempere.com/personal/squiggle.c).
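To give a flavor of what such quick estimates look like, here is a minimal sketch in plain JavaScript; the interval-to-lognormal convention, the 90% interpretation, and all the numbers are illustrative assumptions rather than Squiggle's actual semantics:

```js
// Monte Carlo sketch of a quick probability estimate, Squiggle-style.
// sampleLognormal treats [low, high] as a rough 90% confidence interval.
function sampleLognormal(low, high) {
  const mu = (Math.log(low) + Math.log(high)) / 2;
  const sigma = (Math.log(high) - Math.log(low)) / (2 * 1.6449); // a 90% CI spans about +/- 1.6449 standard deviations
  // Box-Muller transform for a standard normal sample.
  const z = Math.sqrt(-2 * Math.log(1 - Math.random())) * Math.cos(2 * Math.PI * Math.random());
  return Math.exp(mu + sigma * z);
}

// Product of two uncertain quantities, e.g. (hours saved) * (value per hour).
const samples = Array.from({ length: 100000 }, () => sampleLognormal(1, 10) * sampleLognormal(5, 50));
samples.sort((a, b) => a - b);
console.log({
  p5: samples[Math.floor(samples.length * 0.05)],
  p50: samples[Math.floor(samples.length * 0.50)],
  p95: samples[Math.floor(samples.length * 0.95)],
});
```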

@ -1,5 +1,7 @@
## Gossip
_2023/08/21_: I've gotten [$50k](https://manifund.org/NunoSempere) to regrant through Manifund. For now, pitches welcome: I can be contacted at nuno dot sempere dot lh at protonmail dot com.
_2023/05/06_: I have added a [consulting](/consulting) page.
_2023/01/29_: I've updated the forecasting and research pages on this website; they should now be a bit more up to date.

@ -1,4 +1,4 @@
I'm Nu&#xF1;o Sempere. I [do research](https://nunosempere.com/blog), [write software](https://github.com/NunoSempere/), and [predict the future](https://samotsvety.org/).
<img src="https://images.nunosempere.com/top/me.jpg" alt="image of myself" class="img-frontpage-center">
@ -10,4 +10,4 @@ I'm Nu&#xF1;o Sempere. I [do research](https://quantifieduncertainty.org/), [wri
### Readers might also wish to...
...read the [gossip](/gossip) page, visit my [blog](/blog).

@ -5,4 +5,5 @@ This file is used for testing the werc framework. The symbols below are probably
En un lugar de la Mancha de cuyo nombre no quiero acordarme no ha mucho tiempo que vivía
........
...
........
........

@ -0,0 +1,8 @@
## About
A repository for tests & diagnostics.
The symbols below are probably arbitrary:
---
un caballero de los de lanza en astillero

@ -0,0 +1,212 @@
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.0.0-alpha1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-csv/0.71/jquery.csv-0.71.min.js"></script>
<!--
Sources:
+ https://gist.github.com/cmatskas/8725a6ee4f5f1a8e1cea
+ cdnjs
-->
<script type="text/javascript">
$(document).ready(function() {
// The event listener for the file upload
document.getElementById('txtFileUpload').addEventListener('change', upload, false);
document.getElementById('txtFileUpload').addEventListener('click', reset, false);
function reset(){
document.getElementById("txtFileUpload").value = null;
// This way, the event change fires even if you upload the same file twice
}
// Method that checks that the browser supports the HTML5 File API
function browserSupportFileUpload() {
var isCompatible = false;
if (window.File && window.FileReader && window.FileList && window.Blob) {
isCompatible = true;
}
return isCompatible;
}
// Method that reads and processes the selected file
function upload(evt) {
if (!browserSupportFileUpload()) {
alert('The File APIs are not fully supported in this browser!');
} else {
let data = null;
var file = evt.target.files[0];
var reader = new FileReader();
reader.readAsText(file);
reader.onload = function(event) {
var csvData = event.target.result;
data = $.csv.toArrays(csvData);
if (data && data.length > 0) {
// alert('Imported -' + data.length + '- rows successfully!');
wrapperProportionalApprovalVoting(data);
} else {
alert('No data to import!');
}
};
reader.onerror = function() {
alert('Unable to read ' + file.fileName);
};
}
}
function wrapperProportionalApprovalVoting(data){
let dataColumn1 = data.map(x => x[1]);
// This gets the second column (index 1; columns start at 0), which holds the votes.
// Note: writing data[][1] would break this without throwing an error in the browser.
let dataColumn1Split = dataColumn1.map( element => element.split(", "));
// One row of the first column might be "Candidate1, Candidate2".
// This transforms it to ["Candidate1", "Candidate2"]
let uniqueCandidates = findUnique(dataColumn1Split);
// Finds all the candidates
// In this voting method, all voters start with a weight of 1, which changes as candidates are elected
// So that voters who have had one of their candidates elected have less influence for the next candidates.
let weights = Array(dataColumn1Split.length).fill(1);
// Repeatedly find the most popular candidate given the weights, then update the weights.
let n = parseInt(document.getElementById("numWinners").value, 10);
let winners = [];
for(let i = 0; i < n; i++){
let newWinner = findTheNextMostPopularOneGivenTheWeights(dataColumn1Split, weights, uniqueCandidates, winners);
winners.push(newWinner);
weights = updateWeightsGivenTheNewWinner(dataColumn1Split, weights, newWinner);
}
// Display the winners.
displayWinners(winners);
}
function displayWinners(winners){
// Render a header (once) and an ordered list with the winners into the results div.
let oldOL = document.getElementsByTagName("OL")[0];
if(oldOL == undefined){
let headerH3 = document.createElement("h3");
headerH3.innerHTML = "Winners under Proportional Approval Voting:";
document.getElementById("results").appendChild(headerH3);
} else {
// Remove the list from a previous upload before rendering the new one.
oldOL.remove();
}
let orderedList = document.createElement("OL"); // Creates an ordered list
for(let i = 0; i < winners.length; i++){
let HTMLWinner = document.createElement("li");
HTMLWinner.appendChild(document.createTextNode(winners[i]));
orderedList.appendChild(HTMLWinner);
}
// Always append to the results div, so repeated uploads render in the same place.
document.getElementById("results").appendChild(orderedList);
}
function findTheNextMostPopularOneGivenTheWeights(arrayOfArrays, weights, uniqueCandidates, winners){
let popularity = Array(uniqueCandidates.length).fill(0);
for(let i = 0; i < uniqueCandidates.length; i++){
for(let j = 1; j < arrayOfArrays.length; j++){
// j = 1 because the first row is a header, not a ballot
if(arrayOfArrays[j].includes(uniqueCandidates[i])){
popularity[i] += 1/weights[j];
}
}
}
let maxPopularity = 0;
let winner = undefined;
for(let i = 0; i < uniqueCandidates.length; i++){
if(popularity[i] >= maxPopularity && !winners.includes(uniqueCandidates[i])){
// Note: this breaks ties pretty arbitrarily; the last candidate
// with maximal popularity takes the tie.
winner = uniqueCandidates[i];
maxPopularity = popularity[i];
}
}
return winner;
}
// Ballots that approved the new winner get a larger weight (used as a divisor),
// so they count for less when electing the remaining seats.
function updateWeightsGivenTheNewWinner(arrayOfArrays, weights, newWinner){
for(let i=0; i<arrayOfArrays.length; i++){
if(arrayOfArrays[i].includes(newWinner)){
weights[i] = weights[i]+1;
}
}
return weights;
}
function findUnique(arrayOfArrays){
let uniqueElements = [];
for(let i = 1; i<arrayOfArrays.length; i++){ // We start with the second row (i=1, instead of i=0, because we take the first row to be a header)
for(let j=0; j<arrayOfArrays[i].length; j++){
if(!uniqueElements.includes(arrayOfArrays[i][j])){
uniqueElements.push(arrayOfArrays[i][j]);
}
}
}
return uniqueElements;
}
});
</script>
<h1>Proportional Approval Voting MVP</h1>
<h3>What is this? How does this work?</h3>
<p>This is the simplest version of a program which computes the result of an election, under the <a href="https://www.electionscience.org/learn/electoral-system-glossary/#proportional_approval_voting" target="_blank">Proportional Approval Voting</a> method, for elections which have one or more winners (e.g., presidential elections, but also board member elections).</p>
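<p>As a worked illustration (with made-up ballots): suppose three voters approve {A, B}, {A}, and {B, C}, and we want two winners. In the first round A and B are each approved by two voters; say the tie breaks toward A. The two ballots that approved A now have weight 2, so in the second round B scores 1/2 + 1 = 1.5 while C scores 1, and B takes the second seat.</p>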
<p>It takes a CSV (comma-separated values) file with the same format as <a href="https://docs.google.com/spreadsheets/d/11pBOP6UJ8SSaHIY-s4dYwgBr4PHodh6cIXf-D4yl7HU/edit?usp=sharing" target="_blank">this one</a>, which might be produced by a Google Form like <a href="https://docs.google.com/forms/d/1_-B5p8ePHnE1jXTGVT_kfRrMRqJuxmm8DPKn-MR1Pok/edit" target="_blank">this one</a>.</p>
<p>It computes the result using client-side JavaScript, which means that all operations run in your browser, as opposed to on a server outside your control. In effect, all this webpage does is provide you with a bunch of functions. In fact, you could load this page, disconnect from the internet, upload your file, and still get the results you need.</p>
<div id="dvImportSegments" class="fileupload ">
<fieldset>
<legend>Upload your CSV File to compute the result</legend>
<label>Number of winners: </label><input type="number" id="numWinners" value="2">
<!-- This is not really aesthetic; change. -->
<br>
<input type="file" name="File Upload" id="txtFileUpload" accept=".csv" />
</fieldset>
</div>
<div id="results"></div>

@ -1 +0,0 @@
Subproject commit 4f5fa42a8214057289b30ff92ef5fe082700d59e

@ -1,8 +1,19 @@
For forecasting related research, see [forecasting](/forecasting)
## Current projects
I'm currently doing [private consulting](https://nunosempere.com/consulting/), and writing up my disillusionment with EA:
- [Some melancholy about the value of my work depending on decisions by others beyond my control](https://nunosempere.com/blog/2023/07/13/melancholy/)
- [Why are we not harder, better, faster, stronger?](https://nunosempere.com/blog/2023/07/19/better-harder-faster-stronger/)
- [Brief thoughts on CEA's stewardship of the EA Forum](https://nunosempere.com/blog/2023/10/15/ea-forum-stewardship/)
- [Hurdles of using forecasting as a tool for making sense of AI progress](https://nunosempere.com/blog/2023/11/07/hurdles-forecasting-ai/)
## Past projects
_Estimation of values_
I spent a few years of my life grappling with the fact that EA (effective altruism) is nominally about doing the most good, yet lacks good tools to identify and prioritize across possible interventions. Eventually I gave up, once I got it through my thick head that, despite my earlier hopes, there wasn't much demand for the real version of this, as opposed to the fake version of pretending to evaluate stuff and pretending to be "impact oriented". Still, I think it's an interesting body of research.
- [Five steps for quantifying speculative interventions](https://forum.effectivealtruism.org/posts/3hH9NRqzGam65mgPG/five-steps-for-quantifying-speculative-interventions)
- [A Critical Review of Open Philanthropy's Bet On Criminal Justice Reform](https://forum.effectivealtruism.org/posts/h2N9qEbvQ6RHABcae/a-critical-review-of-open-philanthropy-s-bet-on-criminal)
@ -18,37 +29,47 @@ Besides forecasting (of probabilities), a major thread in my research is estimat
- [A Bayesian Adjustment to Rethink Priorities' Welfare Range Estimates](https://nunosempere.com/blog/2023/02/19/bayesian-adjustment-to-rethink-priorities-welfare-range-estimates/)
- [Relative Impact of the First 10 EA Forum Prize Winners](https://forum.effectivealtruism.org/posts/pqphZhx2nJocGCpwc/relative-impact-of-the-first-10-ea-forum-prize-winners)
- [An estimate of the value of Metaculus questions](https://forum.effectivealtruism.org/posts/zyfeDfqRyWhamwTiL/an-estimate-of-the-value-of-metaculus-questions)
- [Relative values for animal suffering and ACE Top Charities](https://nunosempere.com/blog/2023/05/29/relative-value-animals/)
- [Some estimation work in the horizon](https://nunosempere.com/blog/2023/03/20/estimation-in-the-horizon/)
- [Estimation for sanity checks](https://nunosempere.com/blog/2023/03/10/estimation-sanity-checks/)
- [Pathways to impact for forecasting and evaluation](https://forum.effectivealtruism.org/posts/oXrTQpZyXkEbTBfB6/pathways-to-impact-for-forecasting-and-evaluation)
- [Building Blocks of Utility Maximization](https://forum.effectivealtruism.org/posts/8XWi8FBkCuKfgPLMZ/building-blocks-of-utility-maximization)
- [Brief evaluations of top-10 billionaires](https://nunosempere.com/blog/2022/10/21/brief-evaluations-of-top-10-billionnaires/)
- [Use of "I'd bet" on the EA Forum is mostly metaphorical](https://nunosempere.com/blog/2023/03/02/metaphorical-bets/)
- [Things you should buy, quantified](https://nunosempere.com/blog/2023/04/06/things-you-should-buy-quantified/)
- [People's choices determine a partial ordering over people's desirability](https://nunosempere.com/blog/2023/06/17/ordering-romance/)
- [Impact markets as a mechanism for not losing your edge](https://nunosempere.com/blog/2023/02/07/impact-markets-sharpen-your-edge/)
- [Updating in the face of anthropic effects is possible](https://nunosempere.com/blog/2023/05/11/updating-under-anthropic-effects/)
_Red teaming_
Relatedly, I have an interest in _red teaming_. If you are trying to be as effective as you can, wouldn't you like someone to point out where you might be going wrong? Not so!
- The [NegativeNuno](https://forum.effectivealtruism.org/users/negativenuno) EA Forum account covers negative criticism that I'm uncertain about.
- A past piece on this topic is [Frank Feedback Given To Very Junior Researchers](https://forum.effectivealtruism.org/posts/7utb4Fc9aPvM6SAEo/frank-feedback-given-to-very-junior-researchers)
## Past projects
In the past, I've been much more eclectic, and explored a variety of topics, such as:
- [A Bayesian Adjustment to Rethink Priorities' Welfare Range Estimates](https://nunosempere.com/blog/2023/02/19/bayesian-adjustment-to-rethink-priorities-welfare-range-estimates/)
- [A flaw in a simple version of worldview diversification](https://nunosempere.com/blog/2023/04/25/worldview-diversification/)
- [Review of Epoch's Scaling transformative autoregressive models](https://nunosempere.com/blog/2023/04/28/expert-review-epoch-direct-approach/)
_Shapley values_:
If you care about doing good together, you should care about how to coordinate on who does which projects. Shapley values solve this, as does adjusting counterfactual values to be more Shapley-like; a toy calculation follows the list below. I named my consultancy after this concept.
- [Shapley Values: Better Than Counterfactuals](https://forum.effectivealtruism.org/posts/XHZJ9i7QBtAJZ6byW/shapley-values-better-than-counterfactuals)
- [Shapley Values and Philanthropic Coordination Theory](https://forum.effectivealtruism.org/posts/3NYDwGvDbhwenpDHb/shapley-values-reloaded-philantropic-coordination-theory-and)
- [A Shapley Value Calculator](http://shapleyvalue.com/)
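As a toy calculation (made-up coalition values, not taken from any of the posts above):

```js
// Shapley values for a two-player game. v(S) is the value that coalition S
// produces on its own; the numbers are made up for illustration.
const v = { "": 0, "A": 4, "B": 6, "AB": 10 };

// With two players, a player's Shapley value is their marginal contribution
// averaged over the two possible join orders (A then B, and B then A).
const shapleyA = ((v["A"] - v[""]) + (v["AB"] - v["B"])) / 2; // (4 + 4) / 2 = 4
const shapleyB = ((v["B"] - v[""]) + (v["AB"] - v["A"])) / 2; // (6 + 6) / 2 = 6
console.log(shapleyA, shapleyB); // 4 and 6; these sum to v["AB"], as Shapley values must
```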
_Economic models of social movement growth_:
You can make progress on gnarly economics questions by [throwing](https://github.com/NunoSempere/ReverseShooting/tree/master) compute at [it](https://github.com/NunoSempere/LaborCapitalAndTheOptimalGrowthOfSocialMovements/tree/master). Unfortunately, reality is complicated enough that these models won't capture all the assumptions you care about, and so might not be all that informative in real life.
- [A Model of Patient Spending and Movement Building](https://forum.effectivealtruism.org/posts/FXPaccMDPaEZNyyre/a-model-of-patient-spending-and-movement-building)
- [Labor, Capital, and the Optimal Growth of Social Movements](https://nunosempere.github.io/ea/MovementBuildingForUtilityMaximizers.pdf)
_Categorization of new causes_:
Let's just create some obvious infrastructure to track suggestions that people make!
- [Big List of Cause Candidates](https://forum.effectivealtruism.org/posts/SCqRu6shoa8ySvRAa/big-list-of-cause-candidates)
- [A Funnel for Cause Candidates](https://forum.effectivealtruism.org/posts/iRA4Dd2bfX9nukSo3/a-funnel-for-cause-candidates)
- [International Supply Chain Accountability](https://forum.effectivealtruism.org/posts/ME4zE34KBSYnt6hGp/new-top-ea-cause-international-supply-chain-accountability)
@ -56,11 +77,16 @@ _Categorization of new causes_:
_Technological discontinuities_:
People are talking about "technological discontinuities". How often do they happen?
- [A prior for technological discontinuities](https://www.lesswrong.com/posts/FaCqw2x59ZFhMXJr9/a-prior-for-technological-discontinuities)
- [Discontinuous trends in technological progress](https://nunosempere.github.io/rat/Discontinuous-Progress.html)
_AI-related_
Is this AI thing going to doom us all?
- [Hurdles of using forecasting as a tool for making sense of AI progress](https://nunosempere.com/blog/2023/11/07/hurdles-forecasting-ai/)
- [My highly personal skepticism braindump on existential risk from artificial intelligence](https://nunosempere.com/blog/2023/01/23/my-highly-personal-skepticism-braindump-on-existential-risk/)
- [A concern about the "evolutionary anchor" of Ajeya Cotra's report](https://nunosempere.com/blog/2022/08/10/evolutionary-anchor/)
- [There will always be a Voigt-Kampff test](https://nunosempere.com/blog/2023/01/21/there-will-always-be-a-voigt-kampff-test/)

@ -1,16 +1,21 @@
Many of my software projects can be seen on [my github](https://github.com/NunoSempere/) or on the github of the [Quantified Uncertainty Research Institute](https://github.com/QURIresearch), where I worked on [Metaforecast](https://metaforecast.org/), a forecast aggregator, and on [Squiggle](https://www.squiggle-language.com/), a small programming language for estimation. Currently, I also host my code at [git.nunosempere.com](https://git.nunosempere.com/).
I'm generally excited about Linux development, privacy-preserving tools, open-source projects, and, more generally, software which gives power to the user.
Some miscellaneous programming projects:
- [Webpages I am making available to my corner of the internet](https://nunosempere.com/blog/2023/08/14/software-i-am-hosting/)
- [A Soothing Frontend for the Effective Altruism Forum](https://nunosempere.com/blog/2023/04/18/forum-frontend/)
- [wc: count words in <50 lines of C](https://git.nunosempere.com/personal/wc)
- [Time to botec](https://git.nunosempere.com/personal/time-to-botec): the same simple Fermi estimation script written in different programming languages; so far C, R, Python, JavaScript and Squiggle.
- [squiggle.c](https://git.nunosempere.com/personal/squiggle.c): A grug-brained, self-contained C99 library that provides a subset of Squiggle's functionality in C.
- [Find a beta distribution that fits your desired confidence interval](https://nunosempere.com/blog/2023/03/15/fit-beta/)
- [Longnow](https://github.com/NunoSempere/longNowForMd): A tool for adding "(a)" archive.org links to markdown files
- [Labeling](https://github.com/NunoSempere/labeling): An R package which I maintain. It's used in ggplot2, through the scales package, and thus has 500k+ downloads a month.
- [Predict, resolve and tally](https://github.com/NunoSempere/PredictResolveTally): A small bash utility for making predictions.
- [Q](https://blogdelecturadenuno.blogspot.com/2020/12/q-un-programa-para-escribir-y-analizar-poemas-y-poesia.html): A program for analyzing Spanish poetry.
- [Rosebud](https://github.com/NunoSempere/rose-browser), my [personal fork](https://nunosempere.com/blog/2022/12/20/hacking-on-rose/) of [rose](https://github.com/mini-rose/rose), which is a simple browser written in C. I've been using this as my mainline browser for a bit now, and enjoy the simplicity.
- [Simple Squiggle](https://github.com/quantified-uncertainty/simple-squiggle), a restricted subset of Squiggle syntax useful for multiplying and dividing lognormal distributions analytically.
- [Nuño's stupid node version manager](https://github.com/NunoSempere/nsnvm): Because nvm noticeably slowed down bash startup time, and 20 lines of bash can do the job.
- [Werc tweaks](https://github.com/NunoSempere/werc-1.5.0-tweaks): I like the idea behind [werc](https://werc.cat-v.org/), and I tweaked it a bit while hosting this website.
- [German pronoun](https://github.com/NunoSempere/german_pronoun): A small bash script to get the correct gender for German nouns
