longNowForMd/example/example.md.links
2021-06-29 12:00:04 +02:00

135 lines
9.5 KiB
Plaintext

https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations
https://forum.effectivealtruism.org/posts/Ps8ecFPBzSrkLC6ip/2018-2019-long-term-future-fund-grantees-how-did-they-do
https://forum.effectivealtruism.org/posts/pqphZhx2nJocGCpwc/relative-impact-of-the-first-10-ea-forum-prize-winners
https://forum.effectivealtruism.org/posts/pqphZhx2nJocGCpwc/relative-impact-of-the-first-10-ea-forum-prize-winners?commentId=5xujn5KiLmgEaXaYt
https://forum.effectivealtruism.org/users/larks
https://forum.effectivealtruism.org/posts/29mfRszEcpn6uLZAb/allfed-2020-highlights
https://forum.effectivealtruism.org/posts/JQQAQrunyGGhzE23a/database-of-existential-risk-estimates
https://forum.effectivealtruism.org/users/luisa_rodriguez
https://www.getguesstimate.com/models/11762
https://forum.effectivealtruism.org/posts/CcNY4MrT5QstNh4r7/cost-effectiveness-of-foods-for-global-catastrophes-even
https://i.imgur.com/11Dq64a.png
https://www.foretold.io/c/b2412a1d-0aa4-4e37-a12a-0aca9e440a96/n/c01b0899-4100-4efd-9710-c482d89eddad
https://www.getguesstimate.com/models/18201
https://i.imgur.com/aUaqPd4.png
https://allfed.info/
https://forum.effectivealtruism.org/tag/allfed
https://allfed.info/team-members/
https://forum.effectivealtruism.org/posts/AWKk9zjA3BXGmFdQG/appg-on-future-generations-impact-report-raising-the-profile-1
https://www.appgfuturegenerations.com/officers-and-members
https://forum.effectivealtruism.org/posts/AWKk9zjA3BXGmFdQG/appg-on-future-generations-impact-report-raising-the-profile-1#Strategy_and_aims
https://en.wikipedia.org/wiki/Peter_principle
https://www.ign.org/
https://i.imgur.com/vIaYxnt.png
https://forum.effectivealtruism.org/tag/all-party-parliamentary-group-for-future-generations
https://www.appgfuturegenerations.com/
https://www.cser.ac.uk/team
https://i.imgur.com/l47LXUD.png
https://www.cser.ac.uk/
https://www.cser.ac.uk/team/
https://cset.georgetown.edu/publication/maintaining-the-ai-chip-competitive-advantage-of-the-united-states-and-its-allies/
https://cset.georgetown.edu/publication/cset-testimony-before-senate-banking-committee/
https://cset.georgetown.edu/publication/cset-testimony-before-house-science-committee/
https://cset.georgetown.edu/publication/cset-testimony-before-house-homeland-security-committee/
https://cset.georgetown.edu/publication/chinas-current-capabilities-policies-and-industrial-ecosystem-in-ai/
https://cset.georgetown.edu/publication/technology-trade-and-military-civil-fusion-chinas-pursuit-of-artificial-intelligence/
https://cset.georgetown.edu/publication/testimony-before-senate-foreign-relations-committee/
https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology
https://i.imgur.com/IHSQ716.png
https://cset.georgetown.edu/publications/
https://cset.georgetown.edu/publication/cset-reading-guide/
https://cset.georgetown.edu/publication/cset-publishes-ai-policy-recommendations-for-the-next-administration/
https://cset.georgetown.edu/publication/keeping-top-ai-talent-in-the-united-states/
https://cset.georgetown.edu/publication/strengthening-the-u-s-ai-workforce/
https://cset.georgetown.edu/publication/future-indices/
https://cset.georgetown.edu/team/
https://cset.georgetown.edu/article/cset-experts-in-the-news
https://cset.georgetown.edu/article/cset-experts-in-the-news-10/
https://cset.georgetown.edu/publications/?fwp_content_type=translation
https://www.schneier.com/blog/archives/2021/05/ais-and-fake-comments.html
https://www.schneier.com/blog/archives/2021/06/the-future-of-machine-learning-and-cybersecurity.html
https://en.wikipedia.org/wiki/Bruce_Schneier
https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-support
https://cset.georgetown.edu/article/cset-experts-in-the-news-10
https://www.wikiwand.com/en/Asilomar_Conference_on_Beneficial_AI
https://futureoflife.org/ai-principles/
https://futureoflife.org/lethal-autonomous-weapons-systems/
https://futureoflife.org/future-of-life-award/
https://futureoflife.org/policy-work
https://futureoflife.org/2018/07/25/2-million-donated-to-keep-artificial-general-intelligence-beneficial-and-robust/
https://futureoflife.org/fli-announces-grants-program-for-existential-risk-reduction/
https://www.fhi.ox.ac.uk/govai/govai-2020-annual-report
https://www.youtube.com/watch?v=HipTO_7mUOw
https://forum.effectivealtruism.org/posts/6cyXwsAanTmhvZRRH/seth-baum-reconciling-international-security
https://futureoflife.org/team/
https://i.imgur.com/CqAwEHZ.png
https://www.lesswrong.com/posts/8rYxw9xZfwy86jkpG/on-the-importance-of-less-wrong-or-another-single#PNCWPyvLS7G6L3iHW
https://www.lesswrong.com/posts/bJ2haLkcGeLtTWaD5/welcome-to-lesswrong
https://www.lesswrong.com/allPosts?filter=curated&sortedBy=new&timeframe=allTime
https://www.lesswrong.com/posts/a7jnbtoKFyvu5qfkd/formal-inner-alignment-prospectus
https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story
https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic
https://www.lesswrong.com/posts/DkcdXsP56g9kXyBdq/coherence-arguments-imply-a-force-for-goal-directed-behavior
https://www.lesswrong.com/posts/EF5M6CmKRd6qZk27Z/my-research-methodology
https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models
https://i.imgur.com/Y4gtXDO.png
https://i.imgur.com/3F1GXmL.png
https://i.imgur.com/sPA5IAZ.png
https://i.imgur.com/LdSsgeo.png
https://www.lesswrong.com/s/uNdbAXtGdJ8wZWeNs/p/3yqf6zJSwBF34Zbys
https://www.fhi.ox.ac.uk/publications/
https://www.lesswrong.com/posts/aG74jJkiPccqdkK3c/the-lesswrong-team
https://forum.effectivealtruism.org/
https://www.alignmentforum.org/
https://i.imgur.com/7vOL4tw.png
https://www.rethinkpriorities.org/our-team
https://forum.effectivealtruism.org/users/linch
https://forum.effectivealtruism.org/users/michaela
https://forum.effectivealtruism.org/tag/rethink-priorities?sortedBy=new
https://i.imgur.com/n5BTzEo.png
https://forum.effectivealtruism.org/tag/rethink-priorities
https://www.simoninstitute.ch/
https://forum.effectivealtruism.org/posts/eKn7TDxMSSsoHhcap/introducing-the-simon-institute-for-longterm-governance-si
https://docs.google.com/document/d/1rWfQ3Lja2kYoUm_t9uNqBgEn5nz6KL8fmNP5db8cZRU/edit#
https://en.wikipedia.org/wiki/Streetlight_effect
https://i.imgur.com/QKsqX2a.png
https://80000hours.org/2021/05/80000-hours-annual-review-nov-2020/
https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison#CLR__The_Center_on_Long_Term_Risk
https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1#Multi_agent_reinforcement_learning__MARL_
https://www.alignmentforum.org/s/p947tK8CoBbdpPtyK
https://forum.effectivealtruism.org/posts/93o6JwmdPPPuTXbYv/center-on-long-term-risk-2021-plans-and-2020-review#Evaluation
https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors
https://i.imgur.com/JGvyiBf.png
https://www.lesswrong.com/users/daniel-kokotajlo
https://forum.effectivealtruism.org/posts/jxDskwWLDta7L5a8y/my-experience-as-a-clr-grantee-and-visiting-researcher-at
https://forum.effectivealtruism.org/posts/93o6JwmdPPPuTXbYv/center-on-long-term-risk-2021-plans-and-2020-review
https://www.alignmentforum.org/posts/EzoCZjTdWTMgacKGS/clr-s-recent-work-on-multi-agent-systems
https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1#Multi_agent_reinforcement_learning__MARL_
https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors
https://www.fhi.ox.ac.uk/the-team/
https://www.fhi.ox.ac.uk/quarterly-update-winter-2020/
https://www.fhi.ox.ac.uk/news/
https://docs.google.com/document/d/1rWfQ3Lja2kYoUm_t9uNqBgEn5nz6KL8fmNP5db8cZRU/edit
https://i.imgur.com/SiIOV6t.png
https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison#FHI__The_Future_of_Humanity_Institute
https://www.fhi.ox.ac.uk/team/lewis-gregory/
https://www.fhi.ox.ac.uk/team/cassidy-nelson/
https://www.fhi.ox.ac.uk/team/piers-millett/
https://forum.effectivealtruism.org/users/gregory_lewis
https://www.fhi.ox.ac.uk/govai/govai-2020-annual-report/
https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact
https://forum.effectivealtruism.org/posts/e8CXMz3PZqSir4uaX/what-fhi-s-research-scholars-programme-is-like-views-from-1
https://www.fhi.ox.ac.uk/dphils/
https://forum.effectivealtruism.org/posts/EPGdwe6vsCY7A9HPa/review-of-fhi-s-summer-research-fellowship-2020
https://www.fhi.ox.ac.uk/the-team
https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support
https://globalprioritiesinstitute.org/global-priorities-institute-annual-report-2019-20/
https://forum.effectivealtruism.org/posts/8vfadjWWMDaZsqghq/long-term-investment-fund-at-founders-pledge
https://globalprioritiesinstitute.org/papers/
https://globalprioritiesinstitute.org/research-agenda-web-version/
https://globalprioritiesinstitute.org/papers
https://www.longtermresilience.org/
https://www.lesswrong.com/posts/jyRbMGimunhXGPxk7/database-of-existential-risk-estimates
https://www.wikiwand.com/en/Manhattan_Project
https://www.wikiwand.com/en/Lockheed_Martin_F-35_Lightning_II_development
https://globalprioritiesinstitute.org/christian-tarsney-exceeding-expectations-stochastic-dominance-as-a-general-decision-theory/