Securing Humanity’s Future: Reorienting U.S. Policy to Address Existential Risks
By Bryce Farabaugh
Humanity in the 21st century faces an unprecedented set of risks, and its ability to address them could have enormous consequences for future generations. Not only will there likely be a greater number of risks than in the past, but a small subset will matter far more for humanity’s future than anything previous generations confronted. The most consequential of these are existential risks: risks that threaten the very survival of humanity itself. They include threats from climate change, nuclear weapons, advanced artificial intelligence (AI), and others that could decimate human populations, lead to civilizational collapse, render Earth inhospitable for human life, or otherwise lead to immense suffering and the foreclosure of humanity’s flourishing. Mitigating these threats could have an outsized impact on future generations, suggesting that policies addressing them would likewise have an outsized impact compared to other policy decisions.
Although an increasingly diverse academic and non-governmental community is working to identify and mitigate existential risks, the U.S. federal government has been relatively quiet on the subject. As of June 2021, the U.S. government had yet to issue a public, unclassified, comprehensive report addressing efforts to research and mitigate existential risks. This is likely due to a combination of factors: electoral politics, bureaucratic barriers, economic considerations, and a tendency to undervalue long-term threats while inflating short-term ones. Despite these barriers, the United States is in a unique position to confront and ultimately mitigate these threats. As a leader in the international system by economic, diplomatic, technological, and military measures, a good-faith U.S. effort to unilaterally commit to understanding and reducing existential risks, in addition to strengthening international efforts to do so, could be extremely cost-effective for humanity in the long run.[*]
What are Existential Risks and Why Should U.S. Policymakers Care?
Existential risks are a reality for every species on Earth, but only in the past century have humans become capable of creating (and potentially averting) so-called anthropogenic risks that threaten both their own and other species’ continued existence. According to Nick Bostrom, a philosophy professor at Oxford University and one of the first academics to call for systematically studying risks to humanity, an existential risk is “one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”[1] Such risks can be differentiated from events that may be immensely consequential but fall short of causing humanity’s extinction or permanently curtailing its potential: Bostrom, for example, defines a global catastrophic risk as “one with the potential to wreak death and destruction on a global scale.”[2] Whereas global catastrophic risks would create suffering on an unprecedented scale, they would not necessarily cause humanity’s extinction or permanently foreclose its future, suggesting efforts should prioritize mitigating existential risks over global catastrophic risks.
A key feature of anthropogenic existential risks is that they arise from advanced technologies developed without meaningful controls. For example, some argue the beginning of the Anthropocene (the geological period “characterized as the time in which the collective activities of human beings (Homo sapiens) began to substantially alter Earth’s surface, atmosphere, oceans, and systems of nutrient cycling”) should officially be recognized as July 16, 1945, the date humans detonated the first nuclear device at the Trinity Test in New Mexico.[3] By 1945, scientists were already coming to terms with the immense power they were developing and the responsibilities it carried: some Manhattan Project scientists were so concerned that detonating a nuclear weapon could accidentally ignite the atmosphere and destroy the world that they ran calculations to confirm such an outcome was “extremely unlikely.”[4]
Despite these concerns, Manhattan Project scientists felt confident at the time that the benefits of the technology outweighed its dangers, but future existential risks could also emerge from technologies developed by accident. Bostrom illustrates this threat by describing human history as a process of pulling balls out of an urn at random: a white ball represents a beneficial idea or discovery, while a grey ball represents one that can be either beneficial or dangerous depending on its use (nuclear energy, for example).[5] Pulling a black ball would represent a technology or invention that brings about the end of the civilization that developed it. Such a draw has yet to happen, but because technologies cannot be “uninvented,” every new discovery is another draw from the urn, and the cumulative chance of drawing a black ball only grows with time.[6]
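The compounding logic of the urn metaphor can be made concrete with a back-of-the-envelope calculation. The per-draw probability below is an arbitrary illustrative assumption, not an estimate from the literature:

```python
def p_black_ball(p_per_draw: float, n_draws: int) -> float:
    """Probability of at least one 'black ball' after n independent draws,
    each with probability p_per_draw of being catastrophic."""
    return 1 - (1 - p_per_draw) ** n_draws

# Even a hypothetical 0.1% chance per new technology compounds over time.
for n in (10, 100, 1000):
    print(f"{n:>5} draws: {p_black_ball(0.001, n):.3f}")
# → 0.010, 0.095, and 0.632 respectively
```

Even under this toy assumption, a risk that is negligible for any single discovery approaches a coin flip after a thousand draws, which is the quantitative core of Bostrom’s point that time alone brings humanity closer to such a risk.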
The research and development driving new technologies may not produce a symbolic “black ball” for decades or even centuries, however, which means there is still time for governments around the world to develop and implement policies aimed at mitigating such risks. While such an effort may be the responsibility of every state in the international system, researching and crafting policies to address existential risks is particularly important for the U.S. federal government for several reasons. First, the United States has a unique responsibility as arguably the most important actor in both contributing to and mitigating existential risks: as the first nuclear weapons state, it was present at the birth of the “Anthropocene” and continues to account for a significant share of the greenhouse gas emissions that cause climate change.[7] Second, given its economic, diplomatic, technological, and military position, the United States holds leadership roles in intergovernmental organizations like the United Nations, World Bank, and International Monetary Fund, and can exert its influence to form coalitions better equipped to address the collective action problems many existential risks present. Finally, because the United States is both the stakeholder with arguably the greatest responsibility to reduce existential risks and the one best equipped to do so, any efforts it takes to address them will have an outsized impact on future generations, making the issue particularly salient for U.S. policymakers.
Past U.S. Policy Efforts to Research and Address Existential Risks
While there has yet to be a comprehensive, unclassified report that systematically details U.S. government efforts to understand and reduce existential risks, there have been numerous reports, hearings, committees, reviews, and recommendations by various government organizations exploring individual issues that threaten humanity’s future. To understand the strengths and weaknesses of different approaches, it is useful to examine three of the most prominent such policy products issued by the U.S. federal government in recent years, each addressing a different existential risk: climate change, nuclear weapons, and artificial intelligence (AI). First, the National Climate Assessment is the largest and most noteworthy recent example of the U.S. government’s approach to understanding risks associated with anthropogenic climate change. Second, the Nuclear Posture Review provides the most comprehensive account of the role nuclear weapons play in U.S. national security strategy for allies and adversaries alike. Third, the final report of the National Security Commission on Artificial Intelligence details the commission’s findings on how AI could impact U.S. national security in the years and decades to come.
Fourth National Climate Assessment
The Fourth National Climate Assessment (NCA4), issued in two volumes, with the second released in November 2018, is the most extensive U.S. government analysis of the scientific evidence for climate change as well as its assessment of climate change’s likely impact on Americans in the years to come. According to the report, the National Climate Assessment “was written to help inform decision-makers, utility and natural resource managers, public health officials, emergency planners, and other stakeholders by providing a thorough examination of the effects of climate change on the United States.”[8] The assessment is a product of the Global Change Research Act of 1990, a law passed by Congress mandating that the U.S. Global Change Research Program (USGCRP) deliver a report at least every four years that
“1) integrates, evaluates, and interprets the findings of the Program…; 2) analyzes the effects of global change on the natural environment, agriculture, energy production and use, land and water resources, transportation, human health and welfare, human social systems, and biological diversity; and 3) analyzes current trends in global change, both human-induced and natural, and projects major trends for the subsequent 25 to 100 years.”[9]
Spearheaded by the National Oceanic and Atmospheric Administration (NOAA), the assessment shows how climate change is likely the most researched and best understood existential risk confronting policymakers today. This is due both to the considerable evidence climate scientists have collected over decades and to the number of researchers working on the issue: for the National Climate Assessment alone, a team of more than 300 experts from government and non-government backgrounds produced the report, which then underwent rigorous review by the NCA4 Federal Steering Committee as well as the 13 federal agencies that comprise the USGCRP.[10]
The scientific rigor and analytical relevance of the NCA4 are above reproach, but every policy product has its limitations, and several takeaways can inform existential risk policymaking moving forward. While the non-partisan, scientific makeup of the NCA4 author team insulated its findings from political pressure, the report’s release was nevertheless marred by accusations of attempted political interference: the Trump administration (charged with releasing and publicizing the assessment) downplayed the findings and tried to bury the news by releasing the report the day after Thanksgiving.[11] Furthermore, while the congressional mandate requiring an assessment every four years guarantees periodic reassessment of the U.S. government’s understanding of climate risks, such an infrequent publishing cycle limits the USGCRP’s ability to consistently inform the American public about the latest climate change research.
2018 Nuclear Posture Review
The 2018 Nuclear Posture Review (NPR) is the most recent edition of the U.S. government’s semi-regular unclassified publication detailing the role nuclear weapons play in U.S. national security strategy. The first NPR was announced in the 1990s as “the first [Department of Defense] study of its kind to incorporate reviews of policy, doctrine, force structure, operations, safety and security, and arms control in one look.”[12] Since then, the NPR has evolved into an important policy product that signals to both allies and adversaries the scenarios in which the U.S. president would consider using nuclear weapons in a crisis or conflict, the status of arms control and nonproliferation efforts around the world, developments in the U.S. nuclear arsenal, and other details of U.S. nuclear doctrine.
The president of the United States has the authority to direct the Department of Defense to deliver a Nuclear Posture Review, which often happens near the beginning of a new administration. While these reports are usually evolutionary rather than revolutionary (reflecting continuity in both U.S. military and civilian leadership), they can nevertheless detail important changes in U.S. nuclear doctrine. For example, the 2018 NPR highlighted how the United States perceives the threat environment, with particular attention paid to great power competitors China and Russia.[13] Additionally, seemingly minor details can signal significant shifts in policy: for example, the 2018 NPR states
“The United States would only consider the employment of nuclear weapons in extreme circumstances to defend the vital interests of the United States, its allies, and partners. Extreme circumstances could include significant non-nuclear strategic attacks. Significant non-nuclear strategic attacks include, but are not limited to, attacks on the U.S., allied, or partner civilian population or infrastructure, and attacks on U.S. or allied nuclear forces, their command and control, or warning and attack assessment capabilities.”[14]
This passage is particularly relevant for nuclear risks related to inadvertent escalation, as some scholars have noted how the so-called “entanglement” of nuclear and non-nuclear systems could result in a crisis or conflict spiraling out of control, leading to the use of nuclear weapons.[15]
Although the 2018 NPR provides transparency into U.S. nuclear doctrine in a way publications from other nuclear-armed states do not, it fails to account for many of the considerations one would hope for when assessing U.S. policy vis-à-vis existential risks. Most significantly, the NPR is fundamentally a U.S. national security document, emphasizing efforts to deter adversaries rather than to reduce nuclear risks or prevent nuclear proliferation. Although the executive summary highlights “the long-term goal of eliminating nuclear weapons”, that goal is juxtaposed against “the requirement that the United States have modern, flexible, and resilient nuclear capabilities that are safe and secure until such a time as nuclear weapons can prudently be eliminated from the world”, indicating the disarmament language is primarily symbolic rather than substantive.[16] Additionally, the infrequent publication of the NPR is less than ideal: as the primary document outlining official U.S. nuclear policy, the NPR is critically important for non-governmental experts and the American public alike, but publication roughly every four to eight years is largely inadequate given rapid advances in both conventional and nuclear weapons technologies.[17]
Final Report of the National Security Commission on Artificial Intelligence
The final report of the National Security Commission on Artificial Intelligence (NSCAI) represents a third policy product distinct from those described above (namely, the congressionally mandated, science-focused, quadrennial National Climate Assessment and the semi-regular, executive-branch-directed, military-focused Nuclear Posture Review). Congress established the NSCAI as an independent national security commission in the fiscal year 2019 National Defense Authorization Act (NDAA) “to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.”[18] Given the concentration of artificial intelligence (AI) expertise in the private sector, the NSCAI final report also differs from the two previous policy products, which relied primarily on experts in government: it instead emphasizes a holistic approach, drawing on representatives from the private sector as well as academia, civil society, and government.
In addition to its congressional mandate, the commission’s unique structure created several notable strengths as it researched the nexus of AI and U.S. national security. The 15 commissioners, nominated by both Congress and the executive branch, were leaders from industry (e.g., Eric Schmidt, former CEO of Google) and government (e.g., Robert Work, former Deputy Secretary of Defense). Beyond the commissioners, the organization employed several dozen researchers and analysts, and this breadth of expertise gave the commission considerable cachet on both sides of the political aisle. The time-limited nature of the commission also lent a sense of urgency to its findings: although it issued its final report in March 2021, it had submitted quarterly recommendations as well as interim reports to Congress detailing its preliminary findings.
Despite these strengths, the NSCAI final report was limited in ways other policy products generally are not. While the time-limited nature of a national security commission does engender a sense of urgency in its fact-finding and analysis, upon issuing its final report, the commission is dissolved, thereby removing the organization’s ability to further inform Congress and the American public. The limited scope of such a commission can also blunt the utility of its findings: whereas the National Climate Assessment is able to explore social, economic, health, and environmental impacts of climate change (among others), the NSCAI’s focus on the national security implications of AI meant it was unable to explore other developments that are not directly related to U.S. national security interests.
The focused mandate of the NSCAI also limited its direct applicability to existential risk issues. Although the final report mentions several considerations relevant to humanity’s long-term survival as it relates to advanced artificial intelligence (for example, general mentions of aligning AI goals and values), its focus on short- and medium-term national security concerns prevents it from adequately addressing such issues. On the other hand, the final report does recommend the U.S. government “clearly and publicly affirm existing U.S. policy that only human beings can authorize employment of nuclear weapons, and seek similar commitments from Russia and China”, a relevant recommendation given concerns about nuclear escalation and accidental launch.[19]
Signs of Change? Global Trends 2040
The three policy products explored above (the Fourth National Climate Assessment, the 2018 Nuclear Posture Review, and the final report of the National Security Commission on Artificial Intelligence) are each imperfect examples of how the U.S. government has researched and addressed existential risk issues in the past, but there may be some small signs of change moving forward. Published in March 2021 by the National Intelligence Council (NIC), Global Trends 2040 is the most recent edition of the quadrennial report meant to “[assess] the key trends and uncertainties that will shape the strategic environment for the United States during the next two decades.”[20] This newest edition includes a single paragraph on existential risks, beginning by stating “technological advances may increase the number of existential threats; threats that could damage life on a global scale challenge our ability to imagine and comprehend their potential scope and scale, and they require the development of resilient strategies to survive.”[21] While this paragraph is only a few sentences long and is practically a footnote in the 100+ page report, it nevertheless may signal an important change within the U.S. government as the intelligence community recognizes the fact that “low-probability, high-impact events are difficult to forecast and expensive to prepare for but identifying potential risks and developing mitigation strategies in advance can provide some resilience to exogenous shocks.”[22]
Conclusion and Policy Recommendations
The United States government is in a unique position as it relates to understanding and preparing for threats to humanity’s long-term survival and flourishing. As arguably the state with the most responsibility to address existential risks in the international system—as well as the state with the most capacity to do so— the United States is well-positioned to significantly increase its contributions to the growing academic and non-governmental community working to safeguard humanity’s security in the 21st century and beyond.
Drawing on an analysis of the U.S. government’s previous approaches to existential risks, several potential avenues exist for how the United States can better address such risks in the future.
First, the United States Congress should mandate an independent commission to systematically research and report on U.S. efforts to mitigate existential risks (similar to the National Security Commission on Artificial Intelligence and the United Kingdom’s 2012 Blackett Review of high impact low probability risks).[23] Such a commission could garner bipartisan support, review classified materials, and issue reports to both Congress and the American public to increase awareness of existential risk issues.
Second, the Department of Defense should assign responsibility for leading departmental efforts to reduce existential risks to an existing role (such as the undersecretary of defense for policy) or establish a new position for that purpose. A single accountable official would allow the department to dedicate resources and staff to researching existential risks and formulating policy options, and would signal the seriousness with which the U.S. government treats these risks. This position would ideally sit in the Department of Defense, given its involvement in a number of existential risk issues (including nuclear weapons policy and research and development on advanced military technologies), but could alternatively exist in the intelligence community, given its role in assessing risks and forecasting threats.
Finally, U.S. government research and development organizations should implement tabletop gaming, horizon scanning/strategic foresight methodology, and probabilistic forecasting to better predict and prepare for high-impact, low-probability technological risks. These organizations (including the Defense Advanced Research Projects Agency (DARPA), the Intelligence Advanced Research Projects Activity (IARPA), and others) work extensively at the intersection of emerging technologies and efforts to predict U.S. national security threats, and are well positioned to incorporate existential risk reduction into their missions. Methods like tabletop gaming, horizon scanning, and probabilistic forecasting have been shown to generate both more accurate predictions and more creative thinking when analysts consider uncertain future scenarios.[24] Adopting such methods could help the U.S. government prepare for technological risks that traditional research and analysis might miss.
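As one concrete illustration of probabilistic forecasting methodology, the forecasting tournaments described by Tetlock and Gardner score analysts with the Brier score: the mean squared error between probability forecasts and binary outcomes, which rewards calibration rather than vague hedging. The forecasts below are made-up values for illustration only:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; an uninformative 50% forecast always scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical analyst forecasts (probabilities) vs. what actually happened.
forecasts = [0.9, 0.7, 0.2, 0.6]
outcomes  = [1,   1,   0,   0]
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # → 0.125
```

Tracking such scores over time lets an organization identify which analysts and methods are actually well calibrated, which is precisely the kind of feedback loop traditional threat assessments rarely provide.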
Taken together, these policy recommendations would help the U.S. government better understand, predict, and potentially reduce existential risks, and could help ensure humanity’s continued survival and flourishing for many years to come.
[*] This white paper is the final project for the Spring 2021 course SOCI 30531, “Are We Doomed?”, at the University of Chicago.
[1] Bostrom, Nick. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards.” Journal of Evolution and Technology 9 (2002).
[2] Bostrom, Nick, and Milan M. Ćirković, eds. Global Catastrophic Risks. Oxford: Oxford University Press, 2008. Accessed June 2, 2021. https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198570509.001.0001/isbn-9780198570509.
[3] Encyclopedia Britannica. “Anthropocene Epoch | Definition & Evidence.” Accessed June 2, 2021. https://www.britannica.com/science/Anthropocene-Epoch.
[4] “The Man Who Feared, Rationally, That He’d Just Destroyed the World.” Washington Post. Accessed June 2, 2021. https://www.washingtonpost.com/news/achenblog/wp/2015/07/23/the-man-who-feared-rationally-that-hed-just-destroyed-the-world/.
[5] Khatchadourian, Raffi. “The Doomsday Invention.” The New Yorker. Accessed June 2, 2021. https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom.
[6] Ibid.
[7] US EPA, OAR. “Global Greenhouse Gas Emissions Data.” Overviews and Factsheets. US EPA, January 12, 2016. https://www.epa.gov/ghgemissions/global-greenhouse-gas-emissions-data.
[8] USGCRP, 2018: Impacts, Risks, and Adaptation in the United States: Fourth National Climate Assessment, Volume II [Reidmiller, D.R., C.W. Avery, D.R. Easterling, K.E. Kunkel, K.L.M. Lewis, T.K. Maycock, and B.C. Stewart (eds.)]. U.S. Global Change Research Program, Washington, DC, USA, 1515 pp. doi: 10.7930/NCA4.2018.
[9] Global Change Research Act of 1990, Pub. L. No. 101-606, 104 Stat. 3096–3104 (November 16, 1990).
[10] USGCRP. “Fourth National Climate Assessment.” U.S. Global Change Research Program, Washington, DC, 2018. https://nca2018.globalchange.gov/chapter/front-matter-about.
[11] Davenport, Coral, and Kendra Pierre-Louis. “U.S. Climate Report Warns of Damaged Environment and Shrinking Economy.” The New York Times, November 23, 2018, sec. Climate. https://www.nytimes.com/2018/11/23/climate/us-climate-report.html.
[12] “The Nuclear Information Project: 1994 Nuclear Posture Review.” Accessed June 3, 2021. https://www.nukestrat.com/us/reviews/npr1994.htm.
[13] U.S. Department of Defense. “Special Report: Nuclear Posture Review – 2018.” Accessed June 3, 2021. https://www.defense.gov/News/Special-Reports/NPR-2018.
[14] Ibid.
[15] Acton, James M. “Escalation through Entanglement: How the Vulnerability of Command-and-Control Systems Raises the Risks of an Inadvertent Nuclear War.” International Security 43, no. 1 (August 2018): 56–99. https://doi.org/10.1162/isec_a_00320.
[16] U.S. Department of Defense. Nuclear Posture Review. Washington, DC, February 2018.
[17] There are indications the Biden administration is currently preparing a new Nuclear Posture Review, to be delivered sometime in 2021 or later. For more, see: POLITICO. “Biden Goes ‘Full Steam Ahead’ on Trump’s Nuclear Expansion despite Campaign Rhetoric.” Accessed June 4, 2021. https://www.politico.com/news/2021/06/02/biden-trump-nuclear-weapons-491631.
[18] NSCAI. “About.” Accessed June 4, 2021. https://www.nscai.gov/about/.
[19] “NSCAI Final Report – Table of Contents.” Accessed June 4, 2021. https://reports.nscai.gov/final-report/table-of-contents/.
[20] “Office of the Director of National Intelligence – Global Trends.” Accessed June 4, 2021. https://www.dni.gov/index.php/global-trends-home.
[21] Ibid.
[22] Ibid.
[23] GOV.UK. “High Impact Low Probability Risks: Blackett Review.” Accessed June 4, 2021. https://www.gov.uk/government/publications/high-impact-low-probability-risks-blackett-review.
[24] Tetlock, Philip E., and Dan Gardner. Superforecasting: The Art and Science of Prediction. New York: Crown, 2015.