BPRO 25800 (Spring 2021/Winter 2024) Are we doomed? Confronting the End of the World

How will civilization end?

For our project we chose to record a podcast in which we debated which of the existential threats that we have discussed in class is most likely to lead to the end of the world. Jason Sheppard thinks it might be Nuclear Annihilation, Benjamin Broner is terrified of Artificial Intelligence, and Alex Polissky is deeply anxious about Climate Change. 

Nuclear war is particularly alarming due to its high probability, the possibility of accidents, and the sheer number of missiles currently deployed. The annual probability of nuclear annihilation is estimated at around 1%, which may not seem particularly high; compounded over 70 years, however, annihilation becomes more likely than not. Even scarier is that nuclear armageddon does not require two countries to consciously decide to go to war. A simple failure of a missile detection system, a malicious hacker, or even terrorists who manage to get their hands on one of the thousands of nuclear warheads that exist around the world could set off a chain reaction leading to the destruction of life as we know it. Although mutually assured destruction has prevented the use of nuclear weapons so far, if the pandemic has taught us anything, it's that just because something seems very unlikely does not mean it can't happen. Combined with the fact that nuclear annihilation can be triggered by any number of small mishaps or targeted attacks, the nuclear threat looms large. Two main counterpoints came up during our debate: first, a single nuclear detonation may be likely, but that likelihood combined with the escalation required to change life as we know it is quite low; second, because the problem rests with only nine nuclear-armed countries, it may be far easier to solve than threats requiring global cooperation, so long as it stays at the forefront of our minds.
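The compounding claim above is easy to verify with a quick calculation; this is only an illustrative sketch that treats the cited 1% annual estimate as an independent risk each year:

```python
# Probability that at least one nuclear catastrophe occurs over a time horizon,
# assuming an independent annual risk (a rough, illustrative model, not a forecast).
def cumulative_risk(annual_risk: float, years: int) -> float:
    """Return 1 - P(no event occurs in any of the given years)."""
    return 1 - (1 - annual_risk) ** years

print(round(cumulative_risk(0.01, 70), 3))  # ≈ 0.505, i.e. more likely than not
```

At a 1% annual risk, the crossover past 50% happens at year 69, which is why a per-year figure that sounds small still dominates over a human lifetime.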

While we do not know what form it may take, artificial intelligence is problematic due to its unpredictable origins and outcomes, its influence on human decisions, and the potential for runaway AI power. The threat can take multiple shapes: in the popular-culture scenario, an AI becomes so intelligent that it deems humans irrelevant and simply wipes us out. Another alternative, already part of our reality, is AI systems bending human behavior for the worse, much as we have seen with social media. However, as Stuart Russell points out, AI is a threat precisely because it is in its early stages of development. If AI is treated as a black box that optimizes some particular variable, then we humans might pick the wrong variables; but if we construct the problem so that it optimizes for human outcomes correctly, then we need not fear AI. Russell even points to example constructions and algorithms in his writing. Furthermore, there is an alignment between those who are concerned about AI and those developing its bleeding edge. This inherent alignment is notably absent from the threats of nuclear annihilation and climate change, where the actors driving the threat have no incentive to stop.

This leads us to our final issue, the threat of climate change: an accelerating, global threat that requires alignment from every country to effectively cut emissions. According to the Rolling Stone article we read in class, 565 gigatons of carbon dioxide is what we could still emit before crossing the 2°C threshold; yet since that article was published in 2012, we have not slowed our emissions and have released over 250 gigatons. Climate change is an issue scientists have known about since the 1950s, and it has been in the public eye in one form or another since the 1980s; still, we have yet to see sizable policy addressing it. A significant problem is that the fossil-fuel status quo carries an immense financial incentive to continue destroying the environment: the oil reserves known in 2012, if burned, would emit 2,700+ gigatons, and that oil is already in play in the economy, with money being borrowed against its enormous $25+ trillion valuation. All of this is before we even discuss the agriculture industry or deforestation in places like the Amazon. It is also unreasonable to expect developing nations to forgo increases in carbon emissions, since that is often the quickest path to development and economic prosperity, which puts an even greater burden on developed nations to change policies, reduce emissions, and even capture them. The counterarguments to climate change as the paramount threat varied in our debate; the primary one was hope in technology: we already have much of the technology needed to stop emitting carbon, and can work on more for carbon capture.
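The budget arithmetic above can be laid out explicitly; the numbers here are simply the figures quoted in the paragraph (565 Gt budget, roughly 250 Gt emitted since 2012, 2,700+ Gt locked in known reserves), not independent estimates:

```python
# Rough carbon-budget arithmetic using the figures cited above.
budget_gt = 565              # emissions ceiling for staying under 2°C (2012 estimate)
emitted_since_2012_gt = 250  # approximate emissions since the article was published
reserves_gt = 2700           # emissions implied by burning known oil reserves

remaining_gt = budget_gt - emitted_since_2012_gt
reserves_vs_budget = reserves_gt / budget_gt

print(remaining_gt)                  # 315 Gt left in the original 2°C budget
print(round(reserves_vs_budget, 1))  # reserves alone are ~4.8x the entire budget
```

The punchline of the article is visible in that ratio: the fuel already on companies' books is several times larger than the total budget, so most of it must stay in the ground.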

Although we may disagree on which of these three threats is the largest, our debate made us realize that our society often skips thinking about what could go wrong. We ignore the warning signs, and those who hold them up, far too often. All three of these issues, and many more, require the ongoing pursuit of a solution and dedication to political action. But this raises yet another question that we bumped into in our own debate: what is the right balance to strike between instilling hope and fear about these issues? Too much hope may lead to inaction, much like the technologist's argument about climate change. Too little fear leads to ignorance of the issue, just as we have ignored the nuclear threat. We don't have an answer to this balancing question, but perhaps the framework is not all too different from the way we motivate ourselves to do little things: with just enough fear, enough light at the end of the tunnel, and the right resources, we humans can accomplish a lot.

The reason we were so excited to do a podcast as our final project was that it gave us an arena for an open, informed debate connecting everything we have learned this quarter. While lectures and TA sessions have been amazing at discussing and analyzing individual threats, the podcast allowed us to analyze each threat in context with the others. As society moves forward, we will need to keep in mind how all of these problems connect. For example, allowing an AI to monitor and control nuclear arsenals, or to help clean the environment in an effort to remove human error, could create problems greater than the ones we tried to solve. While we didn't come to a conclusion as to what the biggest threat was, it was valuable to hear the perspectives of our fellow classmates. For example, even though Ben and Alex have talked about AI in the past, it wasn't until the podcast that they discussed issues like the dangers of geoengineering or how realistic a technological singularity actually is. As Ben put it, "I've spent years worrying and dreaming about the technical singularity point, but now I'm not even sure it will happen."

Listening to the podcast was also an interesting experience, as we got to hear how we actually sounded. Each of us came away with something to carry into future debates: Ben realized that he interrupted too often, which weakened his rebuttals and disrupted the flow of the argument, while Alex realized it is important to fully understand material before bringing it up as evidence in an argument.

Finally, one thing the podcast highlighted was the importance of diction and rhetorical skill when it comes to rallying the public around an issue. Regardless of how strong the facts are, unless there are advocates who can compel the masses to care, nothing will happen. Sadly, that may be a key issue now at the intersection of science and politics: some politicians are great at rallying crowds, but what they advocate may be detrimental to our world's future. With all of this in mind, the podcast allowed us to delve deeper into this course's material while opening up discussion of larger questions related to our current discourse.
