BPRO 25800 (Spring 2021/Winter 2024) Are we doomed? Confronting the End of the World

      For this project, we wanted to put a new spin on the material we learned about in class. We have learned so much about the existential threats we face, and we thought a fresh way to engage with that material was to frame it positively. We hoped to present an optimistic view of the future: that we are not doomed, and in fact are less doomed than ever before. To present this view, we decided to make a podcast. In this format, we could take a conversational tone and present multiple perspectives quickly and effectively. After all, the conversation around existential risk is itself rapidly evolving. We also hoped the podcast would offer a sense of levity, so we added humor throughout. These issues are, of course, incredibly important and deserve to be treated seriously, but the podcast format, and the humor it allows, offered a great opportunity to have fun with the class material in a way that had not been undertaken in the class before.

     First, we discussed the value of nuclear energy in creating a sustainable future free from climate change and pollution. Energy is one of the biggest threats to human survival because so much of our energy production relies on fossil fuels, which harm the natural environment through waste and through carbon emissions that trap heat and warm the Earth. Luckily, we have access to thorium reactor technology that circumvents the drawbacks of coal, oil, and natural gas. Further, carbon capture and storage technology could let us actively reverse the effects of man-made global warming at a rate we are not capable of today. Together, thorium nuclear reactors and carbon capture could make our energy system carbon negative. Thorium reactors are promising because they can use the waste from conventional plutonium-uranium reactors as fuel, and the waste they produce can in turn help fuel those conventional reactors. This self-sustaining nuclear ecosystem makes thorium a strong candidate for parallel development alongside traditional systems: during the transitional period, the waste from each type of reactor can feed the other, creating a largely closed and sustainable energy production process. Through a commitment to nuclear energy, we can build reactors far safer than those at Chernobyl and Fukushima, whose accidents were themselves rare, freak events.

Increasing nuclear energy production allows us to move away from using fossil fuels.

     One of the main reasons we talked about being doomed in this class was pandemics, and coming off one of the scariest pandemics in human history, there is good reason to be worried. However, we wanted one of the main points of our podcast to be that pandemics and biowarfare do not doom us, for a few reasons. We learned through the pandemic that social distancing and wearing masks work quite well to prevent the spread of a virus. Social distancing can be applied to any virus we face in the future, and as a whole we adjusted to living socially distant lives quite well. Never before in human history was it possible to deliver food, educate the population, and even vote at such a large scale while remaining socially distant. Technology has allowed us to maintain some semblance of a normal life even while we quarantine in our own homes. There has also been incredible progress in vaccine development. Covid-19 vaccines were in phase 1 trials by mid-March 2020, the same month the virus was declared a pandemic. Further, mRNA vaccine technology promises to rapidly accelerate vaccine development in the future and to help us better understand how vaccines can be improved. The staggering success of the Pfizer and Moderna vaccines so far demonstrates this point, with both showing above 90% efficacy in many studies.

     With respect to biowarfare, which is a similar existential threat in many ways, we discussed why it is an unlikely threat for two reasons. First, a biological weapon is extremely risky to use, since it can easily spread to your own population; if we presume that Covid-19 indeed started in a lab, it is a striking example of how uncontrollable a biological agent can be. Second, we can defend against a biological agent the same way we defend against a naturally occurring virus: with social distancing and similar measures like wearing masks. All in all, a pandemic or biological weapon does not pose a severe existential threat to humanity. Furthermore, recent medical advances offer enormous promise for humankind. Three of the most important ones we talk about in the podcast are genetic vaccines, wearable medical technology, and protein network research. In a pragmatic sense, these technologies will allow the innovators of humanity to live longer, fuller lives and to tackle many of the existential issues we have covered in class. On a more individual level, these innovations will in time improve the quality of life for billions around the world and help them lead happier, healthier lives. On an individual level, we have never been less doomed.

Vaccine development and distribution in the US make us hopeful for rapid pandemic response in the future.

     Artificial intelligence is another potential threat to humanity, but thanks to intense scientific research since the invention of the modern computer, we know that the most dangerous threats posed by AI can be prevented, because the tools computers would need to attain dominance over mankind remain largely theoretical or impractical and have many weaknesses. Put simply, computers are not going to gain sentient intelligence on their own and will never experience consciousness the way we humans do. Because of this, we must align computers' preferences with human values and build safeguards so that malicious humans cannot alter an AI's internal framework and create a rogue system. Because value alignment is essential to creating AIs that will not harm humans, intentionally or by accident, we must also monitor attempts by terrorists to build human-hostile robots programmed as killing machines. Today we have drones that can do the killing for us, keeping our soldiers out of harm's way, but all of these drones are human-controlled and do not target people indiscriminately; they strike carefully pinpointed locations. Humans stand to benefit from computers far more than they stand to be harmed. The increased interconnectedness, productivity, and leisure that automation provides will carry us into a new era of freedom and learning, thanks to the free time computers create. The only remaining concern is ensuring that the rewards of automation go to humanity as a whole, rather than only to those with concentrated capital who own the means of production.

     We hope this podcast shines a positive light on the material covered in class, because there is a great deal about modernity to be optimistic about that connects to the class themes. We hope to faithfully present an optimistic answer to the quintessential "Are we doomed?" question, and we hope you enjoy the podcast.
