BPRO 25800 (Spring 2021/Winter 2024) Are we doomed? Confronting the End of the World

Earthworks: A Short Interactive Experience

We are but ants in the eyes of a greater intelligence. 

Throughout this course, one issue became exceptionally obvious: it's very, very hard to convince people, even extremely smart and well-educated people, of the severe danger posed by particular existential threats. Specifically, the relatively abstract threat of Artificial Intelligence (among other equally pressing threats) received an order of magnitude fewer votes in our weekly polls than climate change for the cause "most likely to doom us." While there are more than sufficient grounds for a discussion of the relative severity of the existential threats covered by the course, such a lopsided perception of the risks facing humanity is a painfully obvious indicator of colossal discrepancies in salience between existential threats. As someone who has been concerned with the existential implications of Artificial Superintelligence for the better part of the past decade, I find it difficult to see past my own biases on the subject in order to understand the perspective of those who would rank climate change as a more pressing threat than AI.

After I brought this concern up during our discussion section, the resounding response was this: threats such as AI and nuclear war are, for whatever reason, not nearly as salient as climate change. Given that I am deeply immersed in sci-fi and surrounded by clearly salient and immensely persuasive arguments about the danger posed by AI, this response was initially somewhat difficult to decipher. How could AI not be a salient threat? Who hasn't seen at least one Terminator movie? But I came to realize two things. First, most people are not exposed to arguments about the existential threat posed by AI, or at least not to very effective ones. For some, the papers we discussed and Prof. Russell's presentation may be the only substantial engagement they've had with the topic. Second, and as is to be expected, the pop-culture discourse on AI existential risk is not particularly persuasive. The Terminator franchise is too absurd for most to take seriously, and the central epistemological revelation of the Matrix is tangential to its backdrop of an AI apocalypse. There are, of course, much better and far more salient treatments of AI – Westworld, Ex Machina – but they sit much further out on the fringe of mainstream pop culture. Conversely, mainstream coverage and discussion of climate change is almost suffocating in its prevalence: the public discourse is completely saturated with it. To not recognize that climate change presents at least some existential risk to humans in 2021 is to live under a rock. From this perspective, it's easy to see why climate change consistently ranks first, by a wide margin, in our weekly polls. And it's precisely because of this disparity that concerted efforts must be made to insert discussions of AI existential risk into the public consciousness.

To this end, I wanted to communicate, as effectively as possible, the nature of the existential threat that AI poses. It seemed clear that the medium in which the argument was presented would be essential to the efficacy of the message: if movies, academic papers, popular articles, and notable speakers failed to convince the class of the severity of the AI threat, it would be naive to assume that repeating the message in those same mediums would make the issue any more salient. Throughout my life, I've found that video games have a sneaky but extremely effective way of presenting a persuasive and nuanced argument or perspective. After all, there are strong reasons to believe that inhabiting a character and literally seeing the world through their eyes is conducive to empathizing with and understanding the challenges they face. Instead of imagining what it might be like to walk a mile in another's shoes, you can actually do it. So what better way to make the case than a short video game experience in which the player can feel the predicament of mankind in the face of Artificial Superintelligence?

Earthworks, a short interactive experience built in Unreal Engine 4, attempts to communicate the existential risk of AI through a metaphor often used to respond to arguments for the potential benevolence of superintelligence. [Spoiler Warning] Set in a park, you first play as a bulldozer operator, completing the simple task of moving large boulders into the back of a nearby dump truck. But, abruptly, your screen fades to black, and you find yourself in the body of an ant in that very field: your objective is to survive. It is, of course, futile – you are quickly crushed by a number of AI-operated bulldozers. The core message is a straightforward one: AI is to mankind as mankind is to ants. While operating the bulldozer, the player is completely unaware of the ants; and even if the player were to see an anthill, such a revelation would almost certainly have no effect on their willingness to carry out even a task as menial and superficial as removing boulders from a small park. Fundamentally, ants do not even enter the calculus of utility or morality behind human decisions. The underlying reasons for this reality are myriad: ants pose no real retaliatory threat, they are difficult to see, they have no way of communicating with us, they possess (relative to us) very little intelligence, and they are unfamiliar or alien in almost every meaningful dimension. Furthermore, we often have strong incentives tied directly to human wellbeing or suffering when we mercilessly kill ants: maybe we're constructing a road to a nearby school so children can receive an education, or maybe we're connecting a remote town to a local hospital so families can better access vital healthcare resources. By that calculus, we should kill the ants. And it is exactly this same reasoning that an AI may apply to our own existence. In this way, I hope to communicate the core message more effectively: in the end, we are but ants in the eyes of a greater intelligence.
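For readers curious how that perspective swap might be wired up, here is a minimal sketch of the transition, assuming an Unreal Engine 4 C++ project. The class and member names (AEarthworksGameMode, AAntPawn, AntClass, AntSpawnTransform, OnBouldersCleared) are hypothetical stand-ins rather than the actual Earthworks source, though the engine calls (StartCameraFade, UnPossess, Possess, SpawnDefaultController) are standard UE4 APIs.

    // A minimal sketch, not the Earthworks source: fade to black, then move the
    // player from the bulldozer pawn into an ant pawn in the same field.
    #include "Kismet/GameplayStatics.h"

    void AEarthworksGameMode::OnBouldersCleared()
    {
        APlayerController* PC = UGameplayStatics::GetPlayerController(GetWorld(), 0);
        if (!PC || !PC->PlayerCameraManager)
        {
            return;
        }

        // Fade the screen to black over two seconds and hold it during the swap.
        PC->PlayerCameraManager->StartCameraFade(0.f, 1.f, 2.f, FLinearColor::Black,
                                                 /*bShouldFadeAudio=*/true,
                                                 /*bHoldWhenFinished=*/true);

        // Once the fade finishes, perform the possession swap.
        FTimerHandle SwapTimer;
        GetWorldTimerManager().SetTimer(SwapTimer, [this, PC]()
        {
            APawn* Bulldozer = PC->GetPawn();
            PC->UnPossess();

            // Spawn the ant pawn (hypothetical AAntPawn class) and possess it.
            AAntPawn* Ant = GetWorld()->SpawnActor<AAntPawn>(AntClass, AntSpawnTransform);
            PC->Possess(Ant);

            // Hand the bulldozer over to its default AI controller so the work,
            // and eventually the crushing, continues without the player.
            if (Bulldozer)
            {
                Bulldozer->SpawnDefaultController();
            }

            // Fade back in from the ant's point of view.
            PC->PlayerCameraManager->StartCameraFade(1.f, 0.f, 2.f, FLinearColor::Black);
        }, 2.f, /*bLoop=*/false);
    }

The point of structuring it this way is that the level and the bulldozers never change; the only thing that changes is whose eyes you see the park through, which is the whole metaphor.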
