BPRO 25800 (Spring 2021/Winter 2024) Are we doomed? Confronting the End of the World

A Daily Log


This project is a daily log written from the perspective of a Russian industrial spy. I have named him Arkhipov, after Vasili Arkhipov, the Soviet naval officer who averted a nuclear strike during the Cuban Missile Crisis of 1962.

As the log begins, Arkhipov is facing a difficult situation. Owing to a raging global pandemic in the year 2035, he has been locked down for months within the American AI company he is spying on. His cover is under threat and his suspicions are growing by the day.

At first, he sees his task as simply fetching trade secrets for his nation. But as time passes, the intimate setting of the locked-down company begins to reveal hidden mysteries. Arkhipov realises that something far more sinister is going on, and as the threat of discovery heightens, he begins to face serious ethical dilemmas. In particular, he wonders whether his discoveries should be revealed to his own country at all.

The AI Company: Silver Inc.

The company at which Arkhipov works is named ‘Silver Inc.’. It is an advanced AI company that already has many scandals to its name: coercive algorithms, electoral interference, advanced deepfakes, and other questionable practices. Arkhipov’s role is to acquire knowledge of Silver Inc.’s progress and to report it back to Russia, securing his nation’s competitive edge.

When the CEO asks Arkhipov to assist the company in solving theoretical containment issues, Arkhipov catches on: Silver Inc. is creating, or has already created, a superintelligent artificial general intelligence. It is called ‘Minerva’.

As Arkhipov makes this discovery, he spirals into existential angst, musing on the profound ethical, political, and economic dangers this poses. He asks himself whether the system is already in operation (Tegmark, 2017), whether it is appropriately contained, whether it is value-aligned (Yudkowsky, 2016), and, most importantly… who he can possibly turn to.

The Pandemic: ‘The Bleed’

As the log opens, the pandemic known as Levir-3 has paralysed the global economy. It is for this reason that Arkhipov is largely trapped: leaving the base would likely require millions of dollars or a state-organised rescue. Levir-3 is a highly infectious and lethal respiratory disease, with an R number of 10 and a daily US death toll of 6,500, a figure that rises rapidly over the course of Arkhipov’s logs. Levir-3 is un-affectionately nicknamed ‘the bleed’, a reference to its onset, which is often heralded by a sudden bout of nosebleeds.

The government has shared little information about the virus, but it has instituted an intensive stay-where-you-are lockdown, policed by drones, with food rations dropped at residential locations. Arkhipov regularly goes without resources in short supply. While the virus supposedly originated from horses in eastern Mongolia, the head of the CDC doubts this theory, arguing that viruses simply take too long to become this lethal after jumping to human hosts.

The virus and the technological risk compound each other in this story. For one thing, the virus distracts global governments from their oversight efforts: it is no coincidence that Minerva has been created during this time. Moreover, the virus constrains Arkhipov’s ability to escape or manoeuvre in the world. He is forced to contend with profound ethical challenges in isolation, lamenting the absence of global governance or reliable regulatory bodies.

Challenges Presented

Aside from the “risk-multiplying” effect of a global pandemic, the story highlights several challenges in the fight against existential risk from AI.

  • The Containment Problem: Without giving too much away, Arkhipov’s daily log highlights the profound challenge of containing a superintelligent AI (Babcock, Kramár, & Yampolskiy, 2016). In the story, Minerva uses a combination of technical and psychological approaches to plan its escape. Even Arkhipov, an expert in these issues, struggles to account for every possibility.
  • The Surveillance Problem: Second, the story highlights the problem of surveillance. Technological dangers often emerge at a microcosmic level: small events in small companies can have lethal global impacts. Arkhipov is a spy who was locked down with Silver Inc. for months before even discovering the superintelligent system within its walls. The level of surveillance required to police an issue like this seems virtually impossible to achieve. The reader is left to ask: what does oversight mean in the context of AI risk? (Ord, 2020)
  • The Response Problem: Third, the story highlights Arkhipov’s loneliness. Much like his historical namesake, he has no one to turn to in his lone effort to protect the world from the danger he has discovered. He fears a superintelligence no matter who possesses it, whether Silver Inc. or his home country. This leads us to ask: if someone does uncover an existentially dangerous technology, what international body can they safely turn to? (Torres, 2018)

Value

The value of this project lies in its semi-realism. While the story is at points outlandish, it highlights a very real, almost farcical aspect of existential risk: an individual can be looking straight at a problem like this, can have it in their very own building, and still have no idea what to do. This, more than anything, highlights a vulnerability in our present global planning for existential risk: it relies too much on individuals. Arkhipov has no recourse: having discovered the superintelligence during a global pandemic, he finds his own life instantly in danger. He must bear the full burden of existential risk on his own.

As Arkhipov’s name emphasises, this is not the first time this has happened, and it will not be the last. To combat existential risks, our faulty system relies on the presence of intelligent, conscientious actors at every turn. Arkhipov’s name recalls this problem in nuclear risk: our nuclear failsafes rely in many cases on the individuals involved being sane, infallible, and consistent (Baum et al., 2018). The story itself highlights the same problem in technological risk: its microcosmic nature means surveillance once again relies on individuals, this time individuals who just happen to be close by when the product is created (Bostrom, 2019). The backdrop of the story highlights the problem in pandemic risk: suspicion around the origins of Levir-3 stems from the possibility of an intentional or accidental lab leak (Millett & Snyder-Beattie, 2017). Preventing such a leak once again relies on individuals being trustworthy and reliable in their scientific protocols.

Overall, then, this story is about the burden of existential risk falling to the few and not the many. It is the tale of a lone individual, who may someday yet exist, uncovering an existential threat and facing it entirely alone.

Bibliography

Babcock, J., Kramár, J., & Yampolskiy, R. (2016, July). The AGI containment problem. In International Conference on Artificial General Intelligence (pp. 53-63). Springer, Cham.

Baum, S., de Neufville, R., & Barrett, A. (2018). A model for the probability of nuclear war. Global Catastrophic Risk Institute Working Paper, 18-1.

Bostrom, N. (2019). The vulnerable world hypothesis. Global Policy, 10(4), 455-476.

Millett, P., & Snyder-Beattie, A. (2017). Existential risk and cost-effective biosecurity. Health Security, 15(4), 373-383.

Ord, T. (2020). The precipice: Existential risk and the future of humanity. Bloomsbury.

Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf.

Torres, P. (2018). Superintelligence and the future of governance: On prioritizing the control problem at the end of history. In R. V. Yampolskiy (Ed.), Artificial intelligence safety and security. Chapman and Hall/CRC.

Yudkowsky, E. (2016). The AI alignment problem: Why it is hard, and where to start. Symbolic Systems Distinguished Speaker Series, Stanford University.
