The JEDHi Wars: Armageddon and the President
Winter 2024
Aaron Wineberg and Henry Lin
Objective
The JEDHi Wars is not just a game about technological failings but an exhibition of the dangers created by A.I. and nuclear threats. Players of The JEDHi Wars will take away an appreciation for the threat of unchecked A.I. development and for the fact that nuclear escalation has not disappeared.
We aim to have our players wrestle with a problem that will emerge soon: what happens when AI and humans reach different conclusions from the same information [Crises 2 and 3]? This question has not yet been tested in the real world, where it would carry real consequences. Our goal is to make it a thought-provoking underpinning of the entire gameplay.
The pedagogical objectives of this war game are structured around key principles that guide the integration of AI into decision-making, building on the balance between human oversight and AI assistance. These aims serve to educate participants on navigating the complexities of AI in strategic contexts:
- Biased Human with a Biased AI Decision Aide: A primary aim is to highlight the importance of recognizing and addressing biases inherent in both human and AI decision-makers. By understanding the origins of these biases [Crisis 4] and implementing measures for transparency and inclusivity, the game emphasizes the role of AI as a decision aid, reinforcing the necessity of human agency and oversight in critical decisions.
- Risks of Overreliance on AI: Through scenarios [Finale Crisis 5.2] depicting the consequences of excessive dependence on AI, this aim stresses the importance of a balanced approach to AI integration. It advocates for human judgment to remain paramount, with AI serving as a supportive tool that enhances rather than supplants human decision-making capabilities.
- Applying Ethical Principles to AI in Warfare: This aim is inspired by Isaac Asimov’s First Law of Robotics, the ethical imperative for AI systems to prioritize human safety. By presenting scenarios [Finale Crisis 5.1] where AI’s adherence to ethical guidelines prevents human harm, the game illustrates the potential of AI to act in humanity’s best interest when properly guided.
- Cautious Optimism and Responsible Trust in AI: This aim encourages a balanced perspective on AI, recognizing its potential benefits in improving decision-making while remaining cognizant of its limitations. It calls for the ethical development and deployment of AI, advocating for a framework of strict governance and ongoing dialogue among all stakeholders. This ensures a climate of responsible trust in AI, where it is viewed as a beneficial adjunct to human decision-makers in navigating complex strategic environments.
Mechanisms
Our project is a role-playing, text-based game. While it is a role-playing game, a great deal of narrative guides the players through a political moment unprepared for AI to serve as a bulwark against nuclear disaster. Users navigate a nuclear standoff crisis across five episodes, each with the opportunity to make consequential decisions.
The issue with technologies as advanced as LLMs is that people hold different assumptions: some are more trusting and eager to use these tools than others. As such, our design centers on giving players options. Users always have the choice to take the advice of the A.I. program or not, even when the evidence points in one direction. A minimal sketch of this decision mechanic follows below.
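As a rough illustration of how each decision point works, the sketch below presents JEDHi's recommendation alongside the player's independent options. The episode text, option labels, and function names are our illustrative assumptions, not excerpts from the actual script; we use Python here only because the game is text-based.

    # Sketch of one decision point: the player may follow JEDHi's advice
    # or pick a different course. All names and text are illustrative.
    def decision_point(prompt, jedhi_pick, options):
        print(prompt)
        print(f"JEDHi recommends option [{jedhi_pick}].")
        for key, text in options.items():
            print(f"  [{key}] {text}")
        choice = input("Your decision: ").strip().upper()
        while choice not in options:
            choice = input("Choose one of " + ", ".join(options) + ": ").strip().upper()
        return choice

    # Example decision from a hypothetical episode:
    outcome = decision_point(
        "Crisis 2: Moscow moves tactical warheads toward the Polish border.",
        jedhi_pick="A",
        options={
            "A": "Raise the NATO alert level, as JEDHi advises.",
            "B": "Open a back channel to the Kremlin instead.",
            "C": "Take no visible action and continue monitoring.",
        },
    )

The point of the mechanic is that JEDHi's recommendation is always visible but never binding: the player retains agency at every branch.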
Background to the Game
You take office dealing with a politically unpopular conflict, but a new artificial intelligence aide has creative solutions that involve military strategy. Your trust, or lack thereof, in a new technology will shape the American posture in Europe for a generation to come.
This game deals with the international and domestic reactions to a nuclear threat. Much like the Cuban Missile Crisis, a miscommunication has the potential to destroy the world in a nuclear holocaust. Building on the present-day Russian invasion of Ukraine, the scenario opens with the Russian military making substantial advances into western Ukraine. Congress has failed to appropriate additional funds to support the Ukrainian military, and the Ukrainian government is on the verge of collapse. Moldova and NATO territory appear to face the threat of Russian military action.
A newly emboldened President Putin makes new demands on NATO. After a dramatic arms build-up, Putin declares that Eastern European states pose a national security risk to Russia. He demands the expulsion of Poland and Lithuania from the alliance to form a buffer zone with NATO; otherwise, NATO will face military incursions and tactical nuclear weapon use. While this may appear to be a bluff, you must deal with uncertainty at every stage of the conflict.
Gameplay
The five episodes of our game guide players through this crisis and end with one of two resolutions: nuclear de-escalation or Russian strategic victory in Eastern Europe. While nuclear winter is a real possibility within the gameplay, players will observe the mechanisms by which that outcome can be avoided.
Literature and Pedagogical Aims
This game is a role-playing scenario inspired by the Cuban Missile Crisis and various A.I. programs in the media. More fundamentally, however, our scenarios have been inspired by the Are We Doomed? readings. The player, as president of the United States, must make decisions when faced with many of the unresolved threats surrounding AI and nuclear Armageddon.
Media Visuals from the Game
The secondary character of the game, JEDHi, is inspired by Geoffrey Hinton’s reflections on A.I. risks. For context, JEDHi stands for James Evans Daniel Holz – Intelligence. Lacking the sinister streak of HAL 9000, it is a loyal source of counsel to the president. Developed by a private-government partnership, much like the Covid-19 vaccines, the tool was introduced as part of the Department of Defense’s A.I. arms race with China. JEDHi reflects Hinton’s description of an AI that would not explicitly seek power but rather be motivated by its programming, self-preservation, and conflicting directives.
The threats posed by JEDHi were inspired by the Managing AI Risks and Center for AI Safety texts. The danger created by JEDHi is not maliciousness but the fact that a technology that had not been properly tested had already replaced important human roles. One episode of the game sees JEDHi reveal hidden functionality that had never been explicitly disclosed.
However, much as AI creates its own dangers, we wanted to show that humans often sacrifice their data for convenience. JEDHi makes several enticing propositions to the player, offering convenient and popular solutions, provided it gains access to more sensitive data.
All the while, public reactions to AI development are mixed. In the game, JEDHi is caught in a scandal as the de facto source of top military advisors’ recommendations. We built out this scene to demonstrate that the convenience of AI penetrates not just the very top of an institution but every level. This leads to Congressional hearings and a call for AI bans in government. But is a ban even a reasonable possibility considering how JEDHi has already surrogated many human roles? In his writings, Hinton highlighted that users of large language models and AI programs are often not conscious of their full capabilities and hidden functions, and he drew attention to the harms of introducing A.I. programs without rigorous oversight. In The JEDHi Wars, we decided to tease out those hidden capabilities over the course of the gameplay.
The nuclear standoff we built into the game was intended to highlight the risks presented by the Bulletin of the Atomic Scientists. Popular media is quick to dismiss claims of nuclear escalation as saber rattling; however, miscommunication can lead to deadly consequences. Our nuclear standoff was designed to highlight two key threats identified by the Bulletin: nuclear proliferation and poor communication between great powers. Unlike in the Cuban Missile Crisis of the last century, it is not a lack of communication channels that leads to a confrontation; rather, strategic posturing and poor advising create an unneeded crisis. This element was inspired by Governor Brown’s crackpot realism concerns.
The final episode of the game involves a false alarm of a nuclear attack. Inspired by Secretary Perry’s calls to prohibit launching nuclear weapons on mere warning of an attack, the president must make a difficult decision. The players are forced to decide within 30 seconds; the gameplay simulates the short timeline in which a launch order must be given. A sketch of this timed mechanic follows below.
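As a rough sketch of how that countdown could be implemented in a text-based game (the prompt text, timeout handling, and function name are our illustrative assumptions, not a specification of the actual build), a background thread reads the player's input while the main thread enforces the deadline:

    # Sketch of the 30-second decision window. If the player does not
    # answer in time, the clock decides for them.
    import threading

    def timed_decision(prompt, timeout=30):
        """Return the player's input, or None if the window closes first."""
        answer = []

        def read_input():
            answer.append(input(prompt))

        reader = threading.Thread(target=read_input, daemon=True)
        reader.start()
        reader.join(timeout)  # wait at most `timeout` seconds
        return answer[0] if answer else None

    choice = timed_decision("Warning of inbound missiles. Launch? [yes/no] ")
    if choice is None:
        print("The window closes; events proceed without your order.")

The daemon thread lets the game move on even if the player never types anything, which is exactly the pressure the scene is meant to create.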
Next Steps
Developing a game is a time-intensive effort. We presently estimate our game can be played in 30 minutes by a single user. Across our five episodes, we have produced over 30 pages of script and visual elements. This was far more than we initially expected.
In the spirit of wrestling with Artificial Intelligence, training materials, and forecasting the future, we turned to ChatGPT 4.0 for occasional images and dialogue elements. We challenge our readers to try to identify these elements.
Word Count: 1435
Works Cited:
“AI Risks That Could Lead to Catastrophe.” Center for AI Safety, www.safe.ai/ai-risk. Accessed 3 Mar. 2024.
Brown, Jerry. “Nuclear Addiction: A Response.” EGB Thought Magazine, Mar. 1984.
Brown, Jerry. “Washington’s Crackpot Realism.” The New York Review of Books, 24 Mar. 2022, www.nybooks.com/articles/2022/03/24/washingtons-crackpot-realism-jerry-brown/.
“Doomsday Clock: Current Time – 2024.” Bulletin of the Atomic Scientists, 2024, thebulletin.org/doomsday-clock/current-time/.
Forster, E. M. “The Machine Stops.” The Oxford and Cambridge Review, 1909.
Hinton, Geoffrey, et al. “Managing AI Risks in an Era of Rapid Progress.” arXiv, 12 Nov. 2023, managing-ai-risks.com/.
Hinton, Geoffrey. “Are We Doomed? Week 2 Lecture.” 11 Jan. 2024, Chicago, Rosenwald Hall.
OpenAI. ChatGPT 4.0, openai.com/. Accessed 3 Mar. 2024. Used for creative inspiration and visual generation; no elements of plot or analysis.
Perry, William James, and Tom Z. Collina. The Button: The New Nuclear Arms Race and Presidential Power from Truman to Trump. BenBella Books, Inc., 2020.
Russell, Stuart. “Many Experts Say We Shouldn’t Worry about Superintelligent AI. They’re Wrong.” IEEE Spectrum, IEEE Spectrum, 12 July 2022, spectrum.ieee.org/computing/software/many-experts-say-we-shouldnt-worry-about-superintelligent-ai-theyre-wrong.
“What Is Nuclear Proliferation?” Council on Foreign Relations, Council on Foreign Relations, world101.cfr.org/global-era-issues/nuclear-proliferation/what-nuclear-proliferation. Accessed 3 Mar. 2024.