In week 4, I wrote my memo starting from the idea that I have a difficult time conceptualizing the existential threat posed by AI, particularly compared to the threats of nuclear armageddon and climate change, which I find incredibly salient. I posited that this was at least in part because films about climate change and nuclear war play off the reality of those situations, whereas AI poses no comparable present-day threat, and whatever threat it may pose in the future is likely a far cry from the dystopian robot overlords of Terminator and I, Robot.
With my project I wanted to extend these ideas and look into academic writing on salience and risk perception as they pertain to these existential threats (AI, climate change, and nuclear war). Through this I was able to confirm my earlier suspicions: not only does media heavily influence the ways in which people conceive of these existential threats, but the highly imaginary, fictional understanding of the threat posed by AI is widespread in the general population. The most important realization I had while writing this piece came from a quote in one of the studies I read: "Salience exaggerates people's propensity to act in whatever direction they already would tend to act." In other words, for someone who has anxieties about the future of AI, as much of the general public does, movies like Terminator and The Matrix only serve to further amplify those anxieties, despite the fact that the films are grounded neither in the reality of today nor in the likely future of AI.
(Citations are included at the end of the PDF of my op-ed.)
Thanks!