BPRO 25800 (Spring 2021/Winter 2024) Are we doomed? Confronting the End of the World

With our final project we wanted to tackle the question: what is personality, and how can our understanding of personality be applied to AI? Using a video format, we show how AI, specifically neural networks, behaves in a way similar to the nurture aspect of personality building. Moreover, in our explainer video we show that AI models can already reliably predict our personality traits using just selfies or by tracking our eye movements, in other words inferring huge amounts of information about our personality from very little data.

We started with a simple example of emerging personalities in AI: chess computers. AlphaZero is an AI developed by DeepMind that specializes in learning to play two-player, alternating-move games. It was able to defeat existing top engines while performing far fewer searches per second. AlphaZero’s strength comes from its use of neural networks: rather than being given a set of rules, it learned to play chess by playing against itself millions of times. Its playing style has been described as risky, aggressive, novel, and amazingly enigmatic.
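The self-play idea above can be sketched in miniature. This is our own toy illustration, not AlphaZero's actual method: it learns the small game of Nim (take 1 or 2 stones; taking the last stone wins) with a simple value table rather than a neural network, but the loop has the same shape, play yourself, then update your estimates from who won.

```python
# Toy self-play learner for 5-stone Nim (our own illustrative example,
# not AlphaZero's algorithm): play against yourself, then nudge the value
# estimate of each visited state toward the game's outcome.
import random

def moves(stones):
    # A player may remove 1 or 2 stones; taking the last stone wins.
    return [m for m in (1, 2) if m <= stones]

def self_play(episodes=20000, epsilon=0.1, lr=0.1, seed=0):
    rng = random.Random(seed)
    value = {}  # stones left -> estimated win chance for the player to move
    for _ in range(episodes):
        stones, history = 5, []
        while stones > 0:
            legal = moves(stones)
            if rng.random() < epsilon:
                m = rng.choice(legal)  # occasionally explore
            else:
                # A good move leaves the opponent in a low-value state.
                m = min(legal, key=lambda m: value.get(stones - m, 0.5))
            history.append(stones)
            stones -= m
        # The player who made the last move won. Walk back through the
        # visited states, alternating the outcome between the two players,
        # and nudge each state's estimate toward what actually happened.
        outcome = 1.0
        for s in reversed(history):
            v = value.get(s, 0.5)
            value[s] = v + lr * (outcome - v)
            outcome = 1.0 - outcome
    return value

value = self_play()
```

After enough episodes the table discovers, with no rules supplied, that being left with 3 stones is a losing position while 1, 2, 4, or 5 stones are winning ones.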

Next, we tackled a more complex issue: visual cognition. Exploring the history of CAPTCHAs, we realized it provides a lot of insight into what AI can accomplish with a large enough dataset. What began as a simple Turing test to distinguish humans from bots using warped text produced massive training datasets from CAPTCHA responses, which quickly made the bots better than us at that very task. Google then switched to transportation images to help train its driverless cars, but the bots are already outperforming us there too. Now CAPTCHAs are becoming non-interactive, distinguishing us from bots by how we interact with websites and media; in other words, evaluating us not on answers but on patterns of behavior.

We then explored the implications of AI taking a unary rather than binary approach, as described by Stuart Russell. The first consideration is the so-called black box of AI. After being coded, neural networks rearrange themselves to accommodate new information, strengthening or weakening the connections between their units much as our brains strengthen or weaken the paths between neurons. The black-box issue has led to initiatives like DARPA's XAI program, which uses AI to help explain AI to us; in other words, one model is coded to analyze and verbalize the other. So far, with chess and Go, the stakes are low, but what happens when we cannot understand AI models and the stakes are high, as with driverless cars, medical procedures, and even court proceedings?
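The "strengthen or weaken paths" idea can be made concrete with the smallest possible network: a single connection weight adjusted by gradient descent. This is a hedged sketch of the general mechanism, not code from any system mentioned above; the data and function names are our own.

```python
# A minimal sketch of how a network "strengthens or weakens" a connection:
# one weight, repeatedly nudged to reduce its prediction error.

def train_weight(pairs, lr=0.1, steps=200):
    """Fit y ≈ w * x by adjusting the single connection weight w."""
    w = 0.0
    for _ in range(steps):
        for x, y in pairs:
            error = w * x - y    # how wrong the current connection is
            w -= lr * error * x  # weaken or strengthen it accordingly
    return w

# The examples implicitly encode the rule y = 2x; no one writes that
# rule down, yet the weight converges toward it from the data alone.
w = train_weight([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```

The opacity follows directly: a trained model is just a large collection of such adjusted numbers, with no human-readable rule attached to any of them.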

And finally, we examined the greater implications of this for society, notably in art and “general intelligence.” When thinking about art, a lot of weight is placed on the intentions of the artist, the novelty of an idea, and the sense that someone is trying to communicate something to us. But how do we interpret art when it comes from AI? We can put a monetary value on these works, but to what extent can we infer personality and expression behind them? If AI models are being entrusted with larger and larger responsibilities, but neither the AI nor its creator can explain why it chose a certain action because of the black box, then how do we assign responsibility in a court setting? Who is on trial? Rather than deciding these questions ad hoc as the moment arises, we need to address them now. What is AI personality? Does it equate to personhood? What happens when AI understands us better than we understand it?

The link to our final project on YouTube is listed above, and our sources are available in the presentation. Thank you!
