Last week’s debate concerned the prompt “AI Will Bring More Creativity and Innovation than Risks and Dangers”, with Colby, Fady, and Risabh affirming the claim and Louisa, Sophie, Seth, and me opposing it. From the beginning, I was surprised by how much opinions varied on specific topics related to AI, even within the teams. Both sides agreed that AI in its current state risked spreading misinformation and infringing on people’s rights, and both were aware of AI’s limitations in producing ‘creative’ results. However, the affirmative side generally agreed that these issues could be solved as AI improved and progressed, and cited cases within the field of science where AI did, in fact, advance the field in areas where humans either could not, or were less equipped to. The oppositional side was more wary of the idea that these issues could truly be put to an end, and focused more on current instances of people spreading false information or having their work exploited through AI.
Colby raised an interesting point in his opening about making generative AI similar to Linux, which, from what I understood, involved making generative AI an open-source resource while putting in place some kind of rules or controls on how it could be used or implemented. The debate quickly went in a direction where I felt asking questions about it would seem distracting or irrelevant, but I was interested in learning more about what something like open-source GPT distributions would look like. Risabh was particularly focused on the potential of AI innovation and creativity within science, and pointed out contributions AI had already made to the field.
However, multiple members of the oppositional side, along with some people from the affirmative team, were unsure how applicable this case would be to things like AI image generation or AI-written narratives. Some, like Louisa, referred to the Alikhani et al. (2023) reading from last week, which found evidence that AI-generated descriptions and responses tended to be more observational and focused on concrete aspects, while human-generated ones had a relatively higher tendency to describe feeling and mood. I pointed out that science had quantitative standards for its results, while standards for images and stories were more qualitative. While it was interesting to consider AI’s creativity in terms of its capability to solve problems humans struggle with, the debate did not reach a strong agreement on how to treat this information, which led to a discussion on how to define creativity.
The definition of creativity is itself a topic people could spend a long time debating. Some took it to mean the potential to create something or to give new understanding to a thing, while others emphasized an aspect of novelty or experimentalism inherent to creativity. However, there seemed to be a general understanding that creativity and innovation were interconnected, and that creativity contributed to innovation. Ultimately, we were not able to settle on one definition of creativity, though we came to understand each other’s interpretations of it and to see how those interpretations shaped each person’s arguments about the potential of AI.
Another highly debated topic was the societal implications of generative AI use. The oppositional side generally held the belief that efforts to counteract issues with AI, such as infringement of intellectual property and generation of misinformation, would always end up playing catch-up as people found ways to subvert safety controls. The affirmative side believed that even though these issues were currently prevalent, that did not mean they would still be five or ten years from now. I expressed that, at least in the short run, there was not a strong enough effort to counteract the dangers of AI within the next two or three years, but given that technology progresses quickly, and sometimes unexpectedly, it would be hard to say what generative AI would look like in the long run. Some members of the affirmative suggested that government or military sponsorship of generative AI could allow for greater support in implementing safety measures, as the military had historically enabled a great deal of progress and breakthroughs in new technology. Seth, however, believed this might raise issues of public security, especially if both the military and the public were allowed to use the same AI. All sides agreed, though, with Mozilla’s open letter that banning or limiting AI was not the best way to solve the issues AI may bring.
Overall, I think the debate took a wider scope than expected, as we began discussing public policy, moral, and philosophical issues relating to the creativity of AI. I went into the debate with strong concerns about the potential of generative AI to cause negative changes in the creative industry, especially as someone who is involved with local, independent artists and who is receptive to their fears. However, I also realized my perception of an average person’s understanding of AI was skewed by my personal experience studying machine learning. I likely overestimated how skeptical an inexperienced person would be when receiving assistance from an AI. Additionally, the affirmative made a strong point that the future of AI may be brighter, especially considering how widespread it has become and the legislation other countries have passed to attempt to control the risks. I have a little more hope now that protections will be instituted to help prevent the potential dangers of AI, and a reinforced belief that the public needs to be educated on how AI operates. This debate has also given me stronger faith in AI’s potential for innovation in STEM fields, though I remain extremely skeptical about its benefits for artistic endeavors. Considering the struggles creatives have always experienced in maintaining their livelihoods, even before the prevalence of AI, and the general lack of public understanding of AI, I remain convinced that, at least in some fields, AI does more harm than good.