Will AI bring more creativity and innovation than risks and danger?
In considering this question, the affirmative side began by partially reshaping the question. As I was the first person to voice my opinion, I will begin with the way I reframed it. I conceded that AI does not currently produce much creativity or innovation; however, I also argued that it does not currently produce much danger either. I therefore shifted the argument toward the future, where much of the possible danger and innovation lies. From this footing, I argued that we can shape the future to our liking by cultivating an environment that prevents the dangers and risks of AI while fostering innovation in the AI space.
The next person to speak in the affirmative followed a similar line of reasoning. Fady argued that the current dangers of AI are blown out of proportion and that there is little evidence that LLMs pose a present risk to society, thereby supporting the claim that the current risks and dangers of AI are minimal.
Rishabh built on this point, structuring his argument around the idea that risks can be minimized: they are short-term problems that a regulatory framework can mitigate. He further argued that these risks are not inherent to AI but depend on how AI is used; thus, if AI is properly managed, it carries no risks.
Ultimately, everyone who argued the affirmative agreed that the current risks of AI are minimal and that future risks stem not from any attribute of AI itself but from the ways it could be used. The affirmative thus argued that the trajectory of AI, and of its risks, can be altered through regulation and control.
While this argument made a compelling case for why the future dangers of AI can be mitigated, it did not clearly explain why AI poses few current risks. The negative side, however, failed to attack this gap in the affirmative’s argument and instead outlined the future risks of AI.
Louisa presented the first argument for the negative side. She argued that AI, and LLMs in particular, will come to replace crucial cognitive activity. LLMs differ from calculators, she argued, because they are used to solve more complex problems and to replace original thought. Ultimately, Louisa concluded that LLMs, and AI as a whole, present a great risk to original ideas.
Atticus shared a similar concern about AI. They argued that AI presents a great risk to creative spaces and that it already devalues creative work by using creators’ output to train LLMs without their consent. While Atticus conceded that this was not necessarily a risk, they argued that it was a moral issue, as it exploits creative people and infringes on their rights. Although Atticus did not explicitly connect this moral issue to future risks, the implication was that it threatens the livelihoods of creative communities.
Sophie kept the focus on creativity but turned it toward AI’s own lack of it. She argued that AI mimics the human creative process but lacks the “human touch”: there is no real intelligence in AI, and it cannot think outside the box. Further, she argued that AI is goal-oriented, which will prevent it from making any discoveries. While this is a fair analysis of AI as it stands, it does not consider AI’s potential. Given that Sophie’s colleagues were focused on future risks, it seems inconsistent to deem AI uninnovative on the basis of its current abilities while deeming it dangerous on the basis of its future ones. Sophie’s contribution thus did not fit logically within the larger structure of the negative’s argument.
Finally, Seth concluded with some compelling concerns, both current and future. He stated that he agreed with open AI policies on the whole but worried about a world without standards and procedures. Seth conjured a vision of a world in which AI is used to shorten the kill chain of decisions, so that fewer humans are involved in important choices. He worried that such a world would leave control in fewer hands, concentrating power even more than it is today. Along with this, Seth expressed concern about how AI could be used to spread misinformation.
These were the most valid concerns raised in the debate; even so, they were not grounded in present conditions. Misinformation is already widespread on social media without AI, and we have not yet seen AI meaningfully worsen it. Likewise, AI is probably not yet being used to shorten the kill chain of decisions: multiple papers document a widespread tendency to distrust algorithmic decision-makers in favor of humans, even when the algorithms are more accurate. It is thus reasonable to conclude that if humans are ever replaced by AI for certain decisions, it will be because the AI is nearly perfect at decision-making and far better than a human.
A tangential but related concern is AI developed not to make an educated decision but simply to follow orders. If that were how AI were built, it would present a very real danger.
In any case, AI likely poses no present risk within the kill chain of decisions. Seth’s concerns were therefore, like those of his colleagues, focused on future possibilities rather than the present situation. In short, the negative side failed to exploit the flaws in the affirmative’s opening statements and instead argued within the frame the affirmative had introduced.
This pattern of arguing within the affirmative’s frame continued into the back-and-forth section of the debate. There, the affirmative continued to press the question of how regulation could be set up to prevent the risks of AI and pushed the negative to consider whether those risks are inherent to AI or stem from external causes. The negative side could not answer these questions and instead raised other possible causes for concern. Much of the back-and-forth thus consisted of the negative relying on peripheral arguments about smaller details while the affirmative pressed its narrative that the risks can be mitigated because they are neither inherent nor current.
Ultimately, I believe the affirmative won this debate. While I was not a neutral bystander, I think it was clear that the affirmative had a well-structured argument that shaped the form of the debate, and that the negative failed to expose the flaws in its foundation. I agree with the affirmative’s argument; however, I believe it would be stronger if it more clearly articulated why the present risks of AI are not actually risks.