Debate Two focused on the risks and dangers posed by the rise and popularity of artificial intelligence and large language models, as well as on questions of creativity and innovation. This debate seemed less polarized than the first debate on the use of AI in higher education. I concluded that both sides agreed on many issues and that the real question was how best to regulate AI.
The affirmative side opened by arguing that AI would bring more creativity and innovation than risks. They acknowledged that there are risks involved in using AI, but argued that many of them are short-term and that, with proper regulation, the issues would be minor. They grouped these risks into three categories: harm to people, harm to organizations, and harm to the ecosystem. Harm to people is quite straightforward; an example would be using AI to scam another person. Harm to organizations means using AI to exploit a system, potentially to commit fraud. Lastly, harm to the ecosystem is a broader term, but it can be defined as using AI in ways that damage the natural environment, such as through the electricity required to power the servers that run LLMs. These risks sound quite dangerous to humanity, but the affirmative side argued that the alternatives are just as risky as AI.
The opposing side argued that AI would pose more risks and dangers than it would bring creativity and innovation. Their opening arguments were that people will rely more and more on AI, leading to a degradation of the thought process unique to humans; that industries focused on creative work would be put at risk by the increased use of AI; and that AI could be used for violence. Another key point concerned how AI and LLMs work: because they rely on training data to produce their output, it is difficult for them to create genuinely novel work. The negative side raised some very valid points here. They mentioned that writers and artists are already having their works fed into training data without their permission. From the initial research for our group project, I am aware that courts in the United States have not yet ruled on these cases. However, this raises interesting implications and questions about what should be considered copyright and IP infringement. Another point was the use of AI in the military domain and how AI could be used to automate actions, thus limiting human involvement during wars or arms races.
I agree with points on both sides of the argument and firmly believe that both are important to identifying the best way to adapt to AI. The affirmative side had a very valid point that some of the risks raised now could be resolved with government regulation. That said, the opposing side raised an equally valid objection. Governments tend to be quite slow to regulate, and government officials are often uninformed about what is actually happening. Social media sites such as Facebook, Twitter, and TikTok are an example: they have been around for quite some time, yet only recently has the government begun to intervene in issues concerning censorship, privacy, and the like, and even now it does not seem to have reached a conclusion. However, banning the technology does not seem to be the right move either. It is important for the corporations and individuals creating and using AI to understand the ethical issues and to inform the public about the best ways to use the technology.
One point that was mentioned was the idea of open-sourcing AI so that everyone is given the opportunity to use the technology. Although I believe it is important to let others use the technology, I wonder whether everyone would use it responsibly. For example, will corporations use it to replace humans, or will they use it to gain the upper hand in a market? Both of these could result in catastrophic outcomes.

The opposing side also made a very good point on the dangers of AI: its use for military purposes and violence, and the automation of decisions. This was a point that had not even occurred to me. I shudder to think of a world where decisions that could wipe out millions of people are made by computers. Although one could attempt to program an AI to make decisions based on human ideologies, I am sure there would be many failures. Humans can also make bad decisions, of course, but there is something “human” to them. More specifically, machines cannot feel “empathy,” whereas humans tend to relate to other humans and may hesitate or attempt to avoid conflict. Another way AI could be used to harm others is by creating dangerous pathogens or weapons. This is definitely a scary thought, and I truly hope there are not people willing to do this; still, it is important to raise these issues now and set up preventative measures. This is a case where open source is perhaps not the best idea: limiting who has access to these models, to ensure they are not used with malicious intent, may help. This concern, I believe, could potentially be addressed through governmental and corporate intervention. In its day, the internet was also an incredible innovation, yet it is quite difficult to search the internet for instructions on creating weapons of mass destruction or pathogens. There are probably ways around this on the dark web, but we generally do not see it becoming a large issue because search engines and platforms filter these queries. The same must be done with AI and LLMs.
Another point mentioned was the use of AI in medicine, with the affirmative side arguing how beneficial it could be, for example, using AI to examine X-rays, MRIs, CT scans, and other medical imaging. Diagnosis is often difficult because of how minute some discrepancies in these images can be, and there are also stories of members of the general public using LLMs to self-diagnose their conditions. Although I do see the benefit of AI in medicine, I have some concerns about patient privacy, which is very important and can be quite sensitive. Thus, I believe there must be developments and controls to protect patient privacy and ensure that no identifiable information is leaked. In my opinion, another important issue is retaining human doctors, nurses, and other medical staff. I believe it is important to maintain human-to-human communication and connection. Personally, I would not like to go to the hospital and be greeted by a machine, diagnosed by a machine, and given a prognosis by a machine. However, I do see the benefit of using AI to help speed up diagnoses, especially in areas where there is a shortage of staff.
Overall, I believe that AI and LLMs will bring more creativity and innovation than risks and dangers. If used correctly as a tool, AI will help automate tasks and allow humans to focus more on creative and innovative endeavors. For AI to succeed, it is imperative that AI developers be transparent about the risks, dangers, and ethical issues, and that government officials and the general public be informed of the technology's limitations and possible problems. With the right regulations, AI has the potential to transform the world.