On Wednesday, October 25th, 2023, we had our first in-class debate on the impact of AI on education. The debate prompt was:
“Will ChatGPT and other similarly powerful AI tools have positive or negative impacts on higher education? In your opinion, which of Andrew Abbott’s criticisms from 2016 would or would not hold when assessing today’s large language models like ChatGPT?”
Participants were divided into two sides: the “affirmative” side consisting of six students (Yushu, Leo, Vincent, Isabella, Melanie, and Jimmy), and the “negative” side consisting of five students (William, Alex, Luke, Kelly, and Dean). The debate consisted of an opening round in which each student made an opening statement prepared in advance (the students spoke in the order listed above, affirmative side first), a free-fire round in which participants responded to each other’s comments, and a concluding round, with short discussion breaks in between. This report gives a brief overview of the debate process and the arguments given by both sides, with parentheses at the end of each point listing those who expressed similar thoughts (as far as I can remember).
The arguments given by the affirmative side were roughly: (1) AI should be viewed as a set of instruments, tools, and assistants (Leo, Isabella, Jimmy), and it can help with many auxiliary tasks that constitute learning and teaching; (2) it would be a slippery slope to think of AI involvement as replacement (Jimmy), and many concerns come from such an incorrect attitude or from false expectations of what the AI does or can do (Isabella); (3) AI is a fast-growing field, many of the current problems of AI application are merely technical problems rather than conceptual problems (Yushu, Jimmy), and some concerns in higher education were in fact present before AI (e.g., plagiarism, student attitude), with the incorporation of AI only exposing these problems rather than creating them (Vincent).
The arguments given by the negative side were roughly: (1) AI tools, marketed as commercial products, encourage students to take the cheap way out and increase their over-reliance on them, a point mentioned in Abbott’s lecture (William, Kelly), and younger students might be especially vulnerable since it is harder for them to see the long-term benefit of not relying on AI tools (Dean); (2) the commodification of knowledge also discourages associative learning, discursive learning, and active engagement with knowledge, which Abbott considers essential to approaching knowledge (William, Alex, Kelly); (3) tools like ChatGPT only reflect common ideas rather than accurate reality, and we demand things they are not capable of doing when we take their output as true knowledge (Kelly); (4) finally, some damage has already been done by the incorporation of AI into education without careful consideration, such as a potentially lower-quality college applicant pool (Luke).
After the initial presentation round, the two sides entered the free-fire round. Here are some selected exchanges from this round, with “(A)” denoting a point from the affirmative side and “(N)” one from the negative side.
Vincent (A) argued that incorporating AI would expose some problems faced by the current educational system and radicalize it, forcing a “paradigm shift” that changes the ways we evaluate students; this would be progress in the long run even if it seems problematic in the short term. William (N) responded that ChatGPT by itself is not able to lead a paradigm shift, because it merely mirrors what has already been said. Jimmy (A) responded that even in Kuhnian terms it is people, not machines, who lead paradigm shifts; machines figure only as facilitators.
The affirmative side responded to the negative side’s comment about the low quality of AI outputs, asking: if the outputs are of low quality, how can they endanger higher education? Kelly (N) responded that “higher education is not designed for mediocrity,” and that AI output is less likely to promote truly distinctive or creative thinking.
The affirmative side pointed out that although the training corpus is biased and the models may suffer from alignment problems or hallucinations, these issues can be addressed through fine-tuning (echoing the earlier point about technical vs. conceptual challenges for AI). Luke (N) replied that these are limitations that companies like OpenAI are themselves trying to address precisely in order to improve ChatGPT’s capacity for cognitive offloading, which would further discourage students from pursuing effective learning.
Alex (N) made a “burden-of-proof” argument: the affirmative side bears the burden of offering evidence for why we can be optimistic about solving the current problems of AI, such as misconceptions about what it can do for humans. Jimmy (A) pointed to one plausible way of approaching a solution, namely clarifying the misunderstandings that arise from the use of anthropomorphic vocabulary when we describe AI behaviors (e.g., ChatGPT “knows,” “understands,” and so forth).
In the concluding round, the negative side made its concluding remark first, pointing out that the commodification and availability of AI tools like ChatGPT are by no means restricted to people with a good understanding of what these tools are, and that their wide accessibility has already caused (and will cause more) problems in higher education by encouraging students to view and pursue knowledge in ways that undermine the traditional values of education. The affirmative side’s concluding remark rested on the optimism that it is possible to educate the public about how to use AI tools and what to expect from them, and that incorporating things like ChatGPT as facilitators of true learning is ultimately just a matter of time.
Here are some of my own thoughts on the topic. Having taken the affirmative side during the debate, I acknowledge that a powerful point made by the negative side is the diverging interests between educators and the businesspeople who promote AI tools like ChatGPT as commercial products. Even if educators are clear-minded about how to use ChatGPT, the widespread availability of such tools simply goes beyond what educators envision for their own classrooms. This is especially problematic given the difficulty of detecting AI-generated content, at least for now. As for my own side, I have an opinion about the next step toward conceptual clarification of AI products. This is a broad question in cognitive science, ranging from discussions in philosophy of mind (for example, I observe that many experts in other areas are not familiar with basic arguments like Searle’s Chinese Room argument about AI’s lack of “understanding”) to questions about the ultimate goals and purposes of designing and improving artificial intelligence. I believe a key difficulty is the deeply ingrained anthropomorphic or mentalistic vocabulary in our language; once we start using objectively accurate terms for AI, we would clarify at least some initial misunderstandings about the nature of its output. While this might not solve all the problems, it is surely a first step.