Debate Report
The debate on the future of education in an AI-driven world, held on October 25, 2023, provided a comprehensive and in-depth exploration of the potential and challenges of Large Language Models such as ChatGPT. The participants discussed the topic thoroughly, showing respect for differing opinions and effectively building on their peers’ points.
The affirmative team, arguing for ChatGPT in higher education, highlighted the potential of AI to reshape the future of education. Specifically, Yushu emphasized the transformative power of AI, particularly GPT-4, in helping students expand the logic behind their questions. She referenced Abbott’s 2016 talk, suggesting that the landscape of education has evolved significantly since then. Leo, building on this, emphasized the democratization of knowledge, pointing out how platforms like Coursera have revolutionized access to education. He argued for a harmonious blend of traditional and modern methods, suggesting that the future lies in integrating the best of both worlds. Elizabella, Jiamin, and others further highlighted the practical benefits of AI in democratizing education, from generating ideas and answering questions to handling tedious tasks, underscoring ChatGPT’s role as a valuable educational ally that can provide a wealth of resources transcending traditional boundaries.
The affirmative team provided many examples of tangible, day-to-day benefits of AI, suggesting its potential to be more than a tool and instead a central figure in the educational process. These practical benefits, such as handling tedious tasks and providing instant resources, are undeniable. However, there is a philosophical question here about the nature of learning. As Andrew Abbott emphasized in his speech, discursive reasoning and associative knowing are fundamental to the process of knowing. If AI handles the “mundane” aspects of education, are students being deprived of critical experiences that build discursive reasoning and associative knowing? Abbott also argued that some tasks may seem mundane and boring but are, in fact, valuable and important, and that such tasks should not be skipped or outsourced to automated machines.
Thus, I think the debate would have been more enlightening if the affirmative team had further discussed how ChatGPT, a publicly available, profit-driven LLM product, can properly account for how individual students define a “mundane” task in their learning processes. In other words, what is routine for one student could be a critical learning point for another. Andrew Ng’s recent speech on the opportunities in AI highlighted the potential of domain-specific, individualized AI. My understanding of his point is that AI in the future might be like the electricity of today: it can power both small bulbs and large machines, adapting seamlessly to the specific needs and scales of various applications. The demand for AI tools in education underscores the need to satisfy individualized learning needs by discerning what constitutes a “mundane” task for each student. This ability of AI to personalize and adapt to each student’s unique learning journey is not just an educational boon but also represents untapped market potential.
The non-affirmative team, while acknowledging the potential of AI, raised concerns about its unchecked adoption. Will voiced concerns about over-reliance on AI tools like ChatGPT, suggesting that they could dilute the core objectives of higher education if they become crutches rather than aids. Alex’s introduction of “cognitive offloading,” warning of potential declines in cognitive abilities with excessive dependence on AI, and Luke’s emphasis on the potential harm to critical reading skills both touched on the cognitive implications of AI’s unchecked integration. Their arguments suggested that while AI might offer convenience, it could come at the cost of essential academic skills. Kelly emphasized that higher education’s primary goal is to foster genuine knowledge and understanding, arguing that LLMs like ChatGPT mainly reflect biased common opinions. Kelly’s cautionary note about the commercial nature of many AI tools and the risk of anthropomorphism provided a sobering perspective on the debate.
Overall, I found myself more impressed by the non-affirmative team because of the depth of their original arguments and their sharp real-time responses to the affirmative team’s assertions. They went beyond shallow discussion of AI’s use cases in education and introspected on the fundamental questions: What is the true essence of higher education? What objectives and ideals should it uphold? Ideally, AI tools should serve as learning aids rather than replace students’ learning efforts; that is, students should remain active participants in their learning journey. However, this raises a critical concern: at what point does the line between assistance and dependency blur? If students consistently over-rely on AI tools for immediate answers, much as Andrew Abbott criticized students’ over-reliance on search engines to look up information, they may bypass the critical thinking process integral to genuine knowing and simply become consumers of knowledge in a digital age, which will ultimately diminish the depth and richness of the educational experience.