Is it Acceptable for Students to Use AI in Crafting Arguments?
The past few years have seen a massive boom in artificial intelligence technology, and with it, the use of ChatGPT as a tool for crafting debate speeches. The allure of employing AI tools like ChatGPT to construct arguments is undeniable: they offer a seemingly effortless path to generating content. The ethical arguments for and against ChatGPT are numerous. Is it cheating? Is it plagiarism? Or is it simply a tool like any other?
However, the primary reason for caution when turning to ChatGPT is rooted not in ethics but in practicality, and in the essence of what makes a debate truly compelling. The concern lies in the nature of AI-generated content. AI models like ChatGPT operate by identifying the most statistically probable responses based on extensive training data. Having read billions of words of text, the model knows that if you say “peanut butter and…”, the most likely next word is “jelly”. It does essentially this on a larger scale, which means its outputs tend to be safe, straightforward, and conventional. This approach, while logical, often lacks the depth, creativity, and unique perspective that win debates.
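The “peanut butter and jelly” idea can be sketched in a few lines of code. The snippet below is a deliberately toy illustration, not how ChatGPT actually works: a real model predicts tokens with a neural network trained on billions of words, whereas this sketch just counts which word most often follows another in a tiny made-up corpus. The corpus text is invented for the example.

```python
from collections import Counter

# Toy corpus standing in for the billions of words a real model trains on.
corpus = (
    "peanut butter and jelly . peanut butter and jelly . "
    "peanut butter and chocolate . bread and butter ."
).split()

# Count which word follows each word (a bigram frequency table).
next_words = {}
for prev, nxt in zip(corpus, corpus[1:]):
    next_words.setdefault(prev, Counter())[nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation of `word`."""
    return next_words[word].most_common(1)[0][0]

print(most_likely_next("and"))  # prints "jelly": it follows "and" most often
```

Because “jelly” follows “and” more often than any alternative in the corpus, the model always picks it. That is the core of the point above: a purely statistical predictor gravitates to the most common continuation, which is exactly why its arguments come out conventional.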
Take, for example, the Zagreb EUDC 2022 debate tournament. The winning team was Opening Opposition on the motion “This House hopes intelligent extraterrestrial life exists”. If you were to ask ChatGPT to craft opposition arguments, they would mainly revolve around the risk dangerous alien life poses to humans: perfectly coherent, but also very basic. The OO team won by doing just the opposite. Instead of following predictable lines of reasoning, they chose an unconventional path, using scientific data to argue the ethical implications of wishing more sentient beings into existence, beings who would inevitably engage in brutal struggles for survival. Essentially, there are many more gazelles than lions, and extraterrestrial life would likely follow the same pattern. This argument was not just a logical stance but an emotionally resonant one, and one that AI, in its current state, would be unlikely to generate, given its inherent bias toward more obvious, statistically backed arguments.
Furthermore, reliance on AI for debate preparation raises another significant issue: the lost opportunity for personal growth and skill development. Crafting an argument is not merely the presentation of a point; it is a process of intellectual exploration, self-reflection, and personal growth. The journey of researching, formulating, and refining an argument enhances critical thinking and creativity. It fosters a deeper understanding of the topic and hones the ability to view issues from multiple angles. In contrast, using AI primarily develops skills in prompt engineering, which, while useful, does not offer the same depth of intellectual engagement.
The practical downside of AI reliance is also evident in the debate arena itself. AI-generated arguments, with their predictable structure and telltale vocabulary (“crucial” is a clear AI tell at this point), are often easily identifiable and tend to lack the persuasive power of a well-crafted, human-developed argument. Debates thrive on the unique insights, experiences, and perspectives that individuals bring to the table. These elements add depth and authenticity to the discourse, making the arguments more compelling and impactful.
Ultimately, while there are ethical arguments for and against ChatGPT in the debate classroom, the primary reasons to avoid AI use in debate aren’t ethical; they’re practical. AI can easily craft coherent arguments, but its tendency toward the safe and predictable means it will very rarely craft winning ones. To evolve as a debater, one needs to embrace the challenging, uncomfortable process of developing arguments independently.