How to confront the problematic use of ChatGPT in higher education
Universities in Bangladesh need to formulate policies on regulating the rampant use of AI technologies
Due to the widespread use of ChatGPT in student assignments, questions have arisen about whether teachers should ban this technology from the classroom, allow its use with modified assignment requirements, or integrate it into the higher education curriculum.
Personally, I'm against banning this technology, simply because a ban would be futile: students will find alternative ways to access it on their own, beyond the control of teachers and administrators. However, plagiarism remains a serious concern, as students tend to copy the output of large language models (LLMs), such as ChatGPT, directly into their work. In the recent classes I've taught at my university, this has raised serious concerns about academic integrity.
In the following paragraphs, I'll outline a few challenges facing teachers in assessing student assignments and propose a few potential solutions to confront this issue.
The first hurdle is proving with certainty that a student has used an AI tool, such as ChatGPT, to produce their writing, even partially. This is because ChatGPT generates different wording and sentence structures each time a question or prompt is given.
Generally, individuals familiar with the subject matter, such as teachers, would be able to identify that a portion of the text is AI-generated. But even if a teacher catches a student using ChatGPT in an assignment, it can be extremely difficult to actually prove it.
In this regard, I agree with Professor Darren Hick, who proposed administering a verbal exam on the assignment topic to students suspected of AI use. A classroom presentation on the topic is another option. The caveat is that implementing either of these in large classes is difficult.
As AI programs are fed more data and trained further by their developers, AI-generated content becomes harder to catch with each iteration, because sufficiently trained models can better mimic human writing styles. This does not mean they cannot be caught.
Since AI doesn't actually understand the meaning and substance of the writing, but merely regurgitates information it has already consumed, a trained eye can detect that something is at odds with the overall topic. ChatGPT often uses flowery language and repeats information, so a pattern usually emerges in the writing that reveals its unnatural way of expressing ideas.
Students at universities also tend to include fictitious references, often generated by the AI itself, in an effort to make their assignments appear more credible to the teacher. However, further checking can reveal these references to be fake or nonexistent. Moreover, ChatGPT-generated writing is often descriptive but lacks both exhaustiveness and depth: it provides "conceptual explanations" but is not capable of generating "content that requires higher-order thinking".
Therefore, as educators, we need to assign students tasks that necessitate critical reflection and analytical aptitude. Additionally, we can require students to cite sources from specific, known and trusted academic databases, such as articles in JSTOR. Both of these arguments essentially show that our system of assessment must change.
To address these challenges, educators and university administrators need to rethink their approaches. We must change the methods of assessment in written assignments by discarding outdated formats and introducing innovative approaches that are more "human-centric" and up-to-date. For example, instead of assigning overly general essay topics, teachers can set topics that are highly specific to the subject matter and require personal reflection, critical thinking and comprehension of study materials.
To reduce AI-generated plagiarism, another plan of action is to revert to traditional in-class handwritten assignments and in-person closed-book exams that prohibit students from using an AI tool to generate answers.
Mills (2023) provided several suggestions, such as incorporating in-class discussions for analysis, requiring students to use verifiable sources with quotations, integrating multiple sources, critiquing AI-generated content as part of assessment activities, and submitting video or audio clips discussing their essays.
In my opinion, most of the aforementioned suggestions are viable. We can ask students to reference lecture materials and class discussion notes in their assignments; this should work, as ChatGPT is unlikely to know which particular aspects of a topic were discussed in class. Likewise, conducting audio or video discussions would push students to sufficiently grasp the topic before presenting it to their teachers.
Academicians who support the integration of AI tools into the curriculum argue that ChatGPT can assist students in their writing and learning process. For instance, teachers can use an AI program to generate text on a specific theme and have students critique it for accuracy, comprehensiveness, inherent biases in its response, language use and strength of argument. Afterwards, students can rewrite it by addressing those criticisms.
In the interest of full disclosure, I did ask ChatGPT how students can use it effectively to assist with their writing without plagiarising (after writing down my own thoughts separately first, of course!). It produced a few answers in a couple of seconds, but most of those were optimistic without being particularly inspiring or groundbreaking.
Here are a few examples (along with my opinion in brackets): "implement plagiarism detection tools" (but plagiarism detection programs, such as Turnitin and ZeroGPT, are currently prone to errors in catching AI-generated content), "raise awareness" and "encourage honesty and ethical behaviour" (as if telling students not to plagiarise is going to make all the difference!), "monitor and investigate" (certainly, but the burden falls entirely on teachers, and is particularly difficult for large class sizes), and "impose consequences" (sure, but for this to happen, policies should be implemented first at the institutional level).
Ultimately, I think teachers should experiment with ChatGPT for themselves to figure out its capabilities and limitations. Based on the results, they can tweak their assignment instructions to ensure that students are compelled to think for themselves and consult academic sources in their writing.
Educational institutions should implement policies for their students' responsible and ethical use of AI, particularly as preparation for written assignments. Teachers should make it clear at the outset what their expectations are from their students regarding the use of AI tools in their courses. It should be explicitly communicated to students that simply copying answers directly from ChatGPT is considered plagiarism, which is a serious academic offence.
Given the excessive focus on grading rather than learning within our education system, we need ongoing conversations around transforming the entire system of teaching and learning. As the use of AI tools in educational settings is inevitable, the University Grants Commission of Bangladesh (UGC) and university authorities should develop strategies and guidelines to prevent cheating by students using ChatGPT and other AI tools, such as Google's Bard.
Moreover, universities in Bangladesh need to formulate policies on regulating the rampant use of AI technologies in writing assignments so that they foster critical thinking among students, strengthen academic integrity, and thwart plagiarism and other emerging digital offences.
Abu Sadat Nurullah teaches at Brac University.
Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions and views of The Business Standard.