In a case that has garnered significant attention, Haishan Yang, a 33-year-old doctoral student at the University of Minnesota, was expelled in November 2024 for allegedly using artificial intelligence (AI) to complete a preliminary exam. Yang, who was pursuing his second Ph.D. in Health Services Research, Policy, and Administration, denies the allegations and has initiated legal action against the university.
The Allegations
In August 2024, while traveling in Morocco, Yang took a remote preliminary exam consisting of three essay questions. The exam guidelines explicitly prohibited the use of AI tools. After submitting his responses, Yang was informed that he had failed the exam due to suspicions that he had used AI assistance.
The faculty members who graded Yang's exam expressed "significant concerns" about the authenticity of his responses. They noted that certain answers seemed inconsistent with his previous work and included concepts not covered in the coursework. To investigate further, the professors entered the exam questions into ChatGPT and found that some of Yang's answers bore structural and linguistic similarities to those the AI generated.
Previous Incident
This was not the first time Yang faced allegations related to AI usage. Approximately a year earlier, he was accused of using AI to complete a homework assignment. In that instance, a professor discovered a note within Yang's submission that read, "Re write it (sic), make it more casual, like a foreign student write but no ai." Yang admitted to using AI to check his English but denied using it to generate the content of his assignment. The professor ultimately dropped the allegation, and Yang received a warning from the university.
The Expulsion and Legal Action
Following the allegations from the preliminary exam, the university conducted a student conduct review hearing. Yang was expelled based on the findings of this hearing. In response, he filed a federal lawsuit against the University of Minnesota, alleging violations of due process. Yang contends that the disciplinary process was flawed, citing procedural errors, reliance on altered evidence, and inadequate notice and opportunity to respond.
Yang's legal action has sparked discussions about the challenges universities face in addressing academic integrity in the age of AI. The case underscores the complexities of detecting AI-assisted cheating and the potential consequences for students accused of such misconduct.
Implications for Academic Integrity
The rise of AI tools like ChatGPT has introduced new challenges for academic institutions. Educators are grappling with how to detect AI-generated content and establish fair policies to maintain academic integrity. This case highlights the need for clear guidelines and robust detection methods to address AI-related academic dishonesty.
As the legal proceedings continue, the outcome of this case may set a precedent for how universities handle allegations of AI-assisted cheating and the rights of students accused of such offenses.