Spotting when students use artificial intelligence on their assignments has become a strange new skill for professors like Leo Goldsmith, but actually proving it is nearly impossible.
Goldsmith, who teaches screen studies at the New School, says that it’s obvious when a student’s work wasn’t actually written by them, but the burden of proof is frustratingly heavy. Many professors are tired of acting like detectives, and some simply choose not to pursue cases unless the evidence falls right in their laps.
Educators all over the country are feeling the pressure as more students turn to AI software for quick fixes. The technology has outpaced the safeguards, leaving most universities and professors scrambling without reliable ways to detect misuse or even to agree on what counts as cheating.
Patty Machelor at the University of Arizona was stunned when she read an honors student's submission that felt off: it sounded nothing like the student's own voice and missed the point of the assignment entirely. As she read the piece aloud to her husband, the answer clicked: this was written by a machine. She gave the student another shot, but even the rewrite showed clear signs of AI's fingerprints, including leftover prompts from the tool itself.
Irene McKisson, also at Arizona, notices a clear divide: in-person classes rarely have problems, but online courses are swamped with AI work. She describes the trend as a spreading "disease" in her virtual classroom, one that starts as a trickle and quickly grows into an outbreak.
Why AI Cheating Slips Through the Cracks
Though various tools claim to detect machine-written work, even their creators admit their results are shaky at best. Research shows they flag non-native English speakers more often and return inconsistent verdicts, making them unreliable for high-stakes decisions.
While some professors add clear no-AI policies to their syllabi, those policies are mostly self-crafted, since schools provide little formal training or guidance. McKisson pieced together her own approach from professor groups on Reddit and concluded that strict grading rubrics might be the most effective deterrent available right now.
It's not always clear to students or teachers where the line is. Is it cheating to use AI to brainstorm? What about summaries and spelling corrections? At many colleges that do have policies, the decision is left to individual departments or even individual professors, which only adds to the confusion.
Academic pressure seems to be one of the biggest reasons students justify using AI, especially when assignments and deadlines pile up. For some, generative AI just feels like a handy productivity tool — quicker than writing an essay themselves and with very little risk of getting caught.
Sarina Alavi, a psychology PhD student, admits using AI for quick reading summaries or brainstorming transitions, but she rewrites everything to make sure it sounds like her. She says that while AI helps her organize her thoughts, relying on it for real writing would cheapen the experience and disrespect the effort her professors put into assignments.
Professors like Goldsmith argue that the real harm isn't just broken rules: students who outsource their work lose the chance to practice reading, writing, and critical discussion, which is the entire point of an education.
McKisson has started flipping the script, using AI herself to test discussion questions and create assignments that AI has a harder time faking. Now she asks for more screenshots and cross-verifiable evidence, forcing students to go beyond what a chatbot can produce.
There’s no simple fix on the horizon. Teachers and administrators are improvising solutions, but everyone’s just hoping to keep pace as the technology races forward.