Sunday, March 1, 2026
Your Cheatin’ Heart & Academic Writing Assignments
“Google has AI embedded into it, Microsoft has AI embedded into it — like literally everything has AI in it… So, in a roundabout way, there’s no way to write a paper without using AI, unless you go to the library and you check books out and use encyclopedias.”
Marley Stevens, a recent grad of the University of North Georgia, who battled false charges of AI use.
In ye olde days, rote learning was probably the best education most Americans ever received. Public education was not the rule; Boston opened the first public school in America in 1635. TV Westerns are filled with “school marms,” often the love interest of the white-Stetson-hatted hero, but the point stands: pervasive public education and functional literacy carried us to a modern system that treats even advanced math and science as part of a basic education. Rote learning is no longer even discussed. Social agendas and censorship now dominate the conversation, but for students in high school and beyond there is a real question whether their assignments are truly their own or some version of an easily accessible product of artificial intelligence.
Fake images, often sexually explicit, have made their way onto mainstream social media platforms (via tools like Grok), and what begins as teenage mischief has sometimes crossed into criminal activity. Even political fakery has become pervasive, creating a perplexing challenge in education. We want students to be the digital natives they already are, able to function in a world where AI has to be part of their academic tool set, but we also want them to learn. Much as plagiarism is grounds for academic discipline, or at least a failing grade, substituting AI-generated work for genuine research, writing and math skills seems comparable.
So, as students have embraced these AI shortcuts, teachers and professors have deployed AI detection tools to combat the trend. But just as the military reacts to new weapons developed by our foes, there is more than a little back and forth in academia, as Tyler Kingkade, writing for NBC News on January 28th, points out: “Rapid adoption of AI by young people set off waves of anxiety that students could cheat their way through college, leading many professors to run papers through online AI detectors that inspect whether students used large language models to write their work for them. Some colleges say they’ve caught hundreds of students cheating this way.
“However, since their debut a few years ago, AI detectors have repeatedly been criticized as unreliable and more likely to flag non-native English speakers on suspicion of plagiarism. And a growing number of college students also say their work has been falsely flagged as written by AI — several have filed lawsuits against universities over the emotional distress and punishments they say they faced as a result… NBC News spoke to ten students and faculty who described being caught in the middle of an escalating war of AI tools.
“Amid accusations of AI cheating, some students are turning to a new group of generative AI tools called ‘humanizers.’ The tools scan essays and suggest ways to alter text so they aren’t read as having been created by AI. Some are free, while others cost around $20 a month… Some users of the humanizer tools rely on them to avoid detection of cheating, while others say they don’t use AI at all in their work, but want to ensure they aren’t falsely accused of AI-use by AI-detector programs.
“In response, and as chatbots continue to advance, companies such as Turnitin and GPTZero have upgraded their AI detection software, aiming to catch writing that’s gone through a humanizer. They also launched applications that students can use to track their browser activity or writing history so they can prove they wrote the material, though some humanizers can type out text that a user wants to copy and paste in case a student’s keystrokes are tracked.
“‘Students now are trying to prove that they’re human, even though they might have never touched AI ever,’ said Erin Ramirez, an associate professor of education at California State University, Monterey Bay. ‘So where are we? We’re just in a spiral that will never end.’
“The competition between AI detectors and writing assistance programs has been propelled by a heightened anxiety about cheating on college campuses. It shows how inescapable AI has become at universities, even for students who don’t want to use it and for faculty who wish they didn’t have to police it… ‘If we write properly, we get accused of being AI — it’s absolutely ridiculous,’ said Aldan Creo, a graduate student from Spain who studies AI detection at University of California San Diego. ‘Long term, I think it’s going to be a big problem.’”
To police AI cheating would require a serious escalation in source material tracking, a really huge privacy problem, often with spotty results, as Kingkade noted: “Kelsey Auman, who graduated last spring, started the petition after she fought to prove she did not use AI on several of her assignments. She knew enough classmates with similar experiences that they had a group chat named ‘Academic Felons for Life.’ Auman said she started to run her papers through multiple AI detectors on her own before turning them in, hoping to avoid another dispute, but it created more anxiety when they also incorrectly flagged things she wrote as generated by a chatbot.”
But if national leadership is setting a bad example for the use of AI falsification, it seems hard to fault students for mimicking the political world around them. Kaitlyn Huamani, writing for the Associated Press on January 28th, points out: “Ramesh Srinivasan, a professor at UCLA and the host of the ‘Utopias’ podcast, said many people are now questioning where they can turn to for ‘trustable information… AI systems are only going to exacerbate, amplify and accelerate these problems of an absence of trust, an absence of even understanding what might be considered reality or truth or evidence,’ he said.
“Srinivasan said he thinks the White House and other officials sharing AI-generated content not only invites everyday people to continue to post similar content but also grants permission to others who are in positions of credibility and power, such as policymakers, to share unlabeled synthetic content. He added that given that social media platforms tend to ‘algorithmically privilege’ extreme and conspiratorial content, ‘we’ve got a big, big set of challenges on our hands.’… There are also many fabricated videos circulating of immigration raids and of people confronting ICE officers, often yelling at them or throwing food in their faces.”

Embrace AI anyway? Some academics are finding ways to incorporate honest AI use into class assignments, while others seem to have simply given up.
I’m Peter Dekom, and if AI cannot be contained within reasonable guidelines, a declining quality in our high school, college or graduate/professional school grads will infect the entire nation and its competitive future.