Did I write this, or did AI? Students around the country have been confronted with this question. With the rise of AI writing tools such as ChatGPT, Google Gemini, Microsoft Copilot, Perplexity AI, Jasper Chat, and many more, high school and college students grapple with a new dilemma. While these tools can enhance creativity and streamline the writing process, their misuse, and the subsequent use of AI detection software, has created an atmosphere of distrust in educational institutions.
AI checkers, like Turnitin’s AI detection tool, are being rapidly implemented without adequate study of their accuracy. According to a study conducted by Stanford University, Turnitin’s AI checker falsely identified 38% of human-written essays as AI-generated, and the error rate was even worse for non-native English speakers, at a whopping 61%. For students who spend hours crafting thoughtful essays, or writers who are more advanced than their grade level, this level of inaccuracy can have devastating consequences, such as accusations of academic dishonesty and formal integrity violations. The situation becomes even more disconcerting when students lack the resources to disprove these claims, leaving their academic standing at the mercy of an algorithm.
To put these AI detectors to the test, I conducted my own experiment. I generated a full essay using the most popular AI writing tool, ChatGPT, and then ran it through every free AI checker I could find on the first two pages of Google. Most of the checkers correctly flagged the essay as 100% AI-generated; however, some reported much lower scores, like 50% or even 0%. Then I used another AI tool: a humanizer. This is an AI bot built specifically to write like a human, introducing slight sentence inconsistencies and grammatical errors to slip past a detector. After humanizing my writing three or four times, I beat the AI detectors. After a few clicks of a button, and with none of my original work, every AI checker read 0%, even Turnitin.
In my opinion, the heavy reliance on AI checkers undermines trust between students and teachers. These tools are opaque, offering no clear explanation for their judgments. To be told that the essay you spent hours perfecting was “likely written by AI” is demoralizing and infuriating. Turnitin in particular has drawn sharp criticism from universities such as Vanderbilt, Michigan State, and the University of Texas at Austin for its inability to distinguish between individual writing styles, especially those of students with unique and creative voices.
Rather than fostering suspicion, schools and universities should focus on teaching ethical AI usage and promoting academic integrity. By rushing to adopt obviously flawed AI detection tools, institutions are punishing creative students and destroying trust—an ironic outcome in a system meant to nurture learning and critical thinking.
So, did I really even write this article? You be the detector. Four sentences in this article were generated by AI. Can you tell? Trust the writer, not the algorithm.