For Teachers

How to Tell if AI Wrote Your Student's Work (And What to Do Instead)

By Leon Kopelev · February 27, 2026 · 5 min read

You've read the essay three times. Something is off. The vocabulary is too polished for a seventh grader. The structure is suspiciously clean. There are no grammatical mistakes, which is itself a red flag because this student usually has plenty.

You're pretty sure AI wrote it. But you can't prove it. And you're not alone.

AI detectors don't work (and the data proves it)

Turnitin's AI detection tool falsely flags roughly 1 in 100 human-written submissions as AI-generated. That sounds low until you do the math: a teacher collecting one assignment a week from 150 students should expect one or two false accusations every single week. GPTZero, the most popular free detector, has similar accuracy problems.
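
To make that math concrete, here's a back-of-the-envelope sketch. The class size, weekly cadence, and school-year length are illustrative assumptions, not figures from Turnitin's own reporting.

```python
# Expected false accusations from a detector with a ~1% false
# positive rate. Class size and cadence are illustrative
# assumptions, not published Turnitin figures.

FALSE_POSITIVE_RATE = 0.01  # ~1 in 100 human-written papers flagged
STUDENTS = 150              # e.g., five sections of 30
WEEKS = 36                  # one school year

flags_per_week = STUDENTS * FALSE_POSITIVE_RATE
flags_per_year = flags_per_week * WEEKS

print(f"Expected false flags per week: {flags_per_week:.1f}")  # 1.5
print(f"Expected false flags per year: {flags_per_year:.0f}")  # 54
```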

A 2023 Stanford study found that AI detectors disproportionately flag non-native English speakers: the detectors tested labeled the majority of essays written by real TOEFL test-takers as AI-generated. Students writing in their second language produce text patterns that overlap with AI-generated content. Punishing a student for having a writing style that "looks like AI" is not a defensible position.

The fundamental problem: as language models improve, their output becomes harder to distinguish from human writing. Detection is an arms race you will always lose.

What experienced teachers actually look for

Instead of running essays through detectors, veteran teachers focus on process, not product.

1. Can the student talk about their work? Ask them to explain their thesis, their reasoning, why they chose those sources. A student who wrote their own essay can riff on it for five minutes. A student who submitted AI output will give vague, surface-level answers. This is the single most reliable signal.

2. Compare in-class writing to take-home. If a student writes at a C+ level in class but turns in A+ essays at home, that gap tells you something. Track it over a few assignments rather than flagging any single one.

3. Look for missing struggle. Real student writing has visible effort: crossed-out ideas, changed directions, imperfect transitions. AI output is uniformly smooth. The absence of rough edges is itself informative.

4. Check the thinking, not just the writing. Does the essay contain a genuinely original observation? AI synthesizes existing ideas well, but it rarely produces surprising connections. When a student writes something you haven't seen in fifteen similar papers, they probably wrote it themselves.

These signals aren't foolproof. But they're more reliable than any detector, and they don't require accusing anyone of anything.

The better question: assignments AI can't fake

Here's what's actually worth your time. Design assignments where AI use is either transparent or irrelevant.

Process-based assignments require students to show their work at multiple stages: brainstorm, then outline, then draft, then revise, each submitted separately. AI can help at any stage, but the student has to engage with the thinking at every step. You'll see where the ideas actually came from.

In-class writing with no devices remains the most reliable measure of what a student can actually produce. Not for every assignment. But enough to calibrate your expectations for each student.

Oral defense of written work forces real understanding. Even 60 seconds per student ("Tell me about your strongest argument and why you think it holds up") reveals who did the thinking and who didn't.

Assignments requiring personal experience ("Interview someone about X and analyze what they said") are nearly impossible to outsource meaningfully. The specificity of real conversation is hard to fabricate.

The goal isn't to ban AI. Your students will use it in every job they'll ever have. The goal is making sure they can think without it, and measuring that thinking directly rather than policing their tools.

Building the thinking skills that make the difference

There's a pattern here. Every reliable signal for authentic student work comes down to the same thing: evidence of real thinking. Can they defend their reasoning? Do they have original observations? Can they analyze, not just summarize?

These are trainable skills. And when students practice them regularly, the AI question becomes less urgent. A student who's been practicing argument evaluation will naturally write differently. They'll cite evidence critically. They'll address counterarguments. They'll notice logical gaps. These patterns emerge from real cognitive work, and they're exactly what's missing from AI-generated text.

Cogito Coach builds these skills directly. Three modules target the thinking skills you're trying to assess: questioning (Curiosity Coach), evaluating claims and evidence (Argument Analyst), and structured decision-making (Decision Coach).

Each session takes 10-15 minutes on any topic. Students pick what interests them, and the AI coaches them through increasingly sophisticated thinking. You get a report showing each student's depth of analysis, not a score on content recall.

It won't solve AI detection. Nothing will. But it builds the cognitive habits that make your assessments work, regardless of what tools your students have access to.

Leon Kopelev · Tech executive. Parent. Built Cogito because no one was teaching his kids how to think.
For your classroom

Your students outsource their thinking to AI. Give them a challenge they can't shortcut.

Cogito Coach turns any topic into a critical thinking exercise. Students pick subjects they care about. You get a report showing how they think, not what they copy.

Try It With Your Class

