A teacher posted recently in an ELA teachers' community asking a question I see a version of every week. They had developed an eye for spotting AI-generated writing. They were tired of being reactive. They wanted to design assignments where AI wouldn't be useful in the first place. They wanted to force students to do their own thinking.
The replies under the post were what you'd expect. Paper assignments. Timed in-class essays with surprise prompts. Scavenger-hunt activities AI can't easily fake. Highlighting requirements. The most upvoted reply was, essentially: do everything in pencil, in front of you, with no warning.
These are smart, exhausted teachers trying to do the right thing. I want to be careful here, because I taught for sixteen years and I've sat in faculty rooms where this kind of resistance was the only sane move. The cheating problem they're describing is real. Their classroom redesigns are reasonable.
I just don't think they're going to win.
We've been here before. Twice.
The first version was the graphing calculator. In the 1990s and early 2000s, math departments seriously debated whether the TI-83 was destroying student thinking. There were good-faith teachers who refused to let students use them, who designed exams that punished calculator use, who argued (correctly, in a narrow technical sense) that students "needed to be able to do it without the tool." That position lost. Not because the resisters were wrong about the cognitive cost. They were sometimes right about the cognitive cost. They lost because the tool became universal, because the next grade level expected students to know it, and because the AP exams started requiring it. By the time I started teaching in 2009, the calculator argument was effectively done, and the kids who'd been kept away from the tool were the ones playing catch-up.
The second version was 1:1 iPads, and that one I watched in real time. I started teaching in 2009, just before the rollouts began. By around 2012, every district in the country had to decide whether to put a screen in front of every kid. The teachers who resisted (many of them veterans who'd watched what happened to attention spans the year YouTube launched) argued that we were destroying focus, attention, the ability to read a page without a notification. They were right about all of that. They lost anyway.
Both times, the teachers who lost the argument were not wrong about the cost. They were wrong about the strategy. Trying to make the tool unavailable in your classroom while it became available everywhere else was a one-room defense against a generational shift. The kids who learned to use the tool well left their peers behind.
AI is the same panic, on a faster timeline.
That teacher's post, and the replies under it, are the calculator-resistance era of AI. Paper assignments and surprise prompts are the ELA-classroom version of "no calculators on the test." It works locally. It doesn't generalize. And the kids who graduate from a classroom that treated AI as something to be resisted will land in a workplace, a college, a society where AI is the assumed default. The ones who learned to use it well will be ahead. The ones who were taught to fight it will spend the first six months of college figuring out what their classmates already know.
Here is what makes this concrete for me right now. A friend of mine teaches college-level English. His department spent over fifty thousand dollars this year on an AI-detection tool. Fifty thousand dollars on software meant to identify a thing the software cannot reliably identify. In the same year, the faculty got essentially zero professional development on how to teach with AI. None on prompt iteration. None on assignment redesign. None on what students are actually doing with the tool. Just the detection license.
Run the numbers on that allocation and what you find is the math of a panic, not a strategy. Fifty thousand dollars spent on PD would have produced teachers who could teach the tool. Spent on detection, it produced an arms race they were already losing on the day they signed the contract.
Here is the unpopular part.
We need to stop asking whether students are using AI. They are. That ship sailed. The question every assignment has to answer now is: what cognitive work am I actually testing, and is AI doing it for them?
Most of what the teachers I talk to are mad about (and I was, when I was still teaching) is that AI is doing the Remember and Understand layers of Bloom's taxonomy. Recall. Definition. Plot summary. The bottom of the cognitive pyramid. And, honestly: that work was always the easiest part of teaching to assign, the easiest to grade, and the part students cared least about. AI is automating away the layer of ELA we already privately knew didn't matter that much.
What AI cannot replace is the student's own direction. The take a student decides to push the model toward. The argument they iterate on. The voice they edit it into. The places they reject the model's first answer because it didn't capture what they actually meant. Those are Analyze, Evaluate, and Create work. They're harder. They're more interesting. They're what we said for thirty years we wanted students doing.
The AI doesn't think for them. It narrates and amplifies their direction. The student is still responsible for the prompt's logic and the output's voice. Watch a sixteen-year-old work with the tool for ten minutes and you'll see that, when they're doing it well, they reject more than they keep, they push it harder than its first answer, and they edit the voice line by line. That's literacy work. We just haven't called it that yet.
So what do we actually teach?
I think prompt iteration is a literacy. The skill of asking, evaluating the answer, asking better, evaluating again. Not unlike revising your thesis after seeing the evidence.
I think voice editing is a literacy. Recognizing that the AI's first draft is in nobody's voice and reshaping it until it's in yours. Not unlike imitating Hemingway in week three of stylistic analysis.
I think healthy AI use is a literacy. When to use it. When not to. When to push back on what it gave you. When to throw it out and start with a blank page because the tool is making you lazier than you'd be without it.
None of this fits cleanly into the assignments most ELA teachers were trained to give. It does fit cleanly into the assignments we'd give if we accepted the tool was here and the actual question was what to do with it.
I'll tell you what finally changed for me. In my last years of teaching, I spent hours fighting Google Doc revision history. Looking for the copy-paste timestamps. Looking for gaps in the keystroke log. Looking for the unbroken paragraphs that no real student writes in one sitting. Hours. For what? To force students to find another method to trick me?
At some point I stopped. I started carving out time during class to talk about what AI actually is, what it does, where it fails us as individual thinkers, and how to prompt it well. That is what we should have been doing all along.
I'm not the only teacher who pivoted late. Most of us did.
I am not in the classroom anymore. I'm building tools that try to make student thinking visible at the prompt-and-iteration level, not just the final-output level, because I think that's where the actual work is now. I might be wrong about all of this. The veteran teachers warning about AI might be right that the tool is corrosive in ways my analogy misses.
But history has been kind to teachers who taught the new tool instead of fighting it. I'd bet on that pattern again.
Tools, demos, and what I'm working on now: readbookpulse.com