Raising AI Kids: Issue 9
The Homework Trap: Teaching Kids to Use AI, Not Cheat With It
Editor's note: This issue focuses on the honesty and trust side of AI in homework — what happens when a parent catches their kid using AI to shortcut real thinking. For the practical framework on how to edit and verify AI output productively, see Issue 5.
David stared at his phone, then back at his laptop screen. Something about Sam's history essay didn't feel right. It was polished—maybe too polished. The transitions were smooth, the vocabulary oddly sophisticated for a seventh grader, and there was this particular turn of phrase about "the multifaceted nature of Reconstruction" that definitely didn't come from his kid's brain.
He opened ChatGPT and typed in the essay prompt from the assignment. Out came the same structure, the same smooth transitions, even that line about "the multifaceted nature of Reconstruction." Yep. Sam had asked an AI to write the essay.
For a moment, David felt that familiar parental mix of disappointment and frustration. But then something shifted. Instead of just being upset about academic dishonesty, he realized the real problem: he had no idea what homework was actually for anymore—and neither did Sam.
That's the question we need to sit with. Because the homework trap isn't really about AI. It's about what we're actually trying to teach.
What Are We Really Teaching?
For a long time, homework was straightforward. Do the work. Show your understanding. Get a grade. The process was the point. You memorized facts, practiced skills, and proved you could do the thing.
But what happens when a kid can just ask an AI to do the thing?
Most parents panic. "That's not learning!" they say. And they're not wrong. But here's the subtle part: if learning were just about getting the right answer, then yeah, AI-written homework is pointless cheating.
But if learning is actually about thinking—about how to approach a problem, where to find information, how to check your own work, what doesn't make sense—then AI becomes something else entirely. It can be a thinking partner instead of an answer machine.
The difference changes everything.
The Thinking Partner vs. The Answer Machine
Here's what it looks like when AI becomes a thinking partner:
Sam is stuck on a math problem. "If x + 4 = 9, what's x?" She asks Bit (that's her AI buddy) not "what is x?" but "can you explain what it means to isolate x?" Bit walks her through the concept. Sam tries again. She gets it wrong. She asks Bit why. They talk through the logic. Bit doesn't just tell her the answer—it asks her questions. "What happens when you subtract 4 from both sides?" By the end, Sam understands how to isolate a variable. She can do the next problem herself.
That's a thinking partner.
Now the trap version: Sam stares at the problem for thirty seconds, then asks Bit, "Just solve this for me." Bit gives her the answer. She copies it down. She's done. She hasn't learned isolation. She hasn't learned how to think about the problem. She's just learned how to outsource thinking.
That's an answer machine. And it's not her fault. The tool isn't the problem; using it as a shortcut instead of a support for thinking is.
A Real Example
Let's say Sam has to write a persuasive essay on whether social media is good or bad for teenagers. Here's the trap approach: she asks an AI to "write me a 500-word essay on why social media is bad." The AI churns out something coherent, well-argued, and completely devoid of Sam's actual thinking. It's AI-written homework. Grade it, done, nothing learned.
Here's the thinking partner approach: Sam asks Bit, "I think social media is bad, but I'm not sure what my strongest argument is." Bit asks her some questions. What do you mean by 'bad'? What have you actually observed? Sam talks through some ideas—she's noticed her friends get anxious when they compare themselves to influencers. She's also noticed that social media helped her connect with people who share her niche hobby.
Bit helps her organize these thoughts. Maybe her real argument isn't "social media is bad," but "social media is powerful and we're not teaching kids how to use it." Sam writes the essay herself, but with actual thinking underneath it. The essay reflects her ideas, bounced off a smarter partner.
She learned how to think. She also wrote something that matters to her.
The Conversation to Have
David sat down with Sam. He didn't lead with accusations. He said, "I noticed something in your essay that made me curious. The phrasing felt different from how you normally write. Can we talk about how you wrote it?"
Sam got defensive at first. But David stayed calm. "I'm not trying to get you in trouble. I'm trying to understand what homework is actually for now that AI exists."
Here's what he said—and you can use this almost word-for-word:

"I'm not mad about the essay. I'm worried you skipped the part that was the point. Homework isn't about handing in a finished essay. It's practice for thinking. If an AI does the thinking, you turn in the work but miss the workout."

"But what if I use AI to help?" Sam asked.

"Helping is fine. Helping means the AI works with your thinking, not instead of it. Let's write down the difference so we both know where the line is."
✅ Do Now: The Homework Protocol
This week, sit down with your kid and make a three-step agreement. Write it down and put it somewhere visible (fridge, shared doc, whatever works for your family).
- Try first. Before asking AI anything, spend 10–15 minutes actually trying to do the work. Read the problem. Take notes. Get confused. That's part of thinking.
- Ask for thinking help, not answers. If you're stuck, you can ask AI a question about the process. "How do I start approaching this?" or "What am I missing here?" Not: "Just do it for me."
- Check your own work. After you finish, use AI to help you review. Does this make sense? What's wrong here? Did I miss anything? But you fix it. You do the learning loop.
That's it. Three steps. Make it a rule, not a lecture. When your kid uses AI on homework, ask: "Did you follow the protocol?"
What's Next
Next issue: when AI sounds completely confident but is completely wrong — and how to teach your kid to catch it before it matters. It's called hallucination, and once you see it, you can't unsee it.
Until then—watch for the homework traps in your own house.