Are AI Models Damaging Pupils' Thinking Skills? What the Research Actually Tells Us

By Jordan Caspersz

An evidence-based look at the MIT and Carnegie Mellon studies – and what they mean for your classroom

The confidence trap

Carnegie Mellon and Microsoft surveyed 319 white-collar workers who use AI regularly. The pattern? The more confident people became in AI's abilities, the less critically they engaged with its outputs.

We see this in ourselves. When ChatGPT gives us something that looks good and sounds good, how often do we actually check it? Question it?

The researchers found AI could "inhibit critical engagement with work and potentially lead to long-term overreliance and diminished skill for independent problem-solving."

Here's an analogy: calculators in schools. We didn't think twice about using them. But if every maths operation were outsourced to a device, would pupils understand maths? Or just be efficient at pressing buttons?

AI is the calculator problem on steroids. And unlike a calculator, AI deals in plausibility. Sometimes brilliant accuracy. Sometimes confident nonsense. If we're not teaching students (and ourselves) to tell the difference? That's a problem.

The MIT findings

MIT recruited 54 university students to write essays whilst monitoring their brain activity with EEG electrode caps.

The findings were striking. Students using ChatGPT showed significantly less activity in brain networks linked to cognitive processing. Their brains were doing measurably less work. When asked to recall what they'd written, they struggled far more than students who'd written independently.

The researchers termed this "a possible decrease in learning skills." In plain English: when the AI does the heavy lifting, the brain doesn't have to – and skills that aren't exercised don't develop.

This research highlights a critical distinction: producing good work and actually learning aren't the same thing. Students could submit brilliant essays they barely understand. The marks might be excellent. But are they educated? Or simply proficient at prompting AI?

This matters because cognitive engagement isn't just about output quality. It's about building the mental models, conceptual frameworks, and thinking patterns that constitute genuine education rather than just successful assignment submission.

What AI should actually do in schools

This is where theory meets practice.

The evidence suggests a clear principle: the way schools implement AI matters more than whether they implement it.

Tools like Willow are designed around a fundamental insight: AI should complement teaching, not replace the cognitive work students need to do.

Consider the teaching process. When introducing a new topic – photosynthesis, the French Revolution, loops in programming – the teacher's role is to build initial understanding. To explain, model, check for misconceptions, and adjust in real time based on classroom dynamics.

AI cannot replicate this. The human expertise of recognising confusion, adapting explanations, and building genuine conceptual understanding remains firmly in the teacher's domain.

But here's where AI demonstrates genuine value: after initial teaching, it can provide personalised practice that adapts to individual student levels. It can engage in conversations that check understanding without simply providing answers. It can surface insights about which students have grasped concepts and which need additional support.

This represents AI as pedagogical support rather than shortcut. AI that enhances the learning process rather than bypassing it.

Jayna Devani at OpenAI describes this as using AI "as a tutor rather than just a provider of answers." The distinction is crucial. It's the equivalent of having an always-available teaching assistant who can break down questions, scaffold thinking, and guide pupils towards understanding – without simply completing the work.

The midnight study scenario captures this perfectly: when students struggle with a concept late at night and teachers aren't available, AI that can decompose the problem and support reasoning – rather than just outputting the answer – offers genuine educational value.

What school leaders and teachers need to understand

For classroom teachers, middle leaders, and senior teams considering AI policy, the research suggests several critical insights:

The risk is real, but not inevitable. AI can absolutely erode students' capacity for independent thinking if they use it to outsource the cognitive work. But implemented thoughtfully, it can genuinely support learning. The outcome depends on implementation.

Context determines everything. Using AI to generate essay outlines at 2am without understanding the question? Likely harmful. Using it to self-test on vocabulary, receive feedback on reasoning, or practise explaining concepts? Potentially valuable. The activity matters, not just the tool.

Students need explicit guidance. The Oxford University Press (OUP) research was unambiguous: young people want structured support on effective AI use. Simply providing access and hoping they develop good habits isn't sufficient. AI literacy requires the same explicit teaching as digital literacy or media literacy.

The key insight? Caution doesn't mean avoiding AI entirely. It means proceeding thoughtfully, with clear frameworks, genuine curiosity about outcomes, and a willingness to adjust based on evidence.

Why knowledge matters more than ever

Christine Counsell's observation resonates powerfully here: curriculum is a promise made to future teachers – a commitment about what students will need to understand the world they're entering.

AI makes that promise more urgent, not less.

If students can generate superficially impressive work without genuine understanding, education must become even more rigorous about building actual knowledge. The mental models. The conceptual frameworks. The thinking patterns that constitute genuine education rather than just the appearance of competence.

The uncomfortable truth: critical thinking cannot exist in a vacuum. Pupils can only think critically about what they actually know. AI cannot provide the intellectual foundation that makes judgement, creativity, and genuine agency possible. That still requires substantive cognitive work.

It still requires real learning.

And yes, learning is cognitively demanding. That's precisely the point. The challenge isn't a bug in the system – it's the feature that builds capable minds.

Further reading

  • MIT study on brain activity and ChatGPT use (2025)

  • Carnegie Mellon University and Microsoft study on AI and critical thinking

  • Oxford University Press survey of UK schoolchildren's AI use (October 2025)

  • Professor Wayne Holmes' research on AI and education at UCL

How is your school approaching AI implementation? The evidence base is still developing, and insights from practice are invaluable – book a call.