When AI Labs Talk About Education: Anthropic's Vision and the Questions We Should Be Asking

When tech companies announce their latest initiatives, there's always a story they want you to hear. Anthropic, the AI lab behind Claude, recently hosted a conversation about artificial intelligence and education—complete with former teachers, concerned parents, and data about how students are using their chatbot. The framing is familiar: AI as the great democratizer of education, the solution to teacher burnout, the path to personalized learning at scale.
But strip away the carefully constructed narrative, and what we're actually looking at is an AI company grappling with the fact that nearly half of student interactions with their product are what they delicately call "transactional"—students using AI to do their homework, not to learn. The question we should be asking isn't just "What can AI do for education?" but "What is AI already doing to it, and who decided this was the direction we'd take?"
The Brain Rot Problem: When Your Users Tell You Something's Wrong
Here's what caught my attention in Anthropic's conversation: their own research found that 47% of student interactions with Claude showed "little engagement"—direct, transactional exchanges where the AI performed higher-order cognitive tasks like analysis and synthesis while students... didn't.
Drew Bent, who leads education work at Anthropic, noted that Claude was performing at the top levels of Bloom's taxonomy—exactly where educators want their students to be. The students, meanwhile, were outsourcing precisely those skills. Anthropic's team heard students talking about "brain rot," the creeping awareness that using AI as a shortcut was actually making them less capable.
This is where the conversation gets interesting. Instead of the usual tech industry response—denying the problem or blaming users for "misusing" the tool—Anthropic built Learning Mode, a feature designed to make Claude act as a tutor rather than a homework completion engine. It took two weeks to build the initial version, according to product engineering manager Ephraim.
Two weeks. Which raises an uncomfortable question: if protective features can be built this quickly, why aren't they the default? Why does responsible AI deployment always seem to come after we've already exposed millions of students to the problematic version?
The Critical Thinking Fallback
When AI advocates talk about education, "critical thinking" has become the default answer to every concern. Students don't need to memorize facts anymore—they need to think critically about AI outputs. They don't need to write essays—they need to critically evaluate AI-generated text.
Maggie, who manages Anthropic's education team, framed the goal as teaching students to be "critical consumers of information," people who question what they're told, whether by AI or by humans. It's a reasonable point. The problem is that critical thinking has been the aspirational goal of education for decades, and we're still not very good at teaching it at scale.
Now we're asking students to develop sophisticated epistemological frameworks—understanding how to verify truth in an age where AI can confidently state falsehoods—while simultaneously using these same AI tools for their daily assignments. We're asking teachers, who are already overwhelmed, to completely reimagine their curricula and assessment methods. And we're expecting institutions that move deliberately (and often slowly) to adapt to technology that's changing every six months.
What Anthropic Gets Right (And What They're Not Saying)
To their credit, Anthropic's team acknowledges things that many in the industry won't. Maggie stated they'd "rather teach a million people to not use AI than watch a billion people become dependent on the technology." That's a remarkable statement from someone working at an AI company, and it deserves recognition.
Their AI Fluency courses, developed with professors Joe Feller and Rick Dakan, focus on teaching people when not to use AI—emphasizing autonomy in AI interactions rather than just productivity hacks. They're not optimizing for engagement metrics or retention. As Ephraim noted, their success isn't measured by keeping users glued to the product.
This is genuinely different from the standard tech playbook. But let's be clear about what's happening here: Anthropic is essentially saying "We've built something powerful that could be harmful if misused, so here's some training to help you use it responsibly." The burden of responsible use still falls on educators and students, not on the fundamental design of the technology.
The Personalization Paradox
The conversation kept returning to personalization: AI tutors available 24/7, curriculum adapted to each student's interests, one-on-one instruction at scale. Ephraim cited research (most likely a reference to Benjamin Bloom's well-known "two sigma" studies) showing that students with one-on-one human tutoring perform better than 98% of students in traditional classrooms.
But here's what concerns me about this framing: it positions AI as solving a resource allocation problem—there aren't enough human tutors—without interrogating why that resource scarcity exists in the first place. We undervalue teachers, underfund schools, and structure education around industrial-era models of efficiency. Then we introduce AI as the solution rather than addressing the systemic issues that created the problem.
Moreover, the vision of highly personalized AI education could accelerate the "unbundling" of educational institutions that the team discussed. If AI handles knowledge transfer efficiently, what happens to the communal aspects of learning? To the development that happens through struggle and peer interaction? To the teacher-student relationships that often matter more than any individual lesson?
Maggie suggested that AI could free teachers to focus on "the connection pieces," the relationship building that makes education meaningful. But this assumes that the transactional, knowledge-based parts of teaching can be cleanly separated from the relational ones—a premise that many educators would challenge.
The Five-Year Question
When asked about success in five years, the team's answers were revealing. Ephraim envisions everyone on the planet having a personalized AI tutor while educational institutions survive to play their "vital role." Zoe wants a "shared vocabulary" around AI and learning. Maggie hopes for teachers having more time for individual relationships.
These are all reasonable hopes. But they sidestep harder questions: What happens to the teaching profession when knowledge transfer is automated? How do we prevent AI-assisted education from widening existing inequalities rather than narrowing them? What does it mean for human development when we outsource cognitive scaffolding to machines during crucial developmental periods?
Drew offered perhaps the most interesting perspective: as intelligence becomes abundant and commoditized, we might be forced to focus on "what makes us human"—the things beyond intelligence that define us. There's something both liberating and unsettling about this vision. Liberating because it could free us from defining ourselves primarily by our economic productivity. Unsettling because it's being driven by technological change rather than deliberate social choice.
What We Should Demand
Transparency about impacts: Not just case studies of successful implementations, but honest data about how AI use correlates with learning outcomes, engagement, and long-term skill development.
Default protective features: If Learning Mode prevents "brain rot," why isn't it the default for users who identify as students? Responsible design shouldn't be an opt-in feature.
Educator agency: Teachers need actual control over these tools, not just training on how to use them. That means customizable guardrails, visibility into AI interactions, and the ability to turn features off entirely.
Equity considerations: Personalized AI tutoring sounds great until you consider that students with resources will have human tutors plus AI, while under-resourced students get AI alone. How do we prevent a two-tier system?
Long-term research: We need independent studies on developmental impacts, particularly for younger students whose cognitive patterns are still forming.
Conclusion
Anthropic's education team seems genuinely thoughtful about these challenges. Their willingness to discuss "brain rot" and teach people when not to use AI sets them apart from companies that treat any use case as a success story. But good intentions from one AI lab aren't enough when the entire industry is racing to embed these tools in every aspect of education.
The quote that stuck with me came from an Oxford professor the team mentioned: "The age of AI will be the age of asking good questions." That's true. But we should start by asking good questions about AI itself—not just how to use it effectively, but whether the future it's creating is the one we actually want for education.
Because right now, AI companies are making decisions that will shape how the next generation learns, thinks, and understands knowledge itself. Those decisions are being driven by what's technically possible and commercially viable, not necessarily by what's pedagogically sound or socially beneficial. The least we can do is interrogate those choices before they become irreversible.
References
- Source: "What does AI mean for education?" - Anthropic Education Team Discussion
- Original URL: Watch on YouTube
- Published: December 16, 2025
- Participants: Drew Bent (Education Lead), Maggie (Education Team Manager), Zoe (Education Team), Ephraim (Product Engineering Manager)
About the author:
Léa Rousseau is a tech reporter for Digiall's blog, covering artificial intelligence, technology policy, and digital transformation. With a background in journalism and digital media, she specializes in critical analysis of tech industry practices and their societal implications. Based between France and Mexico, Léa brings a global perspective to technology reporting.
What do you think?
Questions about AI in your organization? While we work with these technologies daily, we believe the most important question isn't "Can we implement this?" but "Should we, and how?" That's the conversation worth having. Share your thoughts in the comments below.
#ArtificialIntelligence #EducationTechnology #CriticalThinking #AIEthics #FutureOfLearning #EdTech

