Observers have started to craft different narratives about where artificial intelligence (AI) will lead us, but we can’t know how the story will end. All we know for certain is that AI is rapidly transforming our world, from the classroom to the lab, the boardroom to the marketplace—and we humans aren’t ready.
That’s especially true of our schools of education. A recent CRPE brief found that few education schools currently offer coursework or training on AI to teachers. When ed schools do offer such training, it’s often focused on handling student plagiarism rather than AI’s broader instructional possibilities or ethical implications. Ed school leaders cite faculty resistance or indifference as impediments to preparing future teachers.
As a professor in the Mary Lou Fulton College for Teaching and Learning Innovation (MLFC) at Arizona State University (ASU), I’ve been teaching courses for university students—such as one on Human Creativity and AI in Education—and creating professional development for practicing educators while wrestling with two questions in particular:
- How can we ensure these tools don’t reinforce long-standing biases and inequities?
- How can we guard against our innate tendency to anthropomorphize these systems, recognizing that doing so may leave us susceptible to various forms of influence or manipulation?
Companies that are hell-bent on being first to market have neither the time nor the inclination to ask such questions. They are too busy raising the billions of dollars needed to power these innovations. Plus, they’re struggling just to get the basics right, like accurately explaining how to watch a lunar eclipse or solving simple math problems.
Instead, higher ed must help prepare teachers and principals to maximize the promise and minimize the risk of AI in education. Ed schools are uniquely positioned to address questions such as these and, in so doing, live up to our promise of intellectual rigor and innovative leadership.
First, it’s essential to recognize what AI is. I’ve previously described it as a smart, drunk intern, a BS artist, and WEIRD (Western, educated, industrialized, rich, and democratic). It is an alien intelligence trained on immense amounts of data that, despite its volume, is often narrowly focused on certain countries, demographics, and languages. This AI requires a certain type of thinking and interaction that we are all trying to learn, even as it constantly changes.
Critics complain that generative AI sometimes hallucinates. Not quite. The reality: AI is a hallucination machine. Generative is in its name, and that is precisely what it is. It generates text, images, and more based on statistical probabilities and patterns it has “learned” from its training. It has no understanding of what these mean or their connection to the real world. When its outputs align with our shared beliefs, we are impressed; when they don’t, we call them hallucinations.
This hallucination machine is prone to bias, and not just the explicit kind, such as giving a lower grade on an essay written by a Hispanic student than by a white student. That’s too obvious, and AI companies have created guardrails to prevent it from happening. Instead, the biases are implicit, invisible, and hence far more insidious.
For example, one experiment found that while scores didn’t differ significantly for students labeled “Black” or “White,” there were substantial gaps between students labeled as low-income or attending public schools versus affluent students at elite private schools. Another experiment found significant differences if the essay changed just one word; essays that mentioned a preference for rap music scored lower and received less complex feedback than those that mentioned classical music.
These results must give us pause, particularly because we are also observing a significant increase in the use of AI by teachers and other educators. As Melissa Warr and her colleagues point out, the indiscriminate use of these tools risks exacerbating achievement gaps and harming students from marginalized communities.
The dangers of anthropomorphizing are just as troublesome. We are hardwired to see human intentionality in everything. For example, in a classic study from the 1940s, psychologists Heider and Simmel created an animated video featuring only lines, triangles, and circles. Thirty-three of the 34 viewers saw a story of pursuit, conflict, or romance. We are innately predisposed to anthropomorphize and see purposeful behavior in even the most abstract stimuli—and now we live in a world of AI.
These technologies can produce coherent, contextually relevant responses and engage in natural language conversations across diverse topics. They can simulate creativity in writing or art, mimic human language patterns, and adapt to user preferences. Further, they can remember context, display apparent knowledge, express uncertainty, and admit mistakes. Increasingly, they are multimodal—they can listen and speak.
As we interact with increasingly sophisticated AI, we open ourselves to unprecedented levels of manipulation. Our most private conversations can become data points for targeted advertising and other forms of influence. The socio-emotional development of younger generations growing up with these technologies may veer into uncharted territory. Our ethical frameworks, designed for human-to-human interaction, become muddied as we grapple with treating non-sentient entities as moral agents.
So, how can higher education help K-12 educators address challenges such as these? We, and everyone else, failed to foresee the many negative consequences of social media—polarization, isolation, and ubiquitous scams. Let’s not make that mistake again. Our goal must be to play both the short-term and the long-term game.
In the short term, schools of education should be doing everything they can to encourage their faculty to experiment with these tools so that they can Sherpa the rest of us through this brave new world. At MLFC, for example, we have taken a systems approach to AI integration, creating a pool of OpenAI licenses through ASU’s groundbreaking partnership with OpenAI that faculty, doctoral students, and staff can all access. Instead of top-down mandates, we’re urging baby steps and hoping that a thousand flowers bloom.
We’re telling our colleagues to be creative. Our faculty and staff are leveraging AI for diverse tasks, from drafting emails to crafting grant budget justifications. They have integrated AI to enhance student learning, offering structured opportunities for ethical AI exploration in coursework. Notably, doctoral students are pioneering AI-assisted qualitative inquiry methods, such as using AI to complete initial coding for interview transcripts and field notes. They’re also researching AI applications in specific educational contexts, like K-12 classrooms and online learning environments.
At the same time, and in the long term, we need to recognize that AI’s most significant influence on education will not necessarily be how it is used in the classroom, but rather how it changes the world within which classrooms exist. We missed the boat on social media. Let us not let that happen again.
If, in 2008, we had foreseen that social media would maximize engagement through addictive features and extreme content, educators could have inoculated students more effectively against such manipulation. We could have built stronger media literacy skills and raised awareness about how these technologies hijack our most important resource—attention. And if you think our attention is hijacked now, AI agents will make it exponentially worse.
The key to navigating this new landscape may not lie in changing AI but in better understanding and adapting ourselves. To avoid getting sucked in by AI, we need a deeper understanding of ourselves. Of our biases. Of our cognitive shortcuts. In the end, the biggest challenge isn’t understanding AI. It’s understanding ourselves.
That means seeing our students not as plagiarists out to cheat the system but instead as co-creators of this future—a future that none of us can predict. AI is new for us, as it is for them. But as those responsible for educating the next generation of educators, we must get off the sidelines and into the game with honesty, transparency, and humility.
Punya Mishra is the Director of Innovative Learning Futures at the Learning Engineering Institute and a Professor in the Mary Lou Fulton College for Teaching and Learning Innovation at Arizona State University. He and MLFC colleagues created a professional development online course, Generative AI: A New Learning Partner, offered in a self-paced format with instructor-facilitated options available for larger groups. The course is designed to help educators engage with GenAI as a learning partner while exploring ethical considerations.
This blog is part of our AI Learning Series, where CRPE invites thought leaders, educators, researchers, and innovators to explore the transformative potential—and complex challenges—of AI in education. From personalized learning platforms to AI-powered administrative tools, the education landscape is evolving rapidly. Yet, alongside opportunities for increased equity, efficiency, and engagement come pressing questions about ethics, accessibility, and the human element in learning environments. Join us as we unpack the promise, the pitfalls, and the path forward for AI in schools.