Introduction
Educators at all levels are grappling with how to address AI’s risks and benefits. With little federal policy in place, states have a critical role to play. How are states playing this role, and what actions are they taking? A new database from CRPE provides insight.
For the past three years, CRPE has been studying and reporting on AI early adopter school districts to learn from their experiences. In the fall of 2025, we launched a new study of early adopter states to learn how they are approaching AI adoption and integration. Today, we are releasing a database of early state actions on AI, alongside preliminary analysis. The database compiles publicly available information on how 20 Early Adopter states (nominated by state and national AI educational experts, intermediary organizations, and advocates) are exploring AI integration and adoption in K-12 education.
This database builds on CRPE’s earlier analysis of the first states with AI guidance, where we reported inconsistency and fragmentation in state AI policy.
This Early Adopter state database includes information on the AI guidance states have issued, relevant legislation and executive orders from the last two years (2024 and 2025), the professional development and pilot programs in place, and more. Viewers of the database can learn more about specific state actions, compare actions across states, and identify potential models for partnerships, professional development, or pilot programs to apply in their own context.
In general, these early adopter states are taking a very cautious approach at this point, focusing more on building educator capacity and piloting than on large-scale regulation or tool adoption. The 20 states we examined commonly provide professional learning opportunities (18 of 20), leverage partnerships to support AI goals (17 of 20), issue guidance (16 of 20), and pilot AI strategies or tools (11 of 20). This caution about moving beyond guidance may reflect uncertainty and a lack of legislative direction on how best to regulate AI in education in a rapidly evolving policy landscape.
This database represents our initial scan of the state AI integration landscape, surfacing emerging patterns, areas of convergence and divergence, and where states are learning through experimentation. As of early 2026, we are collecting more data through a national survey and focus groups with SEA leaders and partners. We aim to share deeper analyses later this spring on the challenges states hope AI can help solve and on what helps (or hinders) their progress.
Flexibility Over Directives
These states appear to be using guidance to signal priorities while not positioning themselves as enforcers, relying on a “local control” approach to AI decision-making. Sixteen of 20 have issued AI guidance, all of which is non-binding, and many are updating guidance regularly. For example, since publishing guidance in May 2024, Arizona has issued three updates. North Carolina, which first released guidance in January 2024, has issued 20 updates in the two years since. This approach allows early adopter states to adapt, revisit, and update guidance as implementation and AI technologies progress.
State legislatures are also getting involved. In 19 of the 20 states (95%) in the database, legislatures introduced at least one AI-related bill in 2024 or 2025 that involved K-12 education and/or impacted the SEA. Examples include California’s SB 1288, which required the Superintendent of Public Instruction to convene a working group, develop guidance, and provide local education agencies (LEAs) with a model policy for the safe and effective use of AI that LEAs may choose whether or not to adopt. In Ohio, the legislature went a step further with HB 96, which not only required the SEA to develop a model policy on AI use in schools in December 2025 but also mandated that LEAs adopt their own AI policies by July 31, 2026.
Pilots and Professional Learning: Early Levers for Exploring AI
Most states’ AI actions center on learning and prioritize educators. Almost all (18 of 20) of the states in our database have helped facilitate AI-related professional development opportunities (e.g., summits, webinars, workshops, courses, grant programs) to build educators’ understanding of AI and how it can support teaching and learning. While the depth and quality of these opportunities vary significantly, states seem to consider professional learning and AI literacy key elements of AI adoption.
- California offers an AI Webinar Series called “Learning with AI, Learning about AI.” Each webinar features speakers, actionable content, and a learner-centered approach.
- North Carolina offered a weekly webinar series on the responsible implementation of AI and AI Literacy. Educators who participated in 2024-25 were eligible to receive credit and a certificate of attendance, and all educators can access recorded webinars on demand.
- In Massachusetts, the Department of Elementary and Secondary Education has created modules, a webinar series, and a course with a certificate of completion to build AI literacy for educators. In addition, the Massachusetts Executive Office of Education launched Future Ready: AI in the Classroom, a professional development pilot to support 45 educators in bringing AI into their classrooms.
- Rhode Island partnered with Khan Academy to offer Khanmigo, an AI teaching assistant tool for educators and a study buddy for students, to all LEAs during the 2024-25 and 2025-26 school years at no cost.
- In Indiana, 112 schools received one-time competitive grants to pilot an AI-powered platform of their choice for the 2023-2024 school year, reaching 2,466 teachers and 45,244 students. The Indiana Department of Education produced a final impact report and continued to support AI literacy building and adoption through other state-led grants.
- In Louisiana, the Department of Education’s Board of Elementary and Secondary Education endorsed and funded pilots for three AI educational tools across the state—Amira, Zearn, and Khanmigo. The state measured and shared results on the impact of these efforts and is using this data to explore additional evidence-based programs and pilots for future use.
Strong Reliance on Partners to Advance AI
Most of the states in the database (17 of 20) are leveraging partnerships with institutions of higher education, industry, or nonprofits with AI expertise (such as aiEDU, TeachAI, AI for Education, ISTE, CoSN, Digital Promise, and AI4K12) to expand AI literacy and capacity. In some cases, partners have helped SEAs develop AI guidance, provide professional development, and design or implement pilot programs.
In Colorado, the Department of Education and the Colorado Education Initiative (CEI) have leveraged their partnership to develop and disseminate resources, provide workshops and professional development, and foster coherence across the education ecosystem. The state’s Office of Economic Development and International Trade is funding CEI’s Elevate AI program, which reenvisions the teaching profession in the age of AI, equipping educators with skills to leverage AI and preparing students for a future where AI is integral.
In Arizona, the Department of Education (ADE) was a member of a statewide team that developed AI guidance for Arizona schools. Partners at the Arizona Institute for Education and the Economy at Northern Arizona University publish and maintain the document. The ADE is also a member of the AZ AI Alliance, a group of education organizations committed to the responsible, ethical, and effective implementation of AI in Arizona’s schools.
AI Strategies Are Fairly Uniform Across Political Contexts
State-level political leadership does not appear to significantly influence the type of action the 20 states in the database have taken to address AI in K-12 education. Of these early adopter states, seven are Democratic-leaning (35%), eight are Republican-leaning (40%), and five have divided political leadership (25%). States are pursuing strategies at similar rates regardless of political context, with the exception of state-supported AI pilots, where Rhode Island is the only Democratic-leaning state running a statewide AI initiative.
| State Strategy | Democratic-Leaning States (7) | Republican-Leaning States (8) | Divided States (5) |
|---|---|---|---|
| Issue Guidance | 86% (6) | 63% (5) | 100% (5) |
| Support Professional Learning | 71% (5) | 88% (7) | 100% (5) |
| Offer Pilot Opportunities | 14% (1) | 75% (6) | 80% (4) |
| Leverage Partnerships | 71% (5) | 63% (5) | 80% (4) |
Alongside limited federal guidance and evolving state politics, the rapid pace of change at all levels of government raises questions about whether states can (or must) adjust their role. Politics will likely undergird some of the choices states make, especially as local and national political contexts shift and more state legislators take notice of AI use in education. During the 2025 legislative session, 53 bills were proposed on the use of AI in education across 21 states. In 2026, legislatures now face increasing tension between federal pushes for AI deregulation and growing local concern about AI’s safety, potential misuses, and impact on the environment. Our interviews with state leaders will provide more insight into what political dynamics or factors are influencing their decisions.
While state-level stances on AI may reflect necessary flexibility for this moment, they may also reflect the lack of clarity states have about how AI will reshape aspects of K-12 education and what meaningful AI integration truly entails. Without clear direction and investment, gaps in infrastructure, access, and capacity could deepen, leaving some states and districts positioned to harness AI’s benefits while others fall further behind.
State Education Agencies (SEAs) in these early adopter states are attempting to influence district use of AI through guidance, training, and pilot opportunities. This is a reasonable starting point, but AI advances aren’t abiding by school year calendars and review cycles. Given that students and educators are already using AI tools and that the technology is evolving at a breakneck pace, SEA influence alone may not be enough. A lack of clear policies could inadvertently create more risks and vulnerabilities.
States could lean in with clearer expectations, resources to catalyze responsible exploration, and targeted funding to define what responsible AI use does and does not look like. They have an opportunity to coordinate strategy, absorb some of the policy and compliance load that districts are struggling to manage on their own, and answer districts’ requests for more support.
Below are some actions that states might consider, based on how some of the early adopter states we studied began their efforts.
- Signal a clearer commitment to AI. State legislatures can establish an office of AI policy to oversee responsible AI use in the state, or set up statewide task forces to bring partners together to examine AI issues more broadly. At least four states and the Department of Defense Education Activity fund AI-specific support positions.
- Set clearer guardrails for procurement and data privacy. States can negotiate and steward statewide contracts with edtech vendors that make AI tools more affordable, secure, and vetted for quality, as Utah has done. Clear procurement guidance and model privacy standards reduce risk, lower costs, and prevent every district from reinventing the wheel.
- Act as a trusted source of information. AI products and promises of effectiveness are flooding districts. SEAs can curate and disseminate high-quality resources on AI tools, responsible use, and AI literacy to help leaders and educators separate signal from noise. California, North Carolina, Massachusetts, and others have done this already.
- Create space and incentives for responsible innovation. If states want to develop creative, effective AI integration, they need to make room for experimentation. That could mean rethinking assessment mandates, funding pilot programs to test new impact measures, supporting AI-enabled data collection tools, and launching RFP processes that prioritize responsible, evidence-informed AI-powered learning design. States can also provide common evaluation frameworks so districts don’t have to assess tools without any support. Indiana, Louisiana, and Rhode Island are models for piloting and measuring new uses of AI.
- Level the technology playing field. States must expand broadband partnerships, invest in infrastructure, and ensure that rural and under-resourced districts have access to emerging tools. Tool quality and computing power are part of an emerging access gap: students who only have access to free or early-edition models are not on the same playing field as those with access to frontier or paid models. Access has to be built into adoption from the start to prevent AI from inadvertently widening gaps. While none of the states in our database have fully solved this, some are closer than others. In Massachusetts and North Dakota, more than 99% of residential locations have fixed broadband availability. In Utah and Washington, more than 95% of residents live in households with both a computer and a broadband subscription.
If this moment calls for states to evolve, we have to be honest about what that means. Most SEAs do not have the resources to staff dedicated AI teams. Few have the procurement, data science, or legal capacity to independently vet a rapidly changing marketplace. Without new investment and cross-sector support, expectations will outpace reality.
This is a moment for shared responsibility. Philanthropy can fund pilots and capacity-building. Researchers can develop research agendas and evaluation frameworks to help generate evidence about the AI strategies and tools that are (and are not) working. Technology partners can commit to transparency and responsible design. State and federal policymakers can create sustainable funding streams that align with innovation and infrastructure. If we want states to lead, then the ecosystem must back them.
CRPE is holding conversations with state leaders in 2026 and will report back with deeper insight into the practical, political, and capacity challenges shaping states’ AI strategies.
Methodology Notes
The database compiles publicly available information, captured from November 2025 to January 2026, on how 20 Early Adopter states are exploring AI integration and adoption in K-12 education. States were recommended by experts in the field during interviews conducted in fall 2025 or were identified for our study after taking some form of action to address and/or integrate AI into their state educational priorities. Our research team also considered geographic and political variety.
CRPE drew on interviews with field leaders and our AI Early Adopter School District findings to create a schema of potential AI-related actions states might take. We then reviewed and conducted thorough research on publicly available information on recommended states, updated our schema, and catalogued findings into our State AI Early Adopter Database.
The database is not evaluative or meant to be an exhaustive account of all early actions and state AI adoption efforts.