New state AI policies released: Signs point to inconsistency and fragmentation

In October 2023, CRPE reported that only California and Oregon had provided schools with guidance on navigating AI, while 13 other states planned to release similar guidance. Since then, three additional states have weighed in: North Carolina, West Virginia, and Washington. Virginia’s governor also released an executive order for “AI integration throughout education” that directs its state agencies to lead AI integration efforts in schools and classrooms. Some state legislatures are also proposing new laws: Kentucky would require its Department of Education to develop AI guidelines by July and mandate training next year; Tennessee would require districts to create their own AI policies by Fall 2024 in partnership with higher education organizations.

A year ago, the AI conversation centered on plagiarism and bans. Now, most guidance urges educators to accept AI and use the technology to enhance teacher effectiveness. Virginia’s guidelines state, for example: “AI can unlock new realms of knowledge that were previously unimaginable…. However, AI also poses risks that must be carefully managed…. AI should never fully replace teachers, who nurture students’ critical thinking, values, and character development.”

Despite these early movers, AI recommendations, policies, and access to training for educators are, by and large, ambiguous and underdeveloped. Some states are intentionally staying out of the conversation while others lean in. Given that the U.S. Department of Education may wait until as late as the end of 2024 to release more resources, signs point to a potentially decentralized and fragmented set of approaches to AI. Meanwhile, districts continue to ask for more help. Given the rapid evolution, widespread adoption, and often difficult-to-detect nature of AI, decentralization might be the most sensible approach to regulation.

What can we learn from the latest guidance, and what are the implications for AI adoption and equity of access? We look at two states, North Carolina and West Virginia, that have provided the most substantial recommendations to date. While they share some common ground, these two states’ policies also differ in key areas. These varying approaches may provide the most insight into how the AI educational landscape is shaping up and how proactive states might build unique conditions for different expectations, access, and visions for AI integration in their schools.

Both North Carolina and West Virginia provide policy guidance and resources but differ philosophically over AI’s use and implications 

North Carolina’s and West Virginia’s policy documents share some commonalities.

  1. They both acknowledge AI’s massive potential for learning. 
  2. They both cite the need for safe-use policies.
  3. They both provide links to curriculum and instructional resources.
  4. They both rely on external expertise (at least so far).

However, their guidance differs in tone, language, and priorities—especially around critical topics like plagiarism, AI literacy, and risk management.

Plagiarism. North Carolina takes a particularly innovative stance on plagiarism, asking its leaders to “rethink plagiarism and cheating in the age of AI.” In preparing for a future where “it will be a common assumption that all writing … may be written with AI,” its policy document explains, 

“it is perhaps shortsighted to automatically consider all use of AI as ‘cheating.’ Educators must rethink their ideas of what constitutes plagiarism and cheating in today’s world and adapt their teaching assignments and expectations to this new reality.” 

It recommends schools use a red-yellow-green “AI Acceptable Use Scale” to guide decisions about what level of AI use, if any, a task warrants. By sharing this proposed new scale, the state hopes to build a shared understanding and common language around AI, student use, and the potential for plagiarism.

West Virginia also presents a framework for teachers to consider including AI in assignments, but its language includes punitive measures should plagiarism occur. Compare North Carolina’s language to West Virginia’s:

“students and staff shall not copy from any source, including generative AI, without prior approval and accurate documentation … Teachers must be clear about when and how AI tools may be used to complete assignments and restructure assignments to reduce opportunities for plagiarism by requiring personal context, original arguments, or original data collection.”

AI literacy. West Virginia generally frames AI adoption as a commitment from AI users, asking districts, staff, schools, administrators, educators, and students to “commit” to its outlined AI principles. It frames AI literacy through the lens of computational thinking, limiting its AI curricular recommendations to courses that teach computer science and career and technical education standards.

In contrast, North Carolina’s language emphasizes the need for adults to comprehensively educate all AI users, stating, “…it is imperative that all schools and districts ensure all staff and students are AI literate, and that AI literacy is infused in all curriculum areas.” Its guidance document dedicates four pages to “AI Literacy for All,” which recommends that schools teach AI literacy across students’ entire K-12 career and provides differentiated AI literacy strategies parsed by grade span. It also takes a broad stance on curricular adoption, encouraging “infusing AI literacy in all curriculum” and describing a future where AI is integrated across all subjects. It dedicates another three pages to ways that teachers might use generative AI to manage daily work tasks, assess student work, and aid students in learning. 

Acceptable-use approaches. While North Carolina’s policy provides just a few paragraphs on data privacy risks and security limitations inherent to AI adoption, West Virginia’s addresses a range of potential unintended outcomes over three-plus pages of text. They both acknowledge the risks of AI capturing personally identifiable information and encourage districts to disclose all AI-enabled tools eligible for classroom use. However, West Virginia will require all students under 18 to get parental permission to use AI tools, stating that “strict policies, consideration, and discretion are essential when integrating AI in educational settings to safeguard against any privacy breaches or misuse of student data.” In contrast, North Carolina has no such restriction. It encourages leaders to consider using a growing list of potential “built-for-education” model options like ChatGPT and Khanmigo while developing their AI guidelines.


Implications: What will the future of AI guidance look like? 

All five states that have provided AI guidance for school systems say they will work closely with district and school leaders to navigate this fast-evolving terrain—something district leaders desperately want. Each state’s guidance documents provide inspiration and ideas for the next set of guidelines and may even help inform districts whose state leaders stay silent. AI implementation unquestionably will continue to be an evolving and often unpredictable process, but these early movers may help us understand where the AI policy landscape is heading.  

Districts will still need to lead on AI policy-setting. Even with state support, these early guidance tools are just broad suggestions. Districts will still need to undertake the time-consuming task of determining which policies should change and the language to use. They will continue to have significant discretion in shaping their AI approaches, as these guidance documents are what they say they are: guidance. States plan to update them routinely, meaning that districts will have to do the same. 

Regional differences in district policies could emerge as additional states weigh in. Before districts chart a course, states that weigh in early have a significant opportunity to shape educators’ mindsets and approaches. Given the range of state approaches, we will likely see state-based variation and potential fragmentation in how districts integrate AI into students’ and adults’ day-to-day work. Within the next few years, we may find that teachers in different states experience varying expectations, levels of training and support, and everyday work. As a result, their students will receive widely varied AI literacy education and access to AI-enabled tools.

States with broader visions for an AI-enabled workforce may establish more AI-friendly educational settings for their students. North Carolina’s guidance is grounded in a new vision for its workforce. It opens with a review of statistics from the 2023 World Economic Forum’s Future of Jobs Report, which predicts 75% of companies will implement generative AI by 2025 and names AI and Machine Learning as the fastest-growing job fields. Virginia frames its charge to integrate AI into education with a prediction that “this emerging technology promises to catalyze business innovation and economic growth for the Commonwealth.” West Virginia, in contrast, does not present a new vision for its state workforce but focuses on AI’s value in building skills needed for technology and computer science. States that embrace and acknowledge how AI will reshape the workforce will likely create the most favorable conditions and opportunities for students and educators to access AI-enabled learning and work.
