
AI and Education Policy 101: The Evolving Landscape and Examples from Early Adopters

News broke last week of litigation against Hingham Public Schools in Massachusetts, where a high school senior was disciplined and given a failing grade for using AI assistance on a school assignment. The student’s parents filed suit on the grounds that the district had no official policies on AI usage in place.

Districts and states may be wondering what they can do to prevent such a scenario and whether they need to establish or fortify their existing guidelines. Given CRPE’s research and other work in AI, we’ve developed a preliminary guide to help school systems navigate this moment. 

What we know so far

Most districts are not setting AI guidance—yet. Last fall, CRPE and RAND’s national survey of superintendents found that just 5% had set policies on AI. Another 31% reported plans to develop policies in the future. CRPE’s latest analysis of AI “early adopter” school systems found that many of these forward-thinking districts (26 out of 40, or 65%) have published AI policies, but certainly not all of them.

The policy landscape for AI in education remains fragmented, with limited federal guidance and inconsistent state policies. While some states have shared frameworks for AI integration, many schools are left without clear direction, leaving districts to develop their own policies on a complicated, fast-changing technology. This is a burdensome task for any district, especially those with limited capacity and resources. A lack of coherent state or federal guidance also increases the risk of inequitable adoption of AI technologies and widening gaps in access to critical resources.

AI use in schools is equally fragmented. CRPE and RAND’s Fall 2023 survey found that only 18% of teachers reported regularly using AI tools in the classroom, and even fewer were using tools recommended by their schools or peers, underscoring the need for more robust guidance and professional development.

We lack a track record of getting ahead of technological innovation in schools. Historically, states have provided limited support for other technology trends in education, such as the Internet and social media, leading to reactive implementation of these technologies. We risk a similar pattern emerging with AI, where many schools are left to navigate its complexities without adequate guidance.

What should districts do?

All the above conditions have created uncertainty, with limited examples of how to approach AI policymaking. However, there are guiding principles for districts emerging from early adopters that have put thoughtful frameworks and policies in place:

  1. When setting AI policy or guidance, plan to revisit and update it as needed. Collect data from students and educators to assess effectiveness in the first years of implementation. Consider scheduling AI policies for frequent review.
  2. Parallel to any new AI policy, build a framework that includes a positive vision for what AI means in your school district and how it can help students, teachers, and administrators. Build this in collaboration with key stakeholders. ABC Unified and Gwinnett County are examples.
  3. Policies and contracts should prioritize strong data privacy protections. Data sharing between districts and educational technology providers can improve AI tools, but this must be balanced with robust privacy safeguards to protect student information. The California Department of Education’s Learning With AI, Learning About AI guidance outlines how districts can evaluate AI vendors’ terms of use and data collection practices. Iowa City Community School District’s AI in the Education Environment Regulation is a board policy that regulates AI tool selection.
  4. Guidance should address academic integrity and acceptable use with both clarity and balance. Given the ubiquity of AI—including its integration into tools like Google and Safari—students and educators can benefit from language that clarifies which assignments allow which type of AI use, if any. Lynwood Unified has created a publicly available folder with draft policies for generative AI. It includes a working draft of its responsible use policy for AI tools and student-friendly ChatGPT guidance that frames usage around student learning and student-teacher relationships. At the same time, guidance should be flexible enough to accommodate variation and continuing technological advancements, giving students and educators room to explore and learn as tools develop. Systems may opt to share guidance documents, like Santa Ana Unified’s AI Compass, rather than make formal policy changes.
  5. Once you establish any new AI policy or framework, promote AI literacy for both educators and students. AI literacy is vital for ensuring effective and equitable usage and helping all involved parties—educators, students, and families—understand how AI works and its potential impact on education.
  6. Strong leadership and transparency are key to establishing public trust in AI. This will require open communication and active engagement with your communities. Clearly display policies and guidance on publicly available tools. Hold public town halls to discuss the application of your policy and provide examples. Make AI a part of your leadership vocabulary, because it’s not going away.

As we learned during the pandemic, students and educators benefit from proactive, thoughtful approaches to new technology and change. School systems that wait to acknowledge AI risk furthering confusion, misuse, and political vulnerability.
