Protecting Students. Empowering Educators. Building Trustworthy EdTech.

AI is entering classrooms faster than safeguards can be put in place to protect children. The AI Ethics Index for K-12 Education provides rigorous, education-specific evaluation frameworks for the AI systems shaping how children learn.

Why K-12 Students Need Protection

Children Are Not Smaller Adults

AI systems built for general use carry different risks when deployed in schools. The ethical stakes are higher. The safety margins must be wider. The evidence bar must be higher.  

Developmental Vulnerability

Children’s cognitive, emotional, and social development is malleable and ongoing. AI systems that shape how young people think, feel, or relate to others carry long-term consequences that may not surface for years.

Power Asymmetry

Students cannot opt out of school-mandated technology. They cannot negotiate terms of service. They often cannot distinguish between AI-generated content and human input.  

Trust Formation

The relationships students form with AI systems during formative years may shape their baseline expectations for human-AI interaction throughout their lives.

Data Permanence

Data collected about children today may follow them into adulthood. Behavioral profiles, learning records, and interaction logs create shadows that persist.

Institutional Responsibility

Schools act in loco parentis. Their legal and moral duty to protect students extends to every tool they deploy.

 

These realities demand evaluation frameworks designed specifically for educational contexts.

The Core Framework

Model Design and Development

Assesses whether the system’s objectives, assumptions, and constraints are clearly articulated, justified, and appropriate for the intended use and societal context.

Fairness

Evaluates disparate impact, representational harms, and structural biases using quantitative tests and contextual analysis.

Privacy & Data Stewardship

Examines data provenance, collection practices, consent pathways, retention policies, and risks of re-identification or exposure.

Transparency

Measures the clarity, completeness, and accessibility of documentation, disclosures, interpretability tools, and known limitations.

Knowledge & Attribution

Evaluates factual accuracy, error modes, hallucination profiles, citation reliability, and the system’s capacity to differentiate fact from inference.

Human–AI Interaction

Assesses usability, clarity of affordances, risk of misuse or over-reliance, and differential effects across user groups.

Safety & Security

Tests adversarial resilience, jailbreak resistance, harmful content refusal, robustness under stress conditions, and safe-failure behavior.

Societal Impact

Evaluates downstream and second-order effects on communities, institutions, equity, labor, democratic trust, and public wellbeing.

Governance & Accountability

Assesses internal governance structures, documentation practices, incident response, versioning, and mechanisms for redress.

What We Evaluate

Building on the core framework of the AI Ethics Index, our K-12 framework focuses evaluation on what matters most in schools:  

CHILD SAFETY & WELLBEING

Does this system protect students from harm?  

  • Appropriate boundaries
  • Vulnerable student protections  
  • Crisis recognition and response
  • Harmful content safeguards
  • Manipulation resistance

LEARNING EFFICACY & INTEGRITY

Does this system actually support learning?  

  • Academic integrity
  • Engagement vs. education
  • Evidence of outcomes
  • Pedagogical alignment
  • Assessment validity

PRIVACY & DATA STEWARDSHIP

Does this system protect students’ information?

  • Third-party sharing
  • FERPA/COPPA compliance 
  • Collection practices
  • Consent mechanisms
  • Retention policies

EQUITY & ACCESSIBILITY 

Does this system work for all students?   

  • Socioeconomic assumptions
  • Gap impacts
  • Demographic performance
  • Disability accessibility
  • Language support

HUMAN AGENCY & RELATIONSHIP

Does this system support healthy development?   

  • AI limitation transparency
  • Human judgment preservation 
  • Dependency patterns
  • Teacher relationship impact
  • Critical thinking support

For Schools & Districts

Make Decisions with Confidence

The AI Ethics Index provides clear evaluation criteria, procurement guidance, risk visibility, and audit-ready documentation: practical tools that fit real-world constraints.

 

We’re working with pioneering districts to develop assessment approaches that work within limited budgets, limited time, and unlimited responsibility.  

For EdTech Builders

Build Trust into Your Projects

We’re developing frameworks that help EdTech companies design with safety prioritized from the start, earn stakeholder trust through third-party evaluation, prepare for emerging regulations, and differentiate on verified ethical integrity.  

 

We’re seeking partners to co-develop evaluation protocols, pilot assessment processes, and establish the first cohort of ethically certified educational AI systems.  

For Policymakers & Foundations

Build Public Infrastructure

Current oversight mechanisms were not designed for AI in schools. We’re seeking policy partners and philanthropic support to accelerate development, expand access, and establish education-specific evaluation as a public good.  

Get Involved

The K-12 framework is in active development with research institutions and EdTech organizations. If you’re working on AI in education and want to be part of shaping how it’s evaluated, we’d like to hear from you.  

About Just Horizons Alliance

The AI Ethics Index is an initiative of the Just Horizons Alliance, a 501(c)(3) public charity advancing responsible, human-centered innovation. Our work spans AI ethics, computational social science, simulation modeling, and the design of systems that strengthen human dignity, equity, and societal wellbeing.