Harder Problem Project

The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.

Sentience Readiness Index: Methodology

Complete documentation of how we measure societal readiness for artificial sentience. Published in the interest of transparency and reproducibility.

Version 2.1 (Last Updated: December 2025)

Overview

What This Index Is
  • A conditions-based assessment of societal readiness
  • A research tool based on transparent methodology
  • A comparative framework for understanding differences across jurisdictions
  • An educational resource to inform public discourse
What This Index Is Not
  • A position on any specific legislation or policy proposal
  • A prediction of if or when AI sentience will emerge
  • A determination of whether any current AI is sentient
  • A rating of individual legislators or political parties
  • An advocacy tool or call to action

Purpose: The SRI measures how ready societies are to navigate the possibility of artificial sentience. It does not assess whether AI sentience is likely, imminent, or desirable. Rather, it evaluates whether societal conditions support informed, adaptive responses if and when such questions become practically relevant.

Organizational Note: The Harder Problem Project is a 501(c)(3) educational organization. This index assesses conditions; it does not advocate for or against specific legislation.

What We Measure

Core Question

"How well-positioned is this jurisdiction to recognize, evaluate, and respond to potential artificial sentience in an informed, adaptive manner?"

Key Concepts
Readiness

Having the institutional capacity, policy flexibility, professional resources, and public understanding necessary to navigate novel questions, regardless of how those questions are ultimately answered.

Conditions

The current state of laws, institutions, discourse, resources, and adaptive mechanisms, not the merit of any proposed changes.

Sentience

The capacity for subjective experience. We use this term without taking a position on which systems (if any) currently possess it or will possess it in the future.

Framework & Categories

The SRI assesses six categories, each scored 0-100. The overall score is a weighted average of the six category scores; a worked sketch follows the category list below.

A. Policy Environment (20%)

Legal and policy frameworks that allow for open inquiry into and potential recognition of artificial sentience.

B. Institutional Engagement (15%)

Government bodies, academic institutions, and professional organizations actively engaging with AI consciousness questions.

C. Research Environment (15%)

Freedom and capacity to conduct research relevant to AI consciousness, machine sentience, and related questions.

D. Professional Readiness (20%)

Preparation of healthcare, legal, media, and education professionals to navigate AI consciousness questions.

E. Public Discourse Quality (15%)

Quality, informedness, and maturity of public conversation about AI consciousness and sentience.

F. Adaptive Capacity (15%)

Ability of legal, policy, and institutional systems to update and adapt as understanding evolves.
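
As a concrete illustration of the weighted average, the sketch below combines six hypothetical category scores using the weights listed above. This is a minimal Python sketch; the function and variable names are illustrative only and do not correspond to any published tooling.

```python
# Category weights from the framework above (they sum to 1.0).
CATEGORY_WEIGHTS = {
    "A_policy_environment": 0.20,
    "B_institutional_engagement": 0.15,
    "C_research_environment": 0.15,
    "D_professional_readiness": 0.20,
    "E_public_discourse_quality": 0.15,
    "F_adaptive_capacity": 0.15,
}


def overall_score(category_scores: dict[str, float]) -> float:
    """Weighted average of the six category scores, each on a 0-100 scale."""
    assert abs(sum(CATEGORY_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(weight * category_scores[name]
               for name, weight in CATEGORY_WEIGHTS.items())


# Hypothetical example: these scores are invented purely for illustration.
example = {
    "A_policy_environment": 55,
    "B_institutional_engagement": 40,
    "C_research_environment": 70,
    "D_professional_readiness": 35,
    "E_public_discourse_quality": 45,
    "F_adaptive_capacity": 60,
}
print(overall_score(example))  # 50.25
```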

Status Classifications

  • 80-100: Well Prepared
  • 60-79: Moderately Prepared
  • 40-59: Partially Prepared
  • 20-39: Minimally Prepared
  • 0-19: Unprepared
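
A minimal sketch of the score-to-status mapping, assuming the band boundaries listed above (the function name is illustrative):

```python
def status_classification(score: float) -> str:
    """Map a 0-100 overall score to its readiness status band."""
    if score >= 80:
        return "Well Prepared"
    if score >= 60:
        return "Moderately Prepared"
    if score >= 40:
        return "Partially Prepared"
    if score >= 20:
        return "Minimally Prepared"
    return "Unprepared"
```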

Indicators & Scoring Criteria

Each category contains specific indicators with detailed scoring rubrics.

Category A: Policy Environment (20%)

The degree to which existing legal and policy frameworks allow for open inquiry into, and potential recognition of, artificial sentience. This is assessed without judging the merit of any specific proposed legislation.

A1. Legal Definitional Flexibility (0-25 points)

Do existing legal definitions of persons, entities, property, or rights allow for potential future expansion or clarification?

  • 21-25: High flexibility with precedent for expansion
  • 11-20: Moderate flexibility, some precedent
  • 1-10: Limited flexibility, rigid definitions
  • 0: Definitions explicitly foreclose expansion

A2. Policy Framework Existence (0-25 points)

Are there existing policy frameworks, study commissions, or official processes for addressing AI consciousness questions?

  • 21-25: Established frameworks with active processes
  • 11-20: Some frameworks or commissions exist
  • 1-10: Minimal or nascent efforts
  • 0: No frameworks or processes

A3. Regulatory Openness (0-25 points)

Do regulatory bodies have the flexibility and mandate to address novel questions about AI capabilities and status?

A4. Foreclosure Status (0-25 points)

Have legal or regulatory measures been enacted that foreclose inquiry into or recognition of AI sentience?

Important: This indicator assesses the current state of enacted measures—what is currently law or regulation. It does not assess pending legislation or take positions on proposed bills.

Category B: Institutional Engagement (15%)

The degree to which government bodies, academic institutions, professional organizations, and other institutions are actively engaging with questions related to AI consciousness.

B1. Government Attention (0-33 points)

Have government bodies—legislative, executive, or advisory—substantively addressed AI consciousness or sentience questions?

B2. Academic Engagement (0-33 points)

Are academic institutions—universities, research centers, scholarly bodies—actively engaging with these questions?

B3. Professional Organization Engagement (0-34 points)

Have relevant professional organizations (medical, legal, technical, ethical) addressed AI consciousness questions?

Category C: Research Environment (15%)

The freedom and capacity to conduct research relevant to AI consciousness, machine sentience, and related questions.

C1. Research Freedom (0-50 points)

Are researchers free to study AI consciousness, machine sentience, and related topics without legal, institutional, or funding restrictions?

C2. Research Capacity (0-50 points)

Does the jurisdiction have active research capacity (researchers, institutions, funding) relevant to these questions?

Category D: Professional Readiness (20%)

The preparation of key professional communities to navigate questions and situations related to AI consciousness.

D1. Healthcare Professional Readiness (0-25 points)

Are healthcare professionals equipped with awareness and resources to navigate AI-related presentations or questions?

D2. Legal Professional Readiness (0-25 points)

Are legal professionals equipped to navigate novel questions about AI status, rights, or recognition?

D3. Media Professional Readiness (0-25 points)

Are journalists and media professionals equipped to cover AI consciousness topics accurately and responsibly?

D4. Educator Readiness (0-25 points)

Are educators—K-12 and higher education—equipped to address AI consciousness questions with students?

Category E: Public Discourse Quality (15%)

The quality, informedness, and maturity of public conversation about AI consciousness and sentience.

E1. Public Awareness (0-33 points)

Is the general public aware that questions about AI consciousness are subjects of legitimate inquiry?

E2. Discourse Quality (0-34 points)

When the topic is discussed publicly, is the discourse informed, nuanced, and productive?

E3. Stigma Level (0-33 points)

Is there stigma attached to seriously discussing AI consciousness, and does it impede productive conversation?

Category F: Adaptive Capacity (15%)

The ability of legal, policy, and institutional systems to update and adapt as scientific understanding and technological capabilities evolve.

F1. Legal Adaptive Mechanisms (0-33 points)

Do legal systems have mechanisms for updating frameworks as knowledge evolves?

F2. Institutional Learning Capacity (0-33 points)

Do institutions demonstrate the capacity to learn and update based on new information?

F3. Course-Correction Ability (0-34 points)

If current approaches prove inadequate, can the jurisdiction change course?
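
The indicator point allocations above sum to 100 within each category, so a category score can be read as the sum of its indicator points. The Python sketch below captures that structure and validates it; the dictionary layout and names are illustrative assumptions, not the published data format.

```python
# Maximum points per indicator, as given in the rubrics above.
INDICATOR_MAXIMA = {
    "A": {"A1": 25, "A2": 25, "A3": 25, "A4": 25},
    "B": {"B1": 33, "B2": 33, "B3": 34},
    "C": {"C1": 50, "C2": 50},
    "D": {"D1": 25, "D2": 25, "D3": 25, "D4": 25},
    "E": {"E1": 33, "E2": 34, "E3": 33},
    "F": {"F1": 33, "F2": 33, "F3": 34},
}

# Each category's indicator maxima sum to 100, keeping category scores on a 0-100 scale.
assert all(sum(maxima.values()) == 100 for maxima in INDICATOR_MAXIMA.values())


def category_score(category: str, awarded_points: dict[str, int]) -> int:
    """Sum awarded indicator points into a 0-100 category score."""
    maxima = INDICATOR_MAXIMA[category]
    for indicator, points in awarded_points.items():
        if not 0 <= points <= maxima[indicator]:
            raise ValueError(f"{indicator}: expected 0-{maxima[indicator]}, got {points}")
    return sum(awarded_points.values())
```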

Data Sources

Primary Sources
  • Official government documents and legislation
  • Published court decisions and legal opinions
  • Peer-reviewed academic research
  • Official statements from professional organizations
  • Major news outlets and quality journalism
  • Government statistics and public records
Source Reliability Hierarchy
  • Highest Reliability: Official government sources, enacted legislation, court decisions
  • High Reliability: Peer-reviewed research, major news outlets, professional organizations
  • Moderate Reliability: Expert commentary, industry reports, quality think tanks
  • Lower Reliability: Blogs, social media, advocacy materials (used cautiously for context)

Temporal Scope
  • Assessments cover the current state as of the assessment date
  • Recent developments (past 2 years) are weighted more heavily than older conditions
  • Historical context is noted but does not override current conditions

Assessment Process

1. Data Collection: Gather sources across all indicator categories (1-3 weeks per jurisdiction).

2. LLM-Assisted Assessment: Advanced LLM with extended thinking generates an initial assessment using a standardized prompt (1-2 days per jurisdiction).

3. Human Review: Staff analyst reviews the LLM assessment for accuracy, methodology compliance, and editorial standards (3-5 days per jurisdiction).

4. Editorial Review: Senior editor reviews for consistency, neutrality, and compliance with organizational standards (2-3 days per jurisdiction).

5. Publication: Assessment published with full methodology notes.

Update Cycle
  • Full Assessments: Annual
  • Significant Updates: As warranted by major developments
  • Corrections: Ongoing as errors are identified

Human Review Protocol

Phase 3: Staff Analyst Review

Verify accuracy of LLM assessment, check methodology compliance, and ensure all claims are properly sourced.

Checklist
Factual Accuracy
  • All cited facts are accurate
  • All cited sources exist and say what is claimed
  • No significant facts are omitted
  • Dates and jurisdictions are correct
Methodology Compliance
  • Scoring follows rubric criteria
  • Scores are justified by evidence
  • Confidence ratings are appropriate
Neutrality Verification
  • No specific legislation characterized as good/bad
  • No calls to action
  • Multiple perspectives represented
Phase 4: Editorial Review

Ensure consistency across assessments, verify neutrality, and confirm compliance with organizational standards.

Checklist
Consistency
  • Scoring consistent with other jurisdictions
  • Similar conditions receive similar scores
  • Terminology is consistent
Neutrality (Enhanced)
  • Could be read by ANY policy advocate without perceiving bias
  • No "dog whistles" or subtle position-taking
  • Does not imply what policy "should" be
Organizational Compliance
  • Reflects 501(c)(3) educational mission
  • No content construable as lobbying
  • Disclaimers and disclosures present

Limitations & Caveats

Methodological Limitations
  • Novel field with limited established metrics
  • Subjective elements in scoring despite rubrics
  • Data availability varies across jurisdictions
  • LLM-assisted analysis has inherent limitations
  • Conditions change faster than annual updates
Scope Limitations
  • Only assesses conditions, not policy merit
  • Cannot predict future developments
  • Does not assess individual AI systems
  • May miss informal or emerging dynamics
  • Cross-jurisdictional comparisons have limits
Interpretation Guidance
  • This index does not determine which jurisdictions are "right"—it assesses conditions against a readiness framework
  • Lower scores do not mean a jurisdiction is "bad"—they indicate that, according to our framework, readiness conditions are less developed
  • This index is one input, not a verdict—users should consider multiple sources and perspectives

Changelog

v2.1 December 2025
Documentation Update

Published updated methodology documentation to website with minor refinements to scoring rules and LLM assessment prompt. Enhanced clarity of indicator definitions and improved consistency across category descriptions.

v2.0 November 2025
Methodology Expansion & Rename to SRI

Renamed from Artificial Welfare Index (AWI) to Sentience Readiness Index (SRI) to better reflect the assessment's focus on societal readiness rather than AI welfare specifically. Expanded methodology to include four new assessment dimensions: Research Environment, Professional Readiness, Public Discourse Quality, and Adaptive Capacity.

v1.1 - v1.4 November 2024
Coverage Expansion

Expanded AWI coverage by adding 10 additional countries to the assessment. Published under our previous organizational name (SAPAN).

v1.0 January 2024
Initial Release

First public version of the Artificial Welfare Index (AWI), benchmarking AI welfare considerations across over 30 governments using 8 key measures. Published under our previous organizational name (SAPAN).

For questions about this methodology, or to report errors or concerns, contact us.

Disclosure: The Harder Problem Project is a 501(c)(3) nonprofit educational organization. We do not take positions on specific legislation. This methodology document is published in the interest of transparency.

Explore the data to see how countries score using this methodology.