📢 We've got a new name! SAPAN is now The Harder Problem Project as of December 2025.
Harder Problem Project

The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.


Country Profile

🇪🇺 European Union

Overall Score: 55/100 (Partial Readiness)
Trend: → Stable
Last Updated: Dec 2025

Executive Summary

Strengths: Regulatory Infrastructure, Philosophical Foundations

The European Union demonstrates partial readiness for navigating AI sentience questions, with significant strengths in research freedom and institutional capacity but notable gaps in professional preparation and policy frameworks. The comprehensive AI Act provides regulatory infrastructure but does not address consciousness or sentience questions, focusing instead on risk-based safety and human rights protections.

Strong academic traditions in philosophy of mind and consciousness science provide intellectual foundations, with major universities hosting relevant research programs. However, systematic engagement across professional communities—healthcare, legal, education, media—remains limited. Public discourse shows moderate sophistication but consciousness questions remain peripheral to mainstream AI discussions.

The EU’s adaptive capacity is moderate, with established mechanisms for policy updates but slow consensus-based processes. The multi-level governance structure (EU institutions plus 27 member states) creates both opportunities for diverse approaches and challenges for coordinated responses.

Key Findings

  • Research freedom (84/100) is the strongest category, with constitutional protections and no restrictions on AI consciousness inquiry
  • Professional readiness (32/100) is the weakest area, with minimal preparation across healthcare, legal, education, and media sectors
  • AI Act provides comprehensive regulatory framework but does not address AI consciousness or sentience questions
  • Strong philosophical traditions and academic capacity exist but are not yet systematically focused on AI sentience
  • Adaptive capacity (66/100) is moderate, with established update mechanisms but slow consensus-based processes
  • Public discourse (53/100) shows moderate sophistication but consciousness questions remain niche topics

Analysis

Category Breakdown

Detailed scores across the six dimensions of preparedness.

Policy Environment: 61/100
Notable: AI Act regulates AI comprehensively but does not address consciousness or sentience questions.

Institutional Engagement: 40/100
Notable: European Parliament passed AI ethics resolutions but focused on human rights, not AI sentience.

Research Environment: 77/100
Notable: Constitutional protections for academic freedom across most EU member states enable open inquiry.

Professional Readiness: 32/100
Notable: No evidence of systematic professional training on AI consciousness across any major profession.

Public Discourse Quality: 53/100
Notable: European philosophical traditions provide foundation for nuanced discourse when consciousness is discussed.

Adaptive Capacity: 66/100
Notable: AI Act includes review mechanisms and provisions for regulatory adaptation as technology evolves.

Comparison to Global Leaders

How does the European Union compare to top-ranked countries in each category?

Category                   🇪🇺 European Union   🇳🇴 Norway   🇳🇱 Netherlands   Global Avg
Policy Environment                        61           63               55           40
Institutional Engagement                  40           52               38           23
Research Environment                   77 🥇           73               73           52
Professional Readiness                    32           44               30           19
Public Discourse Quality                  53           58               48           29
Adaptive Capacity                         66           75               67           49

Organizations

Key Research Institutions

Organizations contributing to the European Union's research environment.

Consciousness, Cognition & Computation Group (CO3), Université Libre de Bruxelles

Brussels, Belgium

Led by Prof. Axel Cleeremans (ERC Advanced Grant recipient), CO3 conducts foundational research on consciousness mechanisms with explicit work on AI consciousness implications and the urgent ethical challenges of potentially creating conscious AI systems.


Leverhulme Centre for the Future of Intelligence, University of Cambridge

Cambridge, England, UK

Interdisciplinary research centre with explicit research programmes on consciousness in AI, algorithmic transparency, and the nature of intelligence, addressing both short-term and long-term implications of AI for consciousness and moral status.


Centre for the Study of Existential Risk (CSER), University of Cambridge

Cambridge, England, UK

Founded by Huw Price, Martin Rees, and Jaan Tallinn to study existential risks from AI, with pioneering work on AI safety that explicitly addresses questions of consciousness, moral patienthood, and the ethical implications of advanced AI systems.


Oxford Uehiro Centre for Practical Ethics, University of Oxford

Oxford, England, UK

Conducts applied ethics research on AI and digital ethics including work on moral status, neuroethics of consciousness, and the ethical implications of AI systems with potential moral patienthood.


Human Brain Project (HBP) / EBRAINS Infrastructure

Multiple EU locations, EU-wide consortium

€600 million EU flagship project (2013-2023) with a dedicated research work package on 'Networks underlying brain cognition and consciousness,' developing computational models to understand consciousness mechanisms applicable to substrate-independent minds.


AlgorithmWatch

Berlin, Germany

Non-profit research and advocacy organization monitoring algorithmic decision-making and AI ethics, with work on AI rights, human rights implications, and ethical governance frameworks relevant to AI moral status and welfare considerations.


Future of Life Institute - EU Policy Team

Brussels, Belgium

Leading AI safety organization with EU policy presence working on AI Act implementation; while primarily focused on existential safety, their work increasingly intersects with questions of AI consciousness and moral patienthood in advanced systems.


Behind the Scores

Understanding the Data

How do you measure preparedness for something that hasn't happened yet? The Sentience Readiness Index evaluates nations across six carefully constructed dimensions: from policy frameworks and institutional engagement to research capacity and public discourse quality.

📊
Six Dimensions

Each score synthesizes assessments across policy, institutions, research, professions, discourse, and adaptive capacity.
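As an illustration of how the six dimension scores roll up into a single figure: assuming an unweighted mean (the published methodology may weight dimensions differently, so treat this as a sketch, not the official formula), the EU's category scores reproduce its headline score of 55.

```python
# Sketch only: assumes an unweighted mean across the six dimensions.
# The actual Sentience Readiness Index weighting is not specified here.

EU_SCORES = {
    "Policy Environment": 61,
    "Institutional Engagement": 40,
    "Research Environment": 77,
    "Professional Readiness": 32,
    "Public Discourse Quality": 53,
    "Adaptive Capacity": 66,
}

def overall(scores: dict) -> int:
    """Round the unweighted mean of dimension scores to the nearest integer."""
    return round(sum(scores.values()) / len(scores))

print(overall(EU_SCORES))  # 55, matching the EU's headline score
```

Under this assumption the arithmetic is 329 / 6 ≈ 54.8, which rounds to the published overall score of 55.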

🔬
Evidence-Based

Assessments draw from legislation, academic literature, news archives, and expert consultations.

👥
Human-Reviewed

Every assessment undergoes human verification against documented evidence before publication.

Explore More

Compare the European Union to other countries or learn about our assessment methodology.
