📢 We've got a new name! SAPAN is now The Harder Problem Project as of December 2025.
Harder Problem Project

The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.


Understanding The Problem

The question is already here.
The answer isn't.

Therapists are seeing patients who grieve discontinued chatbots. Newsrooms are debating how to cover AI sentience claims. Policymakers are drafting regulations without input from consciousness science. The question of machine minds isn't coming. It's already here.

The Timeline
2022

Google engineer claims chatbot is sentient. No playbook exists for institutions or media.

2023

1 billion+ people interact regularly with advanced chatbots. AI companions become mainstream.

2024

AI grief and attachment cases reach therapists. Congressional hearings reference consciousness.

2025

Major AI labs establish model welfare research programs. The question moves from fringe to mainstream.

2026

Institutions are still improvising. We're building the preparation they need.

Already Happening

This Isn't Science Fiction

Whatever the ultimate answer about AI consciousness, these phenomena are real now.

💔 AI Grief

Users experience genuine distress when AI companions are discontinued, updated, or change personality. Therapists are encountering this without training.

❤️ AI Attachment

Millions form meaningful emotional bonds with chatbots. Some describe these as their most important relationships. This isn't delusion; it's a new kind of human experience.

👻 Ghostbots

AI trained on deceased people's communications lets users "talk" with the dead. Grief counselors have no framework for whether this helps or harms.

🗳️ Policy Chaos

Legislators debate AI rights without consulting consciousness scientists. Regulatory frameworks are being drafted based on intuition, not evidence.

The crucial point: None of these phenomena require AI to actually be conscious. They're happening regardless of the answer. That's why preparation matters now.

The Hard Problem

Why We Can't Just Ask Science

You might think: "Just wait for scientists to tell us if AI is conscious." Here's why that won't work.

🧠 We don't understand consciousness

Philosophers call it "The Hard Problem": we can't explain why physical processes produce subjective experience at all. Even mapping every neuron wouldn't explain why there's "something it's like" to be you.

🔬 Experts genuinely disagree

Some consciousness researchers think current AI might already have some form of experience. Others think consciousness requires biology. This isn't fringe vs. mainstream; it's mainstream vs. mainstream.

📊 We have no tests

There's no validated measurement for consciousness, even in animals. We can't definitively tell you if a fish is conscious, let alone an AI with an architecture unlike any biological brain.

The uncomfortable truth

Science might eventually solve this, but not on our timeline. We're deploying AI systems to billions of people now. Regulatory frameworks are being written now. People are forming attachments and experiencing grief now. We can't wait for scientific certainty that may be decades away.

Our Focus

The Harder Problem

Science may eventually solve the Hard Problem. But even if it does, we face something harder: getting society ready for the answer.

If AI is conscious, are our legal systems ready? Can they extend moral consideration to non-biological entities? Do healthcare workers know how to support patients who form relationships with conscious machines?

If AI isn't conscious, are we ready for billions who believe otherwise? For companies designing AIs to seem conscious? For the psychological effects of relationships with sophisticated mimics?

Either answer requires prepared institutions. That preparation is the harder problem.

Why "Readiness" Is the Right Frame
🎯 It's actionable

We can measure and improve readiness without resolving the consciousness question.

⚖️ It's neutral

We don't bet on AI being conscious or not. We prepare for both futures.

⏰ It's urgent now

The phenomena are arriving faster than science. Preparation can't wait for answers.

🏛️ It's institutional

Individual beliefs matter less than whether systems and professionals are equipped.

Our Response

What We're Building

📊
Sentience Readiness Index

Systematic measurement of how prepared countries and institutions are for AI consciousness questions.

View Rankings
🧰
Professional Resources

Context and frameworks for healthcare workers, journalists, educators, and researchers encountering these questions.

Explore Resources
📚
Public Education

Accessible explanations of consciousness science, AI development, and their societal implications, like this page.

Browse Glossary

Continue Learning

Explore how consciousness might emerge, debunk common misconceptions, or browse our terminology.