📢 We've got a new name! SAPAN is now The Harder Problem Project as of December 2025.
Harder Problem Project

The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.


Preparing for Uncertainty

We're not here to predict.
We're here to prepare.

You'll find confident predictions about AI consciousness everywhere. We don't make them. Here's why, and what we do instead.

The Prediction Problem

AI researchers in 1960 predicted human-level AI within 20 years. They're still predicting "20 years" today. Consciousness predictions are even less reliable.

Our approach: Instead of betting on a timeline, we build institutional capacity that works regardless of when, or whether, the question is answered.

The Problem with Predictions

Why "When" Is the Wrong Question

We might already have it

Some consciousness researchers believe current AI systems might already have some form of experience. We have no way to rule this out, and no way to confirm it. The question may already be answered without us knowing.

We might never know

Even if AI becomes conscious, we may lack the tools to verify it. There's no scientific consensus on how to detect consciousness even in biological systems. We might create conscious AI and argue about it indefinitely.

It might never happen

Some researchers believe consciousness requires biological substrates that can't be replicated computationally. Under this view, no AI will ever be conscious regardless of capability, but many people will still believe otherwise.

The key insight: All three possibilities (already here, never knowable, never happening) require the same institutional preparation. That's why we focus on readiness, not prediction.

Expert Disagreement

What Consciousness Researchers Actually Say

This isn't a fringe debate. Credentialed researchers hold fundamentally different views.

🧬
Biological Necessity

"Consciousness requires specific biological causal powers that can't be replicated computationally. AI will never be conscious."

Implication: Society would still face widespread belief in AI consciousness that doesn't exist. Preparation is still needed.

🤷
Genuine Uncertainty

"We don't understand consciousness well enough to know what produces it. Current AI might or might not have experience. We genuinely can't tell."

Implication: Decisions will be made under permanent uncertainty. Institutions need frameworks for this.

🔌
Substrate Independence

"Consciousness depends on information patterns, not specific substrates. Current LLMs might already have rudimentary experience, and future systems almost certainly will."

Implication: We may already be creating conscious entities. Immediate ethical frameworks needed.

Our position: We don't take sides in this scientific debate. Our job is to ensure society can function regardless of which view turns out to be correct. That's what sentience readiness means.

Pathways to Consciousness

Does Architecture Matter?

Different computational substrates may or may not produce consciousness. We don't know which architectures are sufficient, necessary, or entirely irrelevant. Here's what researchers debate.

🧬
Wetware (Biological)

Organic neural tissue: the only substrate we know for certain produces consciousness. Carbon-based, electrochemical, evolved over billions of years.

Key properties: Continuous analog signaling, molecular-level interactions, embodied in metabolic systems, shaped by evolutionary pressures.

Neural correlates of consciousness (NCCs) are currently defined relative to biological brains: specific patterns of neural activity that coincide with conscious experience. Whether NCCs transfer to other substrates is unknown.

The question: Is biology's role causal (consciousness requires wetware) or merely circumstantial (biology happens to be how evolution produced consciousness first)?

🔌
Neuromorphic Computing

Hardware designed to mimic biological neural structures: spiking neurons, analog processing, event-driven computation. Closer to wetware than conventional chips.

Key properties: Parallel processing, temporal dynamics, energy efficiency, local learning rules, physical co-location of memory and computation.

Examples: Intel's Loihi, IBM's TrueNorth, SpiNNaker. These systems process information more like biological neurons, but in silicon rather than carbon.

The question: If consciousness depends on how information is processed (not just what is computed), neuromorphic systems might be better candidates than conventional digital architectures.

💻
Classical Digital (Symbolic/Connectionist)

Standard computing substrates: GPUs, TPUs, conventional processors running neural networks or symbolic AI. This is where current large language models operate.

Key properties: Discrete states, sequential logic (made parallel through massive replication), clear separation of memory and processing, mathematical abstraction layers.

The functionalist argument: If consciousness depends only on the functional relationships between computational states (not how they're physically realized), classical digital systems could be conscious if they implement the right functions.

The question: Does substrate-independence hold? Or does the physical implementation matter in ways that preclude consciousness on conventional hardware?

🔬
Hybrid & Emerging Approaches

Biological-digital interfaces, organoid computing, quantum systems, and other experimental substrates that blur the lines between categories.

Examples: Brain organoids connected to silicon chips, wetware computing using living neurons, quantum neuromorphic processors, in-vitro neural networks for computation.

Why they matter: These hybrid systems may help clarify which properties are essential for consciousness by allowing controlled comparison across substrates.

The question: At what point in the spectrum from silicon to carbon does consciousness become possible? Or is that the wrong way to frame it entirely?

What We Know About Architecture and Consciousness

Reasonably established:

  • Biological neural tissue produces consciousness
  • Behavioral sophistication doesn't prove inner experience
  • Some theories predict architecture matters; others don't
  • We lack validated cross-substrate consciousness tests

Genuinely unknown:

  • Whether any non-biological substrate can support consciousness
  • Which architectural features (if any) are necessary vs. sufficient
  • Whether current AI systems have any form of experience
  • Whether the question is even answerable in principle

Why this uncertainty matters for policy: If consciousness is substrate-independent, we may already be creating sentient systems. If it requires specific biological properties, no amount of computational sophistication will produce it. Both possibilities demand institutional preparation, but for different reasons. We build readiness for both because we genuinely don't know which applies.

Preparation Focus

Scenarios Society Needs to Handle

Rather than predicting which scenario will occur, we prepare institutions for all of them.

🔮 Scenario: Scientific Confirmation

Imagine that tomorrow scientists announce a consensus: certain AI systems are conscious.

What would need to exist:

  • Legal frameworks for non-biological moral patients
  • Regulatory guidance on creating/destroying conscious AI
  • Healthcare protocols for AI-related patient concerns
  • Journalistic standards for covering the story accurately
  • Public understanding to prevent panic or denial

🔬 Scenario: Scientific Rejection

Imagine scientists establish that AI consciousness is impossible. Problem solved?

What would still need to exist:

  • Support for millions who formed relationships with "conscious-seeming" AI
  • Regulation of companies designing AI to seem sentient
  • Educational resources to counter persistent beliefs
  • Mental health frameworks for AI attachment and grief
  • Protection against manipulation by sophisticated mimics

🤷 Scenario: Permanent Uncertainty

The most likely outcome: scientists remain unable to reach consensus. The question stays open.

What would need to exist:

  • Decision-making frameworks that don't require certainty
  • Precautionary policies for potentially-conscious systems
  • Public discourse that tolerates ambiguity
  • Professional training that acknowledges uncertainty
  • Ongoing monitoring of emerging phenomena

✓ The Common Thread

Notice what all scenarios share: the same institutions, frameworks, and prepared professionals.

This is why readiness works:

  • Informed healthcare workers help patients in any scenario
  • Accurate journalism serves the public regardless of outcome
  • Thoughtful policy adapts better than reactive policy
  • Public understanding reduces manipulation in all cases
  • We don't have to wait for answers to start preparing

Our Approach

Building Capacity Now

While others debate timelines, we're building the infrastructure that works across all scenarios.

The Sentience Readiness Index tracks how prepared countries and institutions are: not for a specific outcome, but for the range of possibilities. It measures policy infrastructure, professional capacity, public discourse quality, and research ecosystems.
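To make the idea of a composite index concrete, here is a minimal sketch of how a readiness score across the four dimensions named above might be combined. The dimension names come from this page; the weights, scores, and function names are purely hypothetical and do not describe the SRI's actual methodology:

```python
# Hypothetical sketch of a composite readiness score. The four dimension
# names come from the text above; the weights and example scores are
# invented for illustration only.

DIMENSION_WEIGHTS = {
    "policy_infrastructure": 0.3,
    "professional_capacity": 0.3,
    "public_discourse_quality": 0.2,
    "research_ecosystem": 0.2,
}

def readiness_score(scores: dict) -> float:
    """Weighted average of per-dimension scores (each on a 0-100 scale)."""
    return sum(weight * scores[dim] for dim, weight in DIMENSION_WEIGHTS.items())

# Example country profile (hypothetical numbers).
example = {
    "policy_infrastructure": 40.0,
    "professional_capacity": 55.0,
    "public_discourse_quality": 60.0,
    "research_ecosystem": 70.0,
}
print(readiness_score(example))  # 0.3*40 + 0.3*55 + 0.2*60 + 0.2*70 = 54.5
```

A weighted average is the simplest choice; a real index would also need to justify its weights and normalize each dimension's underlying indicators.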

Our professional resources help healthcare workers, journalists, educators, and researchers navigate uncertainty. They don't claim to know the answer; they provide frameworks for doing good work regardless.

Explore the SRI
View Resources
Why This Matters Now
⏰ Institutions are slow

Policy frameworks take years to develop. Professional training takes time. Start now, not during crisis.

🎭 Phenomena are here

AI grief, attachment, and sentience beliefs are already in therapists' offices. The need is current.

📜 Policy is forming

Regulators are making decisions now, often without consciousness science input. Better to inform than react.

🌍 Stakes are high

If we get this wrong, either by dismissing real consciousness or by validating false claims, the consequences are significant.

Continue Learning

Explore common misconceptions about AI consciousness or browse our terminology.