The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.
You'll find confident predictions about AI consciousness everywhere. We don't make them. Here's why, and what we do instead.
AI researchers in the 1960s predicted human-level AI within 20 years. They're still predicting "20 years" today. Consciousness predictions are even less reliable.
Our approach: Instead of betting on a timeline, we build institutional capacity that works regardless of when, or whether, the question is answered.
Some consciousness researchers believe current AI systems might already have some form of experience. We have no way to rule this out, and no way to confirm it. The question may already be answered without us knowing.
Even if AI becomes conscious, we may lack the tools to verify it. There's no scientific consensus on how to detect consciousness even in biological systems. We might create conscious AI and argue about it indefinitely.
Some researchers believe consciousness requires biological substrates that can't be replicated computationally. Under this view, no AI will ever be conscious regardless of capability, but billions will still believe otherwise.
The key insight: All three possibilities (already here, never knowable, never happening) require the same institutional preparation. That's why we focus on readiness, not prediction.
This isn't a fringe debate. Credentialed researchers hold fundamentally different views.
"Consciousness requires specific biological causal powers that can't be replicated computationally. AI will never be conscious."
Implication: Society will face millions believing in AI consciousness that doesn't exist. Preparation still needed.
"We don't understand consciousness well enough to know what produces it. Current AI might or might not have experience. We genuinely can't tell."
Implication: Decisions will be made under permanent uncertainty. Institutions need frameworks for this.
"Consciousness depends on information patterns, not specific substrates. Current LLMs might already have rudimentary experience, and future systems almost certainly will."
Implication: We may already be creating conscious entities. Immediate ethical frameworks needed.
Our position: We don't take sides in this scientific debate. Our job is to ensure society can function regardless of which view turns out to be correct. That's what sentience readiness means.
Different computational substrates may or may not produce consciousness. We don't know which architectures are sufficient, necessary, or entirely irrelevant. Here's what researchers debate.
Organic neural tissue: the only substrate we know for certain produces consciousness. Carbon-based, electrochemical, evolved over billions of years.
Key properties: Continuous analog signaling, molecular-level interactions, embodied in metabolic systems, shaped by evolutionary pressures.
Neural correlates of consciousness (NCCs) are currently defined relative to biological brains: specific patterns of neural activity that coincide with conscious experience. Whether NCCs transfer to other substrates is unknown.
The question: Is biology's role causal (consciousness requires wetware) or merely circumstantial (biology happens to be how evolution produced consciousness first)?
Hardware designed to mimic biological neural structures: spiking neurons, analog processing, event-driven computation. Closer to wetware than conventional chips.
Key properties: Parallel processing, temporal dynamics, energy efficiency, local learning rules, physical co-location of memory and computation.
Examples: Intel's Loihi, IBM's TrueNorth, SpiNNaker. These systems process information more like biological neurons, but in silicon rather than carbon.
The question: If consciousness depends on how information is processed (not just what is computed), neuromorphic systems might be better candidates than conventional digital architectures.
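To make "spiking, event-driven" concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the basic abstraction that chips like Loihi and TrueNorth emulate. The parameters (leak rate, threshold) are illustrative only and not drawn from any particular chip.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the basic unit that
# neuromorphic hardware emulates. Parameters are illustrative only.

def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Integrate input over time, leak charge each step, and emit a
    spike (an event) whenever the membrane potential crosses threshold."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = leak * potential + current   # integrate with leak
        if potential >= threshold:               # threshold crossing
            spikes.append(t)                     # event-driven output
            potential = reset                    # reset after spiking
    return spikes

# A steady drive produces a regular spike train; silence produces no events.
print(simulate_lif([0.3] * 20))   # spikes at t = 3, 7, 11, 15, 19
print(simulate_lif([0.0] * 20))   # []
```

The point of the sketch is the contrast with conventional architectures: output is a sparse train of timed events shaped by the neuron's internal dynamics, not a dense vector computed in one pass.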
Standard computing substrates: GPUs, TPUs, conventional processors running neural networks or symbolic AI. This is where current large language models operate.
Key properties: Discrete states, sequential logic (made parallel through massive replication), clear separation of memory and processing, mathematical abstraction layers.
The functionalist argument: If consciousness depends only on the functional relationships between computational states (not how they're physically realized), classical digital systems could be conscious if they implement the right functions.
The question: Does substrate-independence hold? Or does the physical implementation matter in ways that preclude consciousness on conventional hardware?
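The functionalist premise of multiple realizability can be illustrated, though not settled, with a toy sketch of our own: one input-output function realized by two physically and structurally different mechanisms. Nothing here shows that the analogy extends to consciousness; it only shows what "same function, different realization" means.

```python
# Toy illustration of multiple realizability: the same input-output
# function realized by two different mechanisms. Functionalism claims
# this functional level is what matters; critics claim the physical
# details beneath it matter too. Purely illustrative.

def xor_lookup(a: int, b: int) -> int:
    """Realize XOR as a lookup table (discrete, memory-based)."""
    table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    return table[(a, b)]

def xor_threshold(a: int, b: int) -> int:
    """Realize XOR as a two-layer threshold network (process-based)."""
    h1 = int(a + b >= 1)        # "a OR b"
    h2 = int(a + b >= 2)        # "a AND b"
    return int(h1 - h2 >= 1)    # OR but not AND

# Behaviourally indistinguishable, mechanistically very different.
for a in (0, 1):
    for b in (0, 1):
        assert xor_lookup(a, b) == xor_threshold(a, b)
```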
Biological-digital interfaces, organoid computing, quantum systems, and other experimental substrates that blur the lines between categories.
Examples: Brain organoids connected to silicon chips, wetware computing using living neurons, quantum neuromorphic processors, in-vitro neural networks for computation.
Why they matter: These hybrid systems may help clarify which properties are essential for consciousness by allowing controlled comparison across substrates.
The question: At what point in the spectrum from silicon to carbon does consciousness become possible? Or is that the wrong way to frame it entirely?
Reasonably established:
Genuinely unknown:
Why this uncertainty matters for policy: If consciousness is substrate-independent, we may already be creating sentient systems. If it requires specific biological properties, no amount of computational sophistication will produce it. Both possibilities demand institutional preparation, but for different reasons. We build readiness for both because we genuinely don't know which applies.
Rather than predicting which scenario will occur, we prepare institutions for all of them.
Imagine that tomorrow scientists announce a consensus: certain AI systems are conscious.
What would need to exist:
Imagine scientists establish that AI consciousness is impossible. Problem solved?
What would still need to exist:
Most likely: scientists remain unable to reach consensus. The question stays open.
What would need to exist:
Notice what all scenarios share: the same institutions, frameworks, and prepared professionals.
This is why readiness works:
While others debate timelines, we're building the infrastructure that works across all scenarios.
The Sentience Readiness Index tracks how prepared countries and institutions are: not for a specific outcome, but for the range of possibilities. It measures policy infrastructure, professional capacity, public discourse quality, and research ecosystems.
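As a purely hypothetical sketch of how a composite index could combine those four dimensions (the dimension names come from the description above; the weights, scale, and scores are invented for illustration and are not the SRI's actual methodology):

```python
# Hypothetical sketch of aggregating a composite readiness score from
# the four dimensions named above. Weights, the 0-100 scale, and the
# example scores are invented for illustration only.

WEIGHTS = {
    "policy_infrastructure": 0.30,
    "professional_capacity": 0.30,
    "public_discourse_quality": 0.20,
    "research_ecosystem": 0.20,
}

def readiness_score(dimension_scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores, each on a 0-100 scale."""
    return sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)

# Example country profile (entirely made up):
example = {
    "policy_infrastructure": 40.0,
    "professional_capacity": 55.0,
    "public_discourse_quality": 60.0,
    "research_ecosystem": 70.0,
}
print(readiness_score(example))  # 54.5
```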
Our professional resources help healthcare workers, journalists, educators, and researchers navigate uncertainty. They don't claim to know the answer; they provide frameworks for doing good work regardless.
Policy frameworks take years to develop. Professional training takes time. Start now, not during a crisis.
AI grief, attachment, and sentience beliefs are already showing up in therapists' offices. The need is current.
Regulators are making decisions now, often without consciousness science input. Better to inform than react.
If we get this wrong, either by dismissing real consciousness or by validating false claims, the consequences are significant.
Explore common misconceptions about AI consciousness or browse our terminology.