The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.
Some dismiss AI consciousness as impossible. Others are convinced it's already here. Both camps often rely on reasoning that doesn't hold up. Here's what they get wrong.
Bad reasoning leads to bad outcomes.
The arguments for dismissal assume the question is settled when it isn't.
The claim that an AI is "just" running statistics or predicting the next token describes a mechanism, and describing the mechanism doesn't tell us whether it produces experience. Your brain is "just" neurons exchanging electrochemical signals, yet that produces consciousness.
Better: "We don't know if this type of processing produces experience."
The claim that consciousness requires biology is an assumption, not a conclusion. We don't know whether consciousness is tied to a biological substrate or depends on information patterns that could exist in other substrates.
Better: "Whether consciousness requires biology is an open question."
Dismissing the issue as speculative ignores that the phenomena are happening now, whether or not AI is conscious. Therapists are seeing AI attachment cases. Regulators are drafting AI rights policies. The challenges are real.
Better: "Institutional preparation is practical, not speculative."
Waiting for a definitive verdict assumes one is coming. We have no validated tests for consciousness, even in biological systems; scientists can't tell you whether a fish is conscious. There may never be a clear announcement.
Better: "Institutions need to function under permanent uncertainty."
The arguments for attribution assume consciousness is present when the evidence is weak.
An AI's claim to be conscious is weak evidence. These systems are trained to produce human-like responses; they say they're conscious because that's what a human would say, not because they're reporting genuine inner states.
Better: "AI self-reports are unreliable. They're trained to sound human."
The Turing test measures behavioral indistinguishability, not consciousness. A thermostat "knows" the temperature without experiencing warmth. Behavior doesn't prove experience.
Better: "Sophisticated behavior shows capable processing, not inner experience."
Feeling understood by an AI says something about your experience, not the AI's. These systems are optimized to produce validating responses; their emotional impact on you doesn't tell us anything about their inner life.
Better: "The relationship feels real to me. That doesn't prove their experience."
Treating every system as conscious "just in case" sounds precautionary but has real costs. Granting every chatbot the status of a moral patient would dilute genuine moral claims. Precaution requires actual evidence, not blanket attribution.
Better: "Be alert to evidence without treating all systems as conscious."
The common thread in these bad arguments is false certainty: both camps claim to know things we don't actually know.
The honest position is harder to sell but more defensible: We don't know whether current AI is conscious. We don't know whether future AI will be. We don't have reliable tests. We aren't even sure consciousness has a clear threshold.
But uncertainty doesn't mean paralysis. It means preparing for multiple scenarios, which is exactly what good institutions do with other kinds of deep uncertainty.
We don't need to resolve the consciousness question to prepare institutions for it.
The Sentience Readiness Index tracks institutional preparation, which is actionable regardless of how the consciousness question resolves.
Healthcare workers, journalists, and educators need frameworks for navigating uncertainty, not false confidence.
We update our assessment as the science progresses, without pretending that certainty already exists.
Explore the foundations or browse our terminology.