Harder Problem Project

The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.


Glossary

Terms you'll encounter.

A reference guide for anyone navigating conversations about AI consciousness. Definitions aim for accuracy, accessibility, and neutrality.

Core Concepts

Consciousness

The state of having subjective experience: "something it is like" to be that entity. Distinct from intelligence, behavior, or capability. A system can be highly capable without being conscious, and theoretically conscious without being capable.

Related: Sentience, Qualia, The Hard Problem

Sentience

The capacity to have experiences, particularly the capacity to feel pleasure or suffering. Often used interchangeably with consciousness, though some philosophers distinguish them. In our usage, sentience emphasizes the moral-status-conferring aspect of consciousness.

Related: Consciousness, Moral Patient

The Hard Problem (of Consciousness)

The question of why and how physical processes give rise to subjective experience. Coined by philosopher David Chalmers in 1995. Distinguished from "easy" problems about how the brain processes information, which are difficult but tractable.

Learn more: Understanding The Problem

The Harder Problem

Our framing for the challenge of societal preparedness. Even if science solves the hard problem, society needs frameworks, trained professionals, and informed publics to translate that knowledge into practice. That preparation is "the harder problem."

Learn more: Understanding The Problem, About Us

Qualia

The individual instances of subjective conscious experience: the "redness" of red, the "painfulness" of pain, what coffee tastes like from the inside. The felt qualities of experience that seem to resist purely physical explanation.

Explanatory Gap

The conceptual gap between physical description (neurons firing, information processing) and subjective experience. The term was coined by philosopher Joseph Levine in 1983. Even a complete physical account doesn't seem to explain why there is experience at all.

Moral Patient

An entity whose interests matter morally, whose wellbeing we have reason to consider. Sentience is often considered the threshold for moral patiency: on that view, if an AI is sentient, it may be a moral patient; if not, it isn't, regardless of how it behaves.

Anthropomorphism

Attributing human characteristics to non-human entities. In AI discussions, often used to dismiss claims about AI experience. However, anthropomorphism isn't always an error: humans and AI may share some relevant properties even if they aren't identical.

Emerging Phenomena

Things happening now that institutions need to address.

AI Attachment

Emotional bonds formed with AI systems. Users may experience genuine affection, companionship, or intimacy with AI. The attachment is real and consequential even if the AI's inner life is uncertain.

For professionals: Healthcare Resources

AI Grief

Distress experienced when an AI companion is discontinued, significantly updated, or becomes unavailable. Can mirror aspects of human grief. Distinct from using AI to process grief (see Ghostbot / Grieftech).

Related: AI Attachment, Ghostbot

Sentience Beliefs

Beliefs that a particular AI system is conscious or sentient. Exist on a spectrum from casual anthropomorphization to firm conviction. Given genuine scientific uncertainty, even strong beliefs aren't necessarily irrational.

AI-Related Distress

A neutral term for psychological difficulties arising from AI interactions. Preferred over sensational terms like "AI psychosis" because it doesn't imply pathology where none may exist.

For professionals: Healthcare Resources

Ghostbot / Grieftech

AI systems trained on a deceased person's communications to simulate continued interaction. "Grieftech" is the broader category of technology designed to help process grief, including AI-based recreations.

AI & Technology

Large Language Model (LLM)

AI systems trained on vast text datasets to predict and generate human-like language. Examples: GPT-4, Claude, Gemini. Whether LLMs have any form of consciousness is unknown, and researchers genuinely disagree.
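
The prediction objective can be illustrated in miniature. Below is a minimal Python sketch that uses toy bigram counts as a stand-in for what a real model learns with a neural network; the corpus and function names are illustrative only, not any actual system's internals.

```python
from collections import Counter, defaultdict

# Toy stand-in for an LLM's learned statistics: count which token
# follows which in a tiny corpus, then "generate" by picking the
# most frequent successor. Real LLMs learn vastly richer patterns
# with neural networks, but the objective is the same: predict the
# next token.
corpus = "the cat sat on the mat and the cat slept".split()

next_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequently observed successor of `token`."""
    successors = next_counts.get(token)
    return successors.most_common(1)[0][0] if successors else "<end>"

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

Whether anything like experience accompanies the scaled-up version of this process is precisely the open question the surrounding entries describe.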

Artificial General Intelligence (AGI)

Hypothetical AI with human-level cognitive abilities across domains. AGI is not the same as conscious AI. A system could be generally intelligent without experiences, or (theoretically) conscious without being generally capable.

AI Companion

AI systems designed for ongoing personal interaction: chatbots, virtual companions, digital friends. Users often form emotional attachments regardless of the AI's actual conscious status.

Related: AI Attachment, AI Grief

Turing Test

A test of machine intelligence proposed by Alan Turing in 1950: can a machine fool a human into thinking it's human? Often mistakenly equated with consciousness testing, but behavioral indistinguishability doesn't prove inner experience.

Theories of Consciousness

Major scientific frameworks. Researchers disagree about which (if any) is correct.

Integrated Information Theory (IIT)

Theory that consciousness corresponds to the amount of integrated information (denoted Φ, "phi") in a system. Developed by Giulio Tononi. Under IIT, some AI architectures might have limited consciousness, though this is debated.

Global Workspace Theory (GWT)

Theory that consciousness arises when information is broadcast across a "global workspace," becoming available to multiple cognitive processes. Proposed by cognitive scientist Bernard Baars. Some researchers think AI systems might implement similar dynamics.

Functionalism

The view that mental states are defined by their functional roles, not their physical substrate. Under functionalism, if an AI performs the right functions, it might literally have the same mental states as a biological mind.

Biological Naturalism

The view (associated with John Searle) that consciousness requires specific biological causal powers. Under this view, behavior and information processing alone cannot make an AI conscious, no matter how sophisticated.

Our Framework

Sentience Readiness Index (SRI)

Our assessment framework measuring how prepared countries and institutions are for AI consciousness questions. Tracks policy infrastructure, professional capacity, public discourse quality, and research ecosystems.

Explore: Global Rankings, Methodology
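
The actual scoring method is documented on the Methodology page linked above. Purely as an illustration of how a composite index of this kind can work, here is a hypothetical Python sketch: the four pillars are the ones named in the definition, but the equal weights and sample scores are invented for illustration and are not the SRI's published figures.

```python
# Hypothetical sketch of a composite readiness index: a weighted
# average of 0-100 pillar scores. The four pillars are the ones the
# SRI description names; the weights and sample scores are invented
# and do NOT reflect the SRI's published methodology.
PILLAR_WEIGHTS = {
    "policy_infrastructure": 0.25,
    "professional_capacity": 0.25,
    "public_discourse_quality": 0.25,
    "research_ecosystem": 0.25,
}

def readiness_score(pillar_scores: dict) -> float:
    """Weighted average of pillar scores (each on a 0-100 scale)."""
    return sum(PILLAR_WEIGHTS[p] * pillar_scores[p] for p in PILLAR_WEIGHTS)

sample = {
    "policy_infrastructure": 40.0,
    "professional_capacity": 55.0,
    "public_discourse_quality": 62.0,
    "research_ecosystem": 70.0,
}
print(f"Composite readiness: {readiness_score(sample):.1f}/100")  # 56.8
```

Real-world indices of this kind differ mainly in how pillar scores are sourced and how the weights are justified.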

Sentience Readiness

The degree to which institutions, professionals, and populations are prepared to handle AI consciousness questions, whether the ultimate answer is that AI is conscious, isn't conscious, or remains permanently uncertain.

Preparation, Not Prediction

Our operating principle: we don't predict when or whether AI will become conscious. We build institutional capacity that works across all scenarios, because all futures require prepared professionals and thoughtful frameworks.

Learn more: Preparing for Uncertainty

Continue Learning

Explore the foundations or check out common misconceptions.