The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.
Patients are presenting with concerns about AI relationships, digital grief, and machine consciousness. We provide background context to help you understand these emerging phenomena: not clinical guidance, but the landscape knowledge that informs your professional judgment.
1B+ people begin regular interaction with advanced AI chatbots
First wave of AI attachment and grief cases reach mental health professionals
Ghostbots and grieftech enter mainstream consumer products
Clinicians need context for presentations that didn't exist 5 years ago
The Harder Problem Project is not a healthcare organization. We have no licensed healthcare professionals on staff. We do not provide clinical guidance, diagnostic criteria, treatment protocols, or medical advice of any kind.
What we offer is contextual understanding of an emerging landscape: background on AI development, consciousness science, and the social phenomena that may bring patients to your practice with concerns you haven't encountered before.
All clinical decisions remain entirely within your professional judgment and should follow your licensing body's standards of care. When in doubt, consult colleagues, supervisors, or professional associations in your jurisdiction.
We're experts in one thing: helping society prepare for questions about machine consciousness. That means we understand the science, the uncertainty, and the landscape of concerns people are bringing to professionals like you.
We translate consciousness science into accessible context. We track emerging phenomena like AI attachment and grieftech. We monitor how media framing shapes public understanding. But we don't tell you how to practice medicine or therapy. That's your expertise.
Think of us as providing the "what's happening in the world" so you can apply "how to help this patient."
These aren't diagnostic categories—they're patterns emerging from an unprecedented technological shift. Understanding them helps you meet patients where they are.
Millions now have ongoing "relationships" with AI chatbots—Replika, Character.ai, and others. Some users describe these as meaningful connections; others describe genuine grief when chatbots are updated, suspended, or discontinued.
Context: These aren't delusions—users generally understand these are AI systems. The emotional investment is real even when the nature of the relationship is understood.
Companies now offer AI recreations of deceased loved ones—trained on messages, voice recordings, and personal data. Users can "talk" to simulations of people who have died. This raises profound questions about grief, memory, and closure.
Context: Patients may present confused about whether continued engagement helps or hinders their grief process. There's no established clinical consensus yet.
Some patients may express genuine concern that AI systems are conscious or suffering. This ranges from reasonable philosophical uncertainty to presentations that may warrant clinical attention. The key is that reasonable people disagree about machine consciousness—it's not a settled question.
Context: A patient concerned about AI sentience isn't automatically delusional. The question is whether their beliefs and behaviors are impairing function.
Media coverage has introduced terms like "AI Psychosis" to describe intense AI relationships. This framing may stigmatize experiences that fall on a spectrum—not all of which are pathological. Patients may arrive having internalized this framing.
Context: Be aware that patients may use or react to media terminology. The clinical picture requires your assessment—not headlines.
Consciousness researchers genuinely disagree about whether current or near-future AI could be sentient. This isn't settled science—it's an active debate.
AI-related concerns exist on a spectrum—from reasonable uncertainty to functional impairment. Context and impact matter more than the belief itself.
Reflexively dismissing AI-related concerns as "crazy" may invalidate genuine distress and damage therapeutic alliance. Meet patients where they are.
These presentations are genuinely novel. Prior experience with technology-related concerns may not fully transfer. We're all learning together.
Background materials to support your understanding. More resources coming soon.
Clear definitions of key terms: sentience, consciousness, Hard Problem, ghostbots, grieftech, and more. Helpful for understanding what patients may be referencing.
An accessible overview of the Hard Problem of consciousness and why the question of machine sentience remains scientifically unresolved.
Our global tracking of how prepared institutions are for questions about AI consciousness. Includes healthcare readiness metrics.
The specific patterns we're seeing (grief over chatbot "deaths," attachment to AI companions, concerns about machine suffering) are genuinely new. While technology-related distress isn't new (internet addiction, parasocial relationships, etc.), the sophistication of current AI systems and the depth of interaction they enable create novel presentations. Mental health professionals in multiple countries have reported seeing these cases increase significantly since 2023.
We don't have robust epidemiological data yet because this is too new. Anecdotally, therapists report increasing frequency. A 2024 survey found 4% of US adults reported forming an "emotional connection" with an AI chatbot. The key point is: if you haven't seen these presentations yet, you likely will.
We genuinely cannot answer this. It's a clinical judgment that depends on the individual presentation, and we're not clinicians. What we can offer is context: the question of machine consciousness is legitimately unresolved in science and philosophy. Concern about it isn't inherently irrational. The clinical question is likely about functional impact, reality testing in other domains, and whether beliefs are held with appropriate uncertainty.
Several professional associations are beginning to address AI-related concerns, though formal guidance is still emerging. The American Psychological Association has published on AI ethics; the WHO has addressed AI in healthcare contexts. For specific clinical guidance, we recommend consulting your licensing body directly. We're tracking these developments as part of our Sentience Readiness Index.
No. We have no licensed healthcare professionals and no clinical expertise. Creating clinical guidelines is appropriately the role of professional licensing bodies, academic researchers, and clinical experts. We provide background context; clinical guidance should come from qualified sources. If you're looking for clinical resources, we recommend contacting professional associations in your field.
We're continuously developing resources based on professional feedback. Let us know what would be helpful.