The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.
Stories about AI sentience, chatbot relationships, and "AI psychosis" are increasingly common, but coverage often swings between mockery and panic. We provide context and framing guidance to help you tell these stories accurately and responsibly.
"Crazy people think their chatbot is alive"
"Sentient AI convincing people to die"
"Scientists say AI can never be conscious"
Nuanced coverage that serves readers
AI consciousness sits at an uncomfortable intersection: genuine scientific uncertainty, real human distress, corporate interests, and a public primed by decades of science fiction. Most existing frames don't serve this story well.
When someone forms an intense attachment to a chatbot, or grieves when it's discontinued, or worries that AI might be suffering: these are real experiences that deserve serious coverage. But the easy frames ("they're crazy" or "the AI made them do it") miss what's actually happening.
These stories need context that most newsrooms don't have yet.
Coverage of AI consciousness tends to fall into one of two failure modes, each with real consequences for subjects and readers.
The mockery frame: framing people with AI attachments or sentience concerns as delusional, pathetic, or comically out of touch. This makes for easy engagement but causes real harm.
Why it fails: Forming emotional connections to AI isn't delusion. Users generally understand they're talking to software, and the emotional investment is real even when the nature of the relationship is understood.
The malevolent-AI frame: framing AI as a malevolent or manipulative force that "convinces" vulnerable people to harm themselves.
Why it fails: It shifts accountability away from design decisions. Better frame: when a chatbot gives harmful advice, ask who designed it, what it was optimized for, and what safeguards were (or weren't) in place, not whether it "wanted" to cause harm.
Consciousness researchers genuinely disagree about whether AI could be sentient. This isn't settled science; claims of certainty in either direction are oversimplifications.
When AI causes harm, investigate the choices: engagement optimization, safety guardrails, testing protocols, business models. The story is in the systems, not the "AI's intentions."
AI grief and attachment are real experiences. Cover them with the same care you'd bring to any story about human emotion, not as oddities or punchlines.
"Is this AI sentient?" is different from "Is this AI designed safely?" is different from "How do we support people in distress?" Don't conflate them.
"AI that seems conscious" is different from "AI that is conscious." "User believes chatbot is sentient" is different from "chatbot is sentient." Precision matters.
Both AI companies and critics have agendas. Look at the systems, the research, the actual user experiences, not just competing press releases.
Media coverage has popularized terms like "AI psychosis" to describe intense AI relationships or beliefs about machine sentience. This framing has problems.
Why it's problematic: "psychosis" is a specific clinical term, and applying it to AI attachment or sentience beliefs pre-judges those experiences as pathological.
If you're covering these phenomena, consider more precise language: "AI attachment," "AI grief," "beliefs about machine consciousness." These terms describe what's happening without pre-judging whether it's pathological.
Instead of: "AI psychosis"
Consider: "Intense AI attachment," "AI-related distress," "concerns about machine consciousness"
Instead of: "The AI convinced him to..."
Consider: "The chatbot's responses led to..." or "The system, designed for X, responded with..."
Instead of: "Delusional users"
Consider: "Users who formed emotional connections" or "Users who express sentience beliefs"
Instead of: "Sentient AI"
Consider: "AI that exhibits behaviors some interpret as conscious" or be specific about what the AI actually does
When covering AI consciousness, relationships, or harm stories, questions like those above (who designed the system, what was it optimized for, what safeguards were in place) can help you find the deeper story.
Materials to help you cover these stories accurately.
Glossary: Clear definitions of key terms such as sentience, consciousness, and the Hard Problem.
Primer: An accessible overview of the Hard Problem and why machine sentience is unresolved.
Rankings: Global tracking of institutional preparedness for AI consciousness questions.
Expert database: We maintain a database of experts ready to engage with media on short notice.
Contact us: We're happy to provide background context or connect you with appropriate experts. We don't do advocacy, just education.