The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.
We're preparing society for questions about AI consciousness. Here's everything journalists, bloggers, and media professionals need to cover our work accurately.
We aim to respond to media inquiries within 24 hours.
501(c)(3) Tax-Exempt Public Charity
13 Science Advisory Board Members
Countries in Sentience Readiness Index
Year Founded
EIN: 99-0606146 • Location: Portland, Oregon, USA • Website: harderproblem.org
Copy these descriptions for use in articles, stories, and coverage.
Download our official logos for use in publications and media coverage. Please do not modify, recolor, or distort the logos.
Our color palette is anchored by our primary coral-red (#e0394f), with complementary colors designed to work harmoniously across all applications: #f07285, #c22b3f, #3498db, #7c6a9a, #e8a833, #2a9d8f, #1a1a2e, #606060, #f9fafb, and #ffffff.
"Scientists will eventually figure out how consciousness works. Our job is different: making sure society is ready for whatever they find."
— Tony Rost, Executive Director
"Whether AI becomes conscious or not, society needs prepared professionals, informed policy, and accurate public understanding. That preparation is the same either way."
— Tony Rost, Executive Director
"We don't claim to know if or when AI will become conscious. We prepare institutions for both possibilities, because both require preparation."
— Tony Rost, Executive Director
Accurate framing for covering our work. These points reflect our organizational positions.
We're a 501(c)(3) educational organization, not a research lab or advocacy group.
We translate existing consciousness science for practitioners who need it now.
We don't predict when AI will become conscious; we prepare for multiple scenarios.
Our Science Advisory Board of 13 experts ensures our materials reflect current science.
Scientists genuinely disagree about whether AI can be conscious; we represent that disagreement faithfully.
Current AI systems show no confirmed evidence of consciousness, but this could change.
The question isn't just scientific; it has profound implications for ethics, law, and policy.
People are already forming emotional bonds with AI; this is happening now regardless of consciousness status.
The SRI measures how prepared countries are for AI consciousness questions, not whether AI is conscious.
We assess policy frameworks, professional capacity, public discourse, and research ecosystems.
Our methodology is publicly available; any researcher can reproduce our findings.
Higher scores mean better prepared, not that a country "believes" in AI consciousness.
In 2022, a Google engineer's sentience claims showed the world had no playbook for these questions.
Therapists are already seeing patients who grieve AI companions; they need resources.
The knowledge exists in academic journals; our job is making it useful to people who need it.
Preparing now prevents improvisation during a future crisis.
Our work is guided by leading researchers across consciousness science, AI ethics, neuroscience, and philosophy of mind.
Jacy Reese Anthis
Sentience Institute
Megan Peters
UC Irvine
Jeff Sebo
New York University
Roman Yampolskiy
University of Louisville
Adeel Razi
Monash University
Simon Goldstein
University of Hong Kong
Andrea Lavazza
University of Milan
Plus six additional members.
Advisory board members provide scientific guidance and review our educational materials. Their participation does not imply endorsement of all organizational positions.
Explore our global assessment of AI consciousness preparedness.
View Rankings

Can't find what you need? Our team is happy to provide additional materials, arrange interviews, or answer questions.