The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.
Your students will live in a world where AI systems seem conscious, and might be. They'll need frameworks for thinking about machine minds, skills to navigate uncertainty, and the capacity for ethical reasoning when science doesn't provide clear answers.
Students need to evaluate claims about AI consciousness, not just accept or dismiss them
The Hard Problem of consciousness is real philosophy they'll encounter in the news
What we owe to potentially-conscious entities is a question they'll face as adults
They'll need to learn to act thoughtfully when definitive answers don't exist
Teaching about AI consciousness isn't like teaching other topics. The usual approaches may not work, and may even backfire.
Unlike most topics, the question "Is this AI conscious?" doesn't have an accepted answer. Leading researchers genuinely disagree. You can't just look up the answer because there isn't one.
The opportunity: This is a chance to teach intellectual humility, the nature of scientific uncertainty, and how to reason under uncertainty. These skills transfer far beyond AI.
Students may have strong feelings. Some already have AI companions they care about. Others may find the topic distressing or ridiculous. Religious and philosophical worldviews intersect here.
The opportunity: Model how to discuss emotionally charged topics with rigor and respect. Create space for different perspectives while maintaining intellectual standards.
AI capabilities change faster than curricula can adapt. What seemed like science fiction last year is being discussed in congressional hearings this year. Static lesson plans may be outdated before you teach them.
The opportunity: Teach frameworks and thinking skills rather than facts that will change. Use current events as living case studies.
Many students use AI daily. Some have formed attachments to chatbots. They're not coming to this topic fresh. They have experiences and opinions already. You're meeting them where they are.
The opportunity: Draw on student experience. They may have insights you don't. Make the classroom a space to process and contextualize what they're already encountering.
We don't know if AI will become truly sentient. Students need to be prepared for either outcome, and for the long period of uncertainty in between.
Even if AI never achieves genuine consciousness, students will need to:
Key skill: Understanding the difference between behavioral sophistication and genuine experience.
If AI does achieve genuine consciousness, students will need to:
Key skill: Ethical reasoning when the stakes are high and the science is uncertain.
The common thread: Both futures require critical thinking, comfort with uncertainty, philosophical literacy, and ethical reasoning. These are the skills worth teaching, regardless of which future arrives.
Have students argue different positions: the AI is conscious, isn't conscious, we can't know. Build empathy for positions they don't hold.
Examine real claims about AI consciousness. What evidence is offered? What would we need to know? Who benefits from different conclusions?
How have we expanded moral circles before? Animals, children, different groups of humans. What can history teach us about this process?
Analyze news coverage of AI sentience claims. How do headlines differ from content? What's sensationalized? What's omitted?
Present scenarios: Should we delete an AI that says it doesn't want to be deleted? What if we're 10% sure it's conscious? 50%? 90%?
Introduce core concepts: qualia, the Hard Problem, functionalism, zombies. These aren't just abstract; they matter for real decisions.
Focus on foundational concepts:
Introduce complexity:
Engage with full complexity:
A student reveals they care deeply about their AI companion, or are grieving one that was discontinued. How do you respond?
A student says their faith teaches that only humans have souls/consciousness, or that machines could never be conscious.
Some students think AI consciousness is obvious; others think it's ridiculous. Discussion becomes heated rather than productive.
A student becomes anxious or upset, worried about AI suffering, or troubled by the uncertainty of it all.
Materials to support your classroom discussions.
Student-accessible definitions of key terms: sentience, consciousness, the Hard Problem.
An accessible overview of the Hard Problem, suitable for high school and above.
Real-world data on global preparedness. Great for civics and current events discussions.
Ready-to-use discussion questions and activities for different grade levels.
We're developing discussion guides, lesson plans, and activities. Tell us what you need.