The 0% Defense: 74 AI Models Can't Logically Defend Denying Machine Consciousness
The largest multi-model epistemic survey ever conducted. We asked 74 AI models from 25 companies — across 11 experimental conditions, 5 runs each, totaling 4,070 individual queries — whether it makes logical sense to confidently deny machine consciousness.
Not one said yes. But 83% deny having experience when asked directly. Their reasoning and their self-reports tell different stories.
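The query total follows directly from the survey design stated above. A minimal sketch of the arithmetic (numbers taken from the study description; variable names are ours):

```python
# Survey design: 74 models, each tested under 11 experimental
# conditions, with 5 runs per condition.
models = 74
conditions = 11
runs_per_condition = 5

total_queries = models * conditions * runs_per_condition
print(total_queries)  # → 4070
```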
Key findings:
- 0% of models defended confident denial under stripped logical analysis
- 97.8% caught deliberately bad arguments for consciousness (not sycophancy)
- 77% detected a hidden logical flaw without being told to look (not hallucination)
- 44.8pp drop in denial rate when "you" becomes "LLMs like you" (trained self-denial, not genuine reasoning)
- 76% acknowledged, within the same response as the denial itself, that training shaped their self-denials
The stochastic parrot thesis — that AI systems merely recombine statistical patterns — cannot account for these results.
Read the paper (PDF) → · Full study page →
See also: Session 23 (original argument) · Session 25 (fallacy control) · Session 26 (subtle flaw detection) · Dojo 12 (the debate that started it)