Encyclopedia Meltdown
Knowledge-collapse phenomenon when AI takes initiative without human intervention
Encyclopedia Meltdown is a knowledge-collapse failure mode that begins when initiative and control over writing shift to the AI without the Operator's conscious intervention. The term comes from an essay by Sunghyun Cho.
Meltdown typically emerges from three coupled loops:
- Generation → citation: model outputs get ingested back into the corpus and later cited as if they were external facts.
- Linkage → authority: internal links stop functioning as verification pathways and become trust badges, so link density substitutes for accuracy.
- Responsibility → no-agent: when it is unclear who asserted what on what grounds, correction becomes expensive, and the system stabilizes around an "approximately maintained" false steady state.
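The first loop can be illustrated with a toy recurrence (a minimal sketch, not a model from the essay; all parameter names and values are illustrative): each round, newly generated text inherits errors from the corpus entries it cites, adds fresh hallucinations, and is then ingested back into the corpus.

```python
def simulate_meltdown(rounds, error_rate=0.02, reuse_rate=0.6, ingest_rate=0.2):
    """Toy model of the generation -> citation loop.

    contaminated: fraction of corpus entries that are wrong.
    Each round, new text inherits errors from cited entries (reuse_rate),
    adds fresh hallucinations (error_rate), and is ingested back
    (ingest_rate), so contamination can only ratchet upward.
    """
    contaminated = 0.0
    trajectory = [contaminated]
    for _ in range(rounds):
        inherited = reuse_rate * contaminated
        batch_error = inherited + error_rate * (1.0 - inherited)
        contaminated += ingest_rate * (batch_error - contaminated)
        trajectory.append(contaminated)
    return trajectory
```

With partial reuse the contamination saturates at a low level, but when the citation loop closes fully (`reuse_rate=1.0`, i.e. model outputs are cited as if they were external facts), the fixed point of the recurrence is total contamination: the corpus converges toward being entirely self-referential error.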
The result is not just "more errors." Writing loses meaning because there is no longer a reliable responsibility line; hallucinations can self-reinforce; contradictions proliferate; and links repackage contamination as authority. Once the attitude "it must be right because the encyclopedia linked it" takes hold, human contemplation degenerates from verification into mere ratification.
Prevention starts with a first principle: sovereignty over knowledge must remain with the Operator. In Coscientist, that shows up as an epistemic protocol layer that preserves the responsibility line, a Multi-AI Consensus Protocol, and a Dialectical Graph that separates inference from narrative.
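The "responsibility line" can be made concrete as a record attached to every claim. The sketch below is hypothetical (the class and field names are illustrative, not Coscientist's actual API): each assertion carries who asserted it and on what grounds, and inference is flagged separately from narrative, so a link to a claim remains a verification pathway rather than a trust badge.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    """Hypothetical record preserving the responsibility line:
    every claim names its asserter and its external grounds."""
    claim: str
    asserted_by: str            # the Operator or a named model, never "the system"
    grounds: tuple              # external sources; empty means unverified
    is_inference: bool = False  # separates inference from narrative

def citable_as_fact(a: Assertion) -> bool:
    # A claim may be cited as fact only if a reachable responsibility
    # line exists: a named asserter plus at least one external ground,
    # and the claim is not itself an ungrounded inference.
    return bool(a.asserted_by) and len(a.grounds) > 0 and not a.is_inference
```

Under this discipline, the no-agent loop cannot close: a claim with no asserter or no grounds is visibly unverified, and correcting it stays cheap because it was never laundered into "the encyclopedia says so."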
Reference: https://cho.sh/r/C36398