Cognitive Agency Preservation
AI safety framing that keeps humans in control of judgment
Cognitive agency preservation is the principle that AI should strengthen human judgment, not replace it. A system preserves agency when it keeps the user in the loop as an active critic: it shows its grounds, surfaces uncertainty, and makes disagreement cheap.
In Coscientist, this implies traceable claims, explicit rebuttal paths, and a human veto. It is tightly linked to cognitive sovereignty: the right and obligation to own verification.
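The properties above — traceable claims, surfaced uncertainty, a cheap rebuttal path, and a human veto — can be sketched as a small data structure plus a review gate. This is a minimal illustration only; the `Claim` and `review` names are hypothetical and not Coscientist's actual API:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Claim:
    """A single assertion together with its grounds and stated uncertainty."""
    text: str
    grounds: list[str]           # traceable sources the claim rests on
    confidence: float            # surfaced uncertainty, in [0.0, 1.0]
    rebuttals: list[str] = field(default_factory=list)  # recorded disagreements


def review(claim: Claim, approve: bool, rebuttal: Optional[str] = None) -> bool:
    """Human veto gate: no claim is accepted without an explicit decision.

    A rebuttal is recorded on the claim rather than discarded, keeping
    disagreement cheap and visible to later readers.
    """
    if rebuttal:
        claim.rebuttals.append(rebuttal)
    return approve and not claim.rebuttals
```

The point of the sketch is that acceptance is a human decision applied to inspectable grounds, not a property the system grants itself.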