Human Agency in AI
Principles and mechanisms for keeping humans in control of AI-assisted knowledge work
Human agency in AI is the cluster of principles and mechanisms that keeps the human Operator in control of knowledge work, even when AI performs substantial cognitive labor.
Core Principles
- Cognitive Sovereignty — human control over judgment and verification
- Cognitive Agency Preservation — AI strengthens human judgment rather than replacing it
- Operator — the human role as sovereign verifier
Mechanisms
- Responsibility Line — tracing who asserted what
- Verification — active checking, not passive acceptance
- Desirable Difficulty in Verification — making verification effortful to maintain engagement
Risks
- Fluency Trap — accepting smooth output without scrutiny
- Deskilling Through AI Delegation — letting skills atrophy through disuse
- AI-Induced Illusions of Competence — false mastery from AI assistance