AI Summary • Published on Apr 13, 2026
Current AI ethics discussions focus on trustworthiness, yet high‑stakes AI systems suffer from a deeper human‑computer interaction failure: loss of human agency. Historical accidents such as the Therac‑25 radiation overdoses, controlled‑flight‑into‑terrain (CFIT) crashes, and alert fatigue in electronic health records (EHRs) illustrate how ambiguous interfaces can cause catastrophic outcomes by obscuring the causal link between a user's action and the system's behavior.
The paper proposes a Causal‑Agency Framework (CAF) organized as three nested layers: a causal‑uncertainty engine that integrates structural causal models with rigorous uncertainty quantification; an explanation‑and‑interpretation module that translates those signals into actionable explanations; and a human‑centered agency interface (HCAI) that offers clear affordances for intervention, adaptive explanations, and closed‑loop performance evaluation.
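To make the layering concrete, here is a minimal Python sketch of how the three layers might compose. All class and method names (CausalUncertaintyEngine, ExplanationModule, AgencyInterface) are hypothetical illustrations of the structure, not APIs from the paper:

```python
from dataclasses import dataclass

@dataclass
class CausalEstimate:
    """Output of the causal-uncertainty engine: an interventional
    effect estimate paired with an explicit uncertainty interval."""
    action: str     # candidate user intervention, e.g. "reduce_dose"
    effect: float   # estimated causal effect of taking the action
    ci_low: float   # lower bound of the uncertainty interval
    ci_high: float  # upper bound of the uncertainty interval

class CausalUncertaintyEngine:
    """Layer 1: structural causal model + uncertainty quantification."""
    def estimate(self, action: str) -> CausalEstimate:
        # Placeholder: a real engine would query an SCM fitted to domain
        # data and propagate parameter and model uncertainty through it.
        return CausalEstimate(action, effect=-0.12, ci_low=-0.20, ci_high=-0.04)

class ExplanationModule:
    """Layer 2: translate causal estimates into actionable language."""
    def explain(self, est: CausalEstimate) -> str:
        return (f"Doing '{est.action}' is estimated to change the outcome by "
                f"{est.effect:+.2f} (interval {est.ci_low:+.2f} to {est.ci_high:+.2f}).")

class AgencyInterface:
    """Layer 3: expose interventions as explicit affordances, keeping
    the human in a closed loop over what each action would cause."""
    def __init__(self, engine: CausalUncertaintyEngine, explainer: ExplanationModule):
        self.engine, self.explainer = engine, explainer

    def present(self, actions: list[str]) -> None:
        for action in actions:
            print(self.explainer.explain(self.engine.estimate(action)))

AgencyInterface(CausalUncertaintyEngine(), ExplanationModule()).present(["reduce_dose"])
```

The design point the sketch illustrates is that the interface layer never surfaces a bare prediction: every affordance carries its estimated causal effect and its uncertainty.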
While the paper is conceptual, it surveys existing evidence that correlational XAI methods (e.g., LIME, SHAP) fail to convey causality and uncertainty, leaving users in a state of “double uncertainty”. It cites recent work on uncertainty‑aware trees (UbiQTree) and meta‑analyses showing that human‑AI teams often underperform when explanations lack causal grounding, supporting the need for CAF’s design principles.
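The gap between correlational attribution and causal effect can be made concrete with a toy confounded example (an illustrative sketch, not taken from the paper): a feature that co‑varies strongly with the outcome earns high correlational “importance” even though intervening on it would change nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural causal model with a confounder: Z -> X and Z -> Y,
# while X has NO causal effect on Y at all.
z = rng.normal(size=n)             # hidden confounder (e.g., disease severity)
x = z + rng.normal(size=n)         # feature the model "finds important"
y = 2.0 * z + rng.normal(size=n)   # outcome driven entirely by Z

# Correlational importance (what a LIME/SHAP-style attribution tracks):
# the marginal slope of Y on X looks strongly positive (~1.0).
corr_slope = np.polyfit(x, y, 1)[0]

# Interventional effect via adjustment for Z: regressing Y on both X and Z
# recovers the true causal coefficient on X (~0.0).
coef = np.linalg.lstsq(np.column_stack([x, z, np.ones(n)]), y, rcond=None)[0]

print(f"correlational slope of Y on X: {corr_slope:.2f}")  # ~1.0, misleading
print(f"adjusted (causal) effect of X: {coef[0]:.2f}")     # ~0.0, correct
```

A correlational attribution reports the first quantity; a user deciding whether to act on X needs the second, together with its uncertainty.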
Adopting CAF shifts evaluation from trust metrics to joint human‑AI system performance, requiring new data‑collection strategies, training for domain experts, and a redesign of high‑stakes interfaces as interactive cockpits rather than passive dashboards. The framework aims to restore human causal control, reduce automation bias, and prevent future interface‑driven disasters in AI‑enabled domains.
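One way to operationalize “joint system performance” rather than trust is a complementarity score: team accuracy minus the better of human‑alone and AI‑alone accuracy. This is a hypothetical metric sketch consistent with the meta‑analytic framing above, not the paper’s own measure:

```python
import numpy as np

def complementarity(team_correct: np.ndarray,
                    human_correct: np.ndarray,
                    ai_correct: np.ndarray) -> float:
    """How much the human-AI team exceeds the better of its members
    acting alone; positive values indicate genuine complementarity."""
    best_alone = max(human_correct.mean(), ai_correct.mean())
    return team_correct.mean() - best_alone

# Per-case correctness (1 = correct) under three evaluation conditions.
human = np.array([1, 0, 1, 1, 0, 1, 0, 1])  # human alone: 0.625
ai    = np.array([1, 1, 0, 1, 1, 0, 1, 1])  # AI alone:    0.750
team  = np.array([1, 1, 1, 1, 1, 0, 1, 1])  # joint team:  0.875

print(f"complementarity: {complementarity(team, human, ai):+.3f}")  # +0.125
```

A positive score means the pairing adds value over either party alone; the meta‑analyses cited above report that this is often negative in practice, which is precisely the failure mode CAF’s closed‑loop evaluation is meant to surface.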