All Tags
Browse through all available tags to find articles on topics that interest you.
Showing 3 results for this tag.
A Logic of Inability
This paper introduces a formal logic of inability as a first-class concept, extending Coalition Logic to systematically study what multi-agent coalitions cannot achieve. It establishes the modal properties of this explicit inability operator, highlighting its significance for reasoning about constraints and safety in AI systems.
Human Agency, Causality, and the Human Computer Interface in High-Stakes Artificial Intelligence
The paper argues that high-stakes AI systems threaten human agency and proposes a Causal-Agency Framework that embeds causal modeling, uncertainty quantification, and actionable interfaces to restore user control.
From Accuracy to Readiness: Metrics and Benchmarks for Human-AI Decision-Making
This paper introduces a measurement framework for evaluating human-AI decision-making, shifting the focus from model accuracy alone to the readiness of human-AI teams for safe and effective collaboration. It proposes a taxonomy of metrics and connects them to the Understand–Control–Improve lifecycle to assess calibration, error recovery, and governance in real-world deployments.