AI Summary • Published on Apr 29, 2026
Traditional Coalition Logic focuses primarily on what groups of agents, or coalitions, *can* achieve. The converse concept of *inability* (what coalitions *cannot* achieve) has received comparatively little explicit attention, often being treated merely as the negation of ability. This asymmetry persists despite the pervasive importance of inability in philosophy, in ethics (e.g., "ought implies can"), and particularly in artificial intelligence and safety-critical multi-agent systems, where it is crucial to specify not just what agents are instructed not to do, but what they are fundamentally unable to bring about. This paper addresses the gap by proposing a systematic logical investigation of inability as a distinct modal concept.
The authors develop a conservative extension of classical Coalition Logic by introducing an explicit inability operator, written 𝖨𝖺𝖻_C φ (read as "coalition C is unable to ensure φ"). The operator is formally defined as the negation of coalition ability: 𝖨𝖺𝖻_C φ := ¬⟨C⟩φ. The resulting logical system, named 𝖢𝖫𝖨𝖺𝖻, is interpreted over a one-step concurrent-game semantics. The paper then formally proves the soundness, completeness, and conservativity of 𝖢𝖫𝖨𝖺𝖻 with respect to standard Coalition Logic. Conservativity shows that the new operator does not increase expressive power; its explicit introduction instead reorients the logical perspective toward inability and enables a systematic study of its structural properties.
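The one-step concurrent-game clause behind this definition can be made concrete by brute force: a coalition can ensure a goal iff it has some joint choice of actions that secures the goal against every counter-choice by the remaining agents, and inability is simply the negation of that. Below is a minimal sketch in a toy two-agent game; all names (agents, actions, the outcome function) are illustrative, not notation from the paper.

```python
from itertools import product

# Toy one-step concurrent game: each agent picks an action simultaneously,
# and the joint action determines a single outcome state.
AGENTS = ("a", "b")
ACTIONS = {"a": ("0", "1"), "b": ("0", "1")}

def outcome(joint):
    # The outcome state is just the profile of chosen actions, in agent order.
    return tuple(joint[ag] for ag in AGENTS)

def can(coalition, goal):
    # <C>goal: some joint choice of C makes `goal` true whatever the others do.
    coalition = tuple(coalition)
    others = tuple(ag for ag in AGENTS if ag not in coalition)
    for mine in product(*(ACTIONS[ag] for ag in coalition)):
        choice = dict(zip(coalition, mine))
        if all(
            goal(outcome({**choice, **dict(zip(others, theirs))}))
            for theirs in product(*(ACTIONS[ag] for ag in others))
        ):
            return True
    return False

def unable(coalition, goal):
    # Iab_C goal := not <C>goal -- inability as the negation of ability.
    return not can(coalition, goal)

both_one = lambda s: s == ("1", "1")
print(can(("a", "b"), both_one))  # → True: the grand coalition forces ("1","1")
print(unable(("a",), both_one))   # → True: agent a alone cannot, b may pick "0"
```

Note that the empty coalition gets the right degenerate case for free: `product()` over no agents yields one empty choice, so `can((), goal)` holds exactly when the goal is true at every outcome.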
The study reveals a coherent and distinctive modal profile for the inability operator. Several structural laws are established. Inability is anti-monotonic with respect to coalition inclusion: if a larger coalition cannot ensure a goal, no subcoalition of it can. It is contravariant with respect to goal strength: a coalition that cannot ensure a weaker goal certainly cannot ensure a stronger one. The interaction with Boolean connectives is asymmetric, with the distribution principles for conjunction and disjunction valid in only one direction. Critically, inability does not generally satisfy superadditivity: the inability of two disjoint coalitions to achieve their separate goals does not imply that their union is unable to achieve the conjunction of those goals. Furthermore, the paper formally distinguishes the inability of a coalition C to ensure φ from the ability of its complementary coalition C̄ to ensure ¬φ, showing that these are not equivalent in general. Finally, the boundary cases of the empty and grand coalitions yield exact dualities, linking grand-coalition inability to systemic impossibility and empty-coalition inability to strategic contingency.
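Writing the inability operator as $\mathsf{Iab}_C\,\varphi := \neg\langle C\rangle\varphi$, the laws above can be restated schematically. This is a paraphrase of the summary, not the paper's own axiom list; in particular, the boundary dualities are phrased via the standard Coalition Logic readings of $\langle\emptyset\rangle$ (inevitability) and $\langle N\rangle$ (possibility) as renderings of "systemic impossibility" and "strategic contingency".

```latex
\[
\begin{aligned}
& C \subseteq D \ \Longrightarrow\ \vdash \mathsf{Iab}_D\,\varphi \to \mathsf{Iab}_C\,\varphi
  && \text{(anti-monotone in the coalition)}\\
& \vdash \varphi \to \psi \ \Longrightarrow\ \vdash \mathsf{Iab}_C\,\psi \to \mathsf{Iab}_C\,\varphi
  && \text{(contravariant in goal strength)}\\
& \vdash \mathsf{Iab}_C\,\varphi \to \mathsf{Iab}_C(\varphi \wedge \psi)
  && \text{(one-way conjunction)}\\
& \vdash \mathsf{Iab}_C(\varphi \vee \psi) \to \mathsf{Iab}_C\,\varphi \wedge \mathsf{Iab}_C\,\psi
  && \text{(one-way disjunction)}\\
& \mathsf{Iab}_C\,\varphi \not\equiv \langle \bar{C} \rangle \neg\varphi
  && \text{(inability $\neq$ complementary ability)}\\
& \vdash \mathsf{Iab}_N\,\varphi \leftrightarrow \langle\emptyset\rangle\neg\varphi,
  \qquad \vdash \mathsf{Iab}_\emptyset\,\varphi \leftrightarrow \langle N\rangle\neg\varphi
  && \text{(boundary dualities)}
\end{aligned}
\]
```

A coordination game illustrates the non-equivalence line: if agent $a$ wins only when both agents match, then $a$ alone cannot ensure a match, yet the complement $\{b\}$ also cannot ensure a mismatch, since $a$ may copy $b$'s choice.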
Making inability an explicit, first-class concept, even as a definitional extension, offers significant conceptual benefits. It provides a more natural and direct language for expressing and reasoning about the boundaries of agency, negative capabilities, and systemic impossibilities in multi-agent systems. This is particularly valuable in contexts like AI safety, where requirements often specify what a system *cannot* force (e.g., 𝖨𝖺𝖻_H harm, meaning the system cannot guarantee harm), and in protocol verification, where security rests on adversarial coalitions being unable to achieve certain outcomes. The findings lay foundational groundwork for future research into richer notions of inability, such as epistemic, resource-bounded, or dynamic inability, suggesting that independent semantic treatments may be warranted in more complex settings. By systematically identifying where agency stops and constraints begin, the logic provides a toolkit for analyzing the limits of strategic power.
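A safety requirement of this shape can be checked directly in a one-step model: verify that the adversarial coalition has no joint choice that guarantees the bad outcome. The self-contained toy below (all names invented for this sketch; the paper's own example is only the formula's reading) models a system agent whose harmful action an overseer can always block.

```python
from itertools import product

# Toy safety check: coalition H = {"sys"} should be *unable* to force `harm`,
# because the overseer always has a blocking counter-action.
AGENTS = ("sys", "overseer")
ACTIONS = {"sys": ("act", "idle"), "overseer": ("allow", "block")}

def harm(joint):
    # Harm occurs only if the system acts and the overseer allows it.
    return joint["sys"] == "act" and joint["overseer"] == "allow"

def can_force(coalition, prop):
    # <C>prop in a one-step game: some joint choice of C guarantees prop.
    others = tuple(ag for ag in AGENTS if ag not in coalition)
    return any(
        all(
            prop({**dict(zip(coalition, mine)), **dict(zip(others, theirs))})
            for theirs in product(*(ACTIONS[ag] for ag in others))
        )
        for mine in product(*(ACTIONS[ag] for ag in coalition))
    )

# Iab_H harm: the system alone cannot guarantee harm.
print(not can_force(("sys",), harm))  # → True
```

The same check run on the grand coalition returns the opposite verdict, which is exactly the asymmetry a safety specification cares about: harm is reachable in the model, but not forceable by the untrusted coalition alone.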