AI Summary • Published on Apr 14, 2026
The concept of "representation" is fundamental to understanding biological and artificial intelligence, as well as the philosophy of mind. However, its definition and application vary significantly across neuroscience, computer science, and philosophy. A recurring but inconsistently defined theme is that representations must be "useful" or "usable" for an agent. This interdisciplinary divergence leads to misunderstandings, ambiguous scientific discourse, and difficulties in comparing research across fields. The paper aims to clarify these disparate conceptualizations by focusing on how the "usefulness" of representations is understood and integrated into different representational theories.
The authors conducted an interdisciplinary review and discussion to synthesize ideas about representation from philosophy, neuroscience, and computer science, focusing on the notions of "use" and "usability." They identified four key aspects: an internal state (1) carrying information, (2) that information being useful, (3) the information being in a usable format, and (4) the state being actually used by the system. Based on these aspects, they developed a three-level framework that organizes existing perspectives on representations: Level 1, representations as information; Level 2, representations as usable; and Level 3, representations as used.
The framework also considers different "users" of representations—the whole agent, an internal subsystem, or a third party (like a scientist)—to further delineate the context of representation attribution. An informal poll at the CCN 2024 conference provided preliminary empirical insight into how researchers perceive and apply these different levels in their work.
The paper organizes the diverse notions of "representation" into a coherent three-level framework.

Level 1, "Representations as Information," is the broadest: it requires only a statistical dependency between internal states and the world, a view common in machine learning. It permits studying the mathematical properties of representations without strict constraints on utility or format, but it struggles with concepts like misrepresentation.

Level 2, "Representations as Usable," adds the requirements of usefulness (task relevance, epistemic value, optimality) and a usable format (e.g., linear decodability, disentanglement, invariance). This level enables "how-possibly" explanations of system function, accommodates misrepresentation, and supports modular analyses.

Level 3, "Representations as Used," emphasizes the causal role of internal states in producing behavior or influencing downstream subsystems, providing "how-actually" explanations. This level is crucial for understanding observed errors and the actual impact of representations, but it faces challenges in establishing causality and in specifying precise content and format.

The informal poll results demonstrated the framework's relevance by revealing how researchers implicitly operate at different levels when thinking about and publishing on representations.
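To make the Level 2 notion of a "usable format" concrete, here is a minimal sketch of linear decodability: asking whether a task variable can be read out from a system's internal states with a simple linear map. The setup (trial counts, noise levels, a single coding axis) is a hypothetical illustration, not the paper's method.

```python
# Illustrative sketch (not from the paper): linear decodability as a proxy for
# a usable representational format (Level 2). We test whether a binary task
# variable can be recovered from internal states by a linear readout.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 trials, a binary stimulus label, and 50-dimensional
# internal states that carry the label along one direction, plus noise.
n_trials, n_units = 200, 50
labels = rng.integers(0, 2, size=n_trials)        # the task variable
coding_axis = rng.normal(size=n_units)            # direction carrying the label
states = (labels[:, None] - 0.5) * coding_axis \
         + rng.normal(scale=1.0, size=(n_trials, n_units))

# Fit a least-squares linear readout on half the trials, test on the rest.
train, test = slice(0, 100), slice(100, 200)
X_train = np.c_[states[train], np.ones(100)]      # append a bias column
w, *_ = np.linalg.lstsq(X_train, labels[train], rcond=None)

X_test = np.c_[states[test], np.ones(100)]
preds = (X_test @ w > 0.5).astype(int)
accuracy = (preds == labels[test]).mean()

# High test accuracy suggests the variable is present in a linearly usable
# format; chance-level accuracy would mean the information, even if present
# (Level 1), is not linearly decodable and might only be recovered nonlinearly.
print(f"linear readout accuracy: {accuracy:.2f}")
```

Note that decodability alone only licenses a "how-possibly" claim: a downstream readout of this kind could exist, but showing that the system actually uses it would require the causal evidence the paper assigns to Level 3.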
This framework carries significant implications for research across philosophy, neuroscience, and computer science. It clarifies long-standing debates, such as those over representationalism and realism, by highlighting that "representation" serves various scientific purposes and admits multiple valid conceptualizations. For neuroscience, it reveals potential blind spots in current methodologies, particularly the need for more tools to investigate how representations are *actually used* (Level 3), rather than merely informative or usable (Levels 1 and 2). For computer science, it encourages more precise articulation of which concept of representation is being employed, especially in representation learning and mechanistic interpretability. Overall, the framework underscores the value of interdisciplinary dialogue, reminding researchers that their conclusions about representations are deeply intertwined with their assumptions about a system's goals and capabilities, and that different levels of analysis support distinct scientific questions.