AI Summary • Published on Mar 8, 2026
The paper identifies a crucial unresolved theoretical question in AGI: what structural principle differentiates scalable task generalization from genuinely general intelligence? Existing frameworks such as statistical learning theory, reinforcement learning, and domain adaptation provide local guarantees within fixed hypothesis classes, representations, or interfaces. Modern AI systems, however, especially agentic deployments, operate under continuous structural transformation: new tools, shifting objectives, multi-regime evaluation, and evolving memory. Because norms and evaluators are typically treated as external or fragmented modules, this mismatch undermines controllability and safety, particularly in the face of vulnerabilities such as prompt injection and memory poisoning. The core problem is the lack of a framework that enables generalization not just across tasks but under admissible changes to the learning interface itself, while maintaining coherence and control.
SMGI proposes a structural theory of general artificial intelligence by introducing a formal meta-model θ = (r, H, Π, L, E, M). This meta-model explicitly treats representational maps (r), hypothesis spaces (H), structural priors (Π), multi-regime evaluators (L), environment classes (E), and memory operators (M) as dynamic, typed components. A key distinction is drawn between this structural ontology θ and its induced behavioral semantics Tθ, allowing general artificial intelligence to be defined as a class of admissible coupled dynamics (θ, Tθ). These dynamics must satisfy four obligations:

1. Structural Closure: the system remains well-formed under a typed class of admissible task and interface transformations.
2. Dynamical Stability: certified sequential adaptation remains stable under regime switching and memory interaction (e.g., via Lyapunov-like witnesses).
3. Bounded Statistical Capacity: statistical capacity remains controlled along admissible evolutions.
4. Evaluative Invariance: evaluators and norms remain invariant across regime shifts, or are updated only via certified meta-transformations.

The framework internalizes evaluation and representation as first-class components rather than external elements. It also introduces certified evaluator updates, structural risk minimization over the entire meta-model, and structural priors for capacity control under evolving interfaces.
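To make the typed-component idea concrete, here is a minimal Python sketch of the meta-model as a data structure with a gated structural update. The type signatures, field encodings, and the `certify` hook are illustrative assumptions of this summary, not the paper's formal definitions.

```python
from dataclasses import dataclass
from typing import Any, Callable

# Illustrative aliases; the paper's typed components are more general.
Input, Latent, Memory, Experience = Any, Any, Any, Any

@dataclass(frozen=True)
class MetaModel:
    """Hypothetical encoding of theta = (r, H, Pi, L, E, M)."""
    r: Callable[[Input], Latent]               # representational map
    H: frozenset                               # hypothesis space
    Pi: Callable[[Any], float]                 # structural prior (scores structures)
    L: dict[str, Callable[..., float]]         # multi-regime evaluators, keyed by regime
    E: frozenset                               # admissible environment class
    M: Callable[[Memory, Experience], Memory]  # memory operator

def admissible_step(theta: MetaModel,
                    transform: Callable[[MetaModel], MetaModel],
                    certify: Callable[[MetaModel, MetaModel], bool]) -> MetaModel:
    """Apply a structural transformation only if a certificate witnesses the
    four obligations (closure, stability, capacity, evaluative invariance).
    Rejection is the safe default: the current meta-model stays in force."""
    candidate = transform(theta)
    if not certify(theta, candidate):
        return theta  # no certificate: refuse the structural update
    return candidate
```

The point of the sketch is the gate, not the fields: every change to θ passes through a certification step, matching the framework's insistence that interface evolution be admissible rather than ad hoc.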
The paper proves several key theoretical results. It establishes a structural generalization bound that combines sequential PAC-Bayes analysis with Lyapunov stability, giving sufficient conditions for capacity control and bounded drift under admissible task transformations: the system can generalize with quantifiable guarantees not only over data but over changes to its own structure. A strict structural inclusion theorem then shows that classical learning paradigms, including empirical risk minimization, reinforcement learning, Solomonoff-style universal induction, and modern agentic pipelines (such as LLMs with tool use), are all structurally restricted instances of SMGI. SMGI thus acts as a superset framework in which the limitations of these paradigms (fixed evaluators, static representations) are addressed by making those components dynamic and subject to certified evolution. The paper also outlines an empirical protocol for measuring structural growth and memory-governed stability in long-horizon, nonstationary regimes.
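The paper's exact bound is not reproduced in this summary, but the following LaTeX sketch shows the general shape such a result takes when a sequential PAC-Bayes term is combined with a Lyapunov drift budget. All symbols here (the posteriors ρ_t, priors π_t, witness V, drift budgets Δ_t, constant c) are assumptions of this sketch rather than the paper's notation.

```latex
% Sketch only: sequential PAC-Bayes risk bound with a structural-drift term.
% With probability at least 1 - \delta, for posteriors \rho_t over evolving
% meta-models, reference priors \pi_t, and n samples per regime:
\[
  \frac{1}{T}\sum_{t=1}^{T} \mathbb{E}_{\theta \sim \rho_t}\left[ L_t(\theta) \right]
  \;\le\;
  \frac{1}{T}\sum_{t=1}^{T} \mathbb{E}_{\theta \sim \rho_t}\left[ \widehat{L}_t(\theta) \right]
  + \sqrt{\frac{\sum_{t=1}^{T} \mathrm{KL}(\rho_t \,\|\, \pi_t) + \ln\frac{T}{\delta}}{2\,T\,n}}
  + \frac{c}{T}\sum_{t=1}^{T} \Delta_t,
\]
% where a structural step \theta_t -> \theta_{t+1} is admissible only if a
% Lyapunov-like witness V certifies bounded drift:
% V(\theta_{t+1}) - V(\theta_t) \le \Delta_t.
```

The strict-inclusion claim can likewise be illustrated by freezing components of the MetaModel sketch above; in this reading, ERM is SMGI with everything except the hypothesis search held static (the `"train"` key and identity memory are illustrative).

```python
# Sketch: empirical risk minimization as a structurally restricted instance,
# reusing the hypothetical MetaModel from the earlier sketch.
def as_erm_instance(theta: MetaModel) -> MetaModel:
    return MetaModel(
        r=theta.r,                      # representation fixed for all time
        H=theta.H,                      # the only component ERM searches over
        Pi=theta.Pi,                    # static prior (e.g., a norm penalty)
        L={"train": theta.L["train"]},  # one evaluator, no regime switching
        E=theta.E,                      # a single i.i.d. environment assumed
        M=lambda mem, exp: mem,         # memory operator is the identity
    )
```

Reinforcement learning, universal induction, and agentic pipelines would correspond to different freezing patterns; what they share, on the paper's account, is that none of them certify changes to r, L, or M.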
SMGI offers a foundational shift in how AGI is framed, moving beyond scale and task performance to structural integrity and certified evolution. By explicitly formalizing representation, evaluation, and memory as dynamic components, it provides a mathematical framework for designing AGI systems that remain coherent, controllable, and safe as tasks and interfaces evolve. This has direct implications for AI safety and alignment: normative constraints and protected evaluative cores can be embedded into the system's meta-model itself, rather than relying on external guardrails or post-training adjustments. The theory provides a roadmap for addressing catastrophic forgetting (by preserving invariants across memory strata) and alignment drift (by ensuring growth remains within predefined admissible manifolds). It also unifies core challenges such as robustness, transfer, and self-modification under a single structural perspective, framing each as an instance of preserving admissible invariants under structural expansion. Finally, the framework proposes a falsifiable criterion for AGI certification, shifting the scientific burden from empirical robustness claims to the specification of verifiable certificates and admissible update rules for structural evolution.
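As one concrete reading of certified evaluator updates and the protected evaluative core, the sketch below gates an evaluator update on invariance over a protected probe set. The probe-set mechanism and tolerance are assumptions of this summary, standing in for the paper's certified meta-transformations.

```python
from typing import Callable

Evaluator = Callable[[object], float]

def certified_evaluator_update(current: Evaluator,
                               proposed: Evaluator,
                               protected_probes: list[object],
                               tolerance: float = 1e-9) -> Evaluator:
    """Accept a proposed evaluator only if it agrees with the current one on a
    protected probe set; otherwise keep the existing evaluator in force.
    This makes evaluative invariance a checkable, falsifiable property."""
    for probe in protected_probes:
        if abs(current(probe) - proposed(probe)) > tolerance:
            return current  # certificate fails: protect the evaluative core
    return proposed
```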