AI Summary • Published on Mar 24, 2026
The increasing adoption of artificial intelligence for cognitive tasks raises critical questions about its impact on human capabilities. Despite qualitative warnings and historical precedents of skill atrophy under technological substitution (e.g., GPS use eroding spatial memory, AI-assisted colonoscopy degrading clinicians' detection rates when the AI is unavailable), there has been no quantitative framework for predicting when human capability loss due to AI delegation becomes catastrophic. Existing models focus on adoption dynamics or economic displacement but do not capture the feedback loop between delegation and capability loss, leaving a significant gap in our understanding of the long-term societal consequences.
This paper introduces a minimal two-variable dynamical systems model, expressed as ordinary differential equations (ODEs), to quantitatively analyze human-AI interaction. The model couples human capability (H) with the delegation rate (D) and is built on three fundamental axioms: learning requires existing capability, learning requires practice, and disuse leads to forgetting. The capability equation combines logistic learning on the tasks humans still perform with decay proportional to the fraction delegated to AI. The delegation equation models rational adoption driven by AI capability and by social contagion. Calibrated against empirical deskilling data from four diverse domains (education, medical endoscopy, spatial cognition, aviation), the model reproduces the observed capability declines by varying only the forgetting rate. Further quantitative validation came from fitting the ODEs to a 15-country panel of OECD PISA mathematics assessment data (102 points, R² = 0.946), showing that a single parsimonious model structure captures the cross-country trends. An agent-based model (ABM) was also used to explore stochastic dynamics and phase transitions.
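To make this structure concrete, the sketch below integrates one plausible pair of equations consistent with the description above. The functional forms, the parameter values (ALPHA, BETA, GAMMA, SIGMA), the fixed AI capability K_AI, and the initial conditions are all assumptions chosen for illustration; they are not the paper's calibrated equations.

```python
# Illustrative sketch only: plausible functional forms consistent with the
# summary's axioms (learning needs capability and practice; disuse causes
# forgetting; delegation is driven by AI capability and social contagion).
# All parameter values are assumptions, not the paper's calibrated estimates.
from scipy.integrate import solve_ivp

ALPHA = 0.5   # learning rate (assumed)
BETA  = 0.1   # forgetting rate under delegation (assumed)
GAMMA = 0.8   # adoption responsiveness to the capability gap (assumed)
SIGMA = 0.4   # social-contagion strength (assumed)
K_AI  = 0.95  # fixed AI capability, chosen above the reported threshold (assumed)

def rhs(t, y):
    """Coupled capability-delegation ODEs (illustrative forms)."""
    H, D = y
    # Logistic learning only on the (1 - D) fraction of tasks humans still
    # perform, minus forgetting proportional to the delegated fraction D.
    dH = ALPHA * (1.0 - D) * H * (1.0 - H) - BETA * D * H
    # Delegation grows when AI capability exceeds human capability, is
    # amplified by social contagion, and saturates as D approaches 1.
    dD = (GAMMA * (K_AI - H) + SIGMA * D) * D * (1.0 - D)
    return [dH, dD]

sol = solve_ivp(rhs, (0.0, 200.0), [0.6, 0.05])
print(f"H(end) = {sol.y[0, -1]:.3f}, D(end) = {sol.y[1, -1]:.3f}")
```

With these assumed values the system drifts into full delegation (D near 1) and H collapses toward 0; with a lower K_AI (e.g., 0.7) the same system settles at high H and negligible D.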
The model reveals several key findings. First, it identifies a critical AI capability threshold of approximately K\* = 0.85, beyond which human capability collapses abruptly, a phenomenon termed the "enrichment paradox." This transition is sharp rather than gradual, so incremental AI improvements can trigger discontinuous societal consequences. Second, the model predicts an "antifragility effect": introducing periodic AI failures paradoxically strengthens human capability. Simulations show that a 25% crisis frequency can yield a 2.7-fold increase in equilibrium human capability compared to perfectly reliable AI. Third, mandatory practice policies are highly effective: a policy requiring 20% of tasks to be performed without AI assistance preserves 92% more capability than the baseline, with a superlinear relationship between the practice fraction and capability preservation. Finally, the model predicts that the dependent state, where human capability is near zero, is a near-absorbing attractor. Recovery from this state is prohibitively slow, implying that once a capability is fully delegated to AI, its loss is practically irreversible within typical institutional planning horizons.
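As a rough illustration of how the threshold and practice findings might be probed numerically, the sketch below sweeps the fixed AI capability K and records the long-run human capability, with and without a mandated 20% human-performed task share. It reuses the same assumed functional forms and parameter values as the sketch above, so the numbers it prints reflect those assumptions rather than the paper's calibrated results (in particular, it will not reproduce the 92% or 2.7-fold figures).

```python
# Illustrative sweep using the same assumed forms and parameters as above:
# vary the fixed AI capability K, integrate to a long horizon, and record the
# final human capability H with and without a mandated practice floor.
import numpy as np
from scipy.integrate import solve_ivp

ALPHA, BETA, GAMMA, SIGMA = 0.5, 0.1, 0.8, 0.4  # assumed rates

def long_run_H(k_ai, practice_floor=0.0, t_end=500.0):
    """Return H at t_end; practice_floor caps the delegated fraction in dH/dt."""
    def rhs(t, y):
        H, D = y
        d_eff = min(D, 1.0 - practice_floor)  # mandated human-performed share
        dH = ALPHA * (1.0 - d_eff) * H * (1.0 - H) - BETA * d_eff * H
        dD = (GAMMA * (k_ai - H) + SIGMA * D) * D * (1.0 - D)
        return [dH, dD]
    sol = solve_ivp(rhs, (0.0, t_end), [0.6, 0.05])
    return sol.y[0, -1]

for k in np.linspace(0.70, 0.95, 6):
    print(f"K = {k:.2f}  H_eq = {long_run_H(k):.3f}  "
          f"with 20% mandated practice: {long_run_H(k, 0.20):.3f}")
```

Under these assumptions the sweep shows an abrupt drop in long-run H somewhere between the lower and upper K values, while the practice floor keeps H bounded away from zero above the transition.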
The findings have profound implications for AI governance. Policies should shift from merely considering AI adoption to managing AI capability thresholds in specific domains, as crossing K\* can lead to catastrophic deskilling. Designing deliberate practice opportunities and "fire drills" into human-AI workflows is crucial to foster antifragility, counteracting the vulnerability introduced by perfectly reliable AI. The irreversibility of capability loss underscores the long-term societal risk; once skills are lost, recovery is extremely difficult. Therefore, modest mandatory practice requirements, like one AI-free workday per week, offer a high-leverage intervention to preserve essential human capabilities, aligning with existing practices in fields like aviation. The model suggests monitoring early-warning signals in skill metrics to anticipate critical transitions.