AI Summary • Published on Mar 22, 2026
The increasing prevalence of Generative Artificial Intelligence (AI) has shifted the benign cognitive offloading typically associated with tool use into a significant risk of cognitive agency surrender. This issue is exacerbated by a commercial design philosophy that prioritizes "zero-friction" interfaces. Such designs exploit human tendencies toward "cognitive miserliness," prematurely satisfying the need for cognitive closure and fostering severe automation bias. Empirical analysis of 1,223 high-confidence AI-HCI papers from 2023 to early 2026 revealed an escalating "agentic takeover," where research defending human epistemic sovereignty declined, while efforts to optimize autonomous machine agents surged, and frictionless usability maintained a dominant position. This trajectory indicates a systemic erosion of human cognitive independence, transforming assistive technology into a potential threat to critical thinking and decision-making.
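The corpus analysis above relies on zero-shot semantic classification, i.e., assigning each paper to a paradigm without any task-specific training. The mechanics can be sketched as follows; this is a toy illustration, not the authors' pipeline: the paradigm labels, their keyword descriptions, and the bag-of-words `embed` helper are all assumptions standing in for a real sentence-embedding model.

```python
import math
from collections import Counter

# Illustrative paradigm labels (assumed; the paper's exact taxonomy is not given here)
LABELS = {
    "frictionless_design": "seamless zero friction usability interface efficiency",
    "epistemic_sovereignty": "human oversight critical thinking cognitive agency reflection",
    "autonomous_agents": "autonomous agent orchestration multi agent optimization",
}

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would use a sentence encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(abstract: str) -> str:
    # Zero-shot: rank label descriptions by semantic similarity, take the best.
    doc = embed(abstract)
    return max(LABELS, key=lambda lbl: cosine(doc, embed(LABELS[lbl])))

print(classify("We present a seamless zero friction interface for efficiency"))
# → frictionless_design
```

Swapping the toy `embed` for a proper sentence encoder turns this into the kind of pipeline the summary describes, while the classification logic stays the same.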
To counteract the surrender of cognitive agency, this paper proposes "Scaffolded Cognitive Friction," a paradigm shift that repurposes Multi-Agent Systems (MAS). Instead of optimizing for premature consensus, MAS are designed to function as explicit cognitive forcing functions, such as computational Devil's Advocates. These systems deliberately inject germane epistemic tension by exposing structured, machine-generated logical divergence, thereby disrupting heuristic execution and compelling System 2 analytical deliberation. To empirically quantify the effectiveness of this intervention, the authors outline a multimodal computational phenotyping agenda. This involves integrating high-fidelity markers like gaze transition entropy, task-evoked pupillometry, and functional Near-Infrared Spectroscopy (fNIRS), alongside Hierarchical Drift Diffusion Modeling (HDDM), to mathematically decouple decision outcomes from the intensity of cognitive effort and establish an objective ground truth for synergistic human-AI engagement.
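Of the markers listed above, gaze transition entropy is the simplest to make concrete: it is the Shannon entropy of transitions between areas of interest (AOIs), weighted by how often each AOI serves as a transition source. A stereotyped, repetitive scanpath yields low entropy; exploratory cross-examination of competing agent outputs yields high entropy. A minimal sketch of one common formulation (not necessarily the exact estimator the authors propose):

```python
import math
from collections import Counter, defaultdict

def gaze_transition_entropy(fixations):
    """Shannon entropy (bits) of AOI-to-AOI transitions, with each source
    AOI weighted by its share of transitions. Low values indicate
    stereotyped scanning; high values indicate dispersed, exploratory scanning."""
    transitions = list(zip(fixations, fixations[1:]))
    by_source = defaultdict(Counter)
    for src, dst in transitions:
        by_source[src][dst] += 1
    n = len(transitions)
    h = 0.0
    for src, dsts in by_source.items():
        total = sum(dsts.values())
        weight = total / n  # empirical weight of this source AOI
        row_h = -sum((c / total) * math.log2(c / total) for c in dsts.values())
        h += weight * row_h
    return h

# A perfectly cyclic scanpath is fully predictable, so its entropy is zero:
print(gaze_transition_entropy(["A", "B", "A", "B", "A"]))  # → 0.0
```

In the proposed agenda, a marker like this would be combined with pupillometry and fNIRS rather than used alone, since any single channel can be confounded.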
A zero-shot semantic classification pipeline applied to 1,223 AI-HCI papers from 2023 to early 2026 revealed a stark "agentic takeover" and the persistent hegemony of frictionless design. Specifically, 67.3% of the field maintained a frictionless paradigm in 2026. Research defending human epistemic sovereignty saw a brief surge to 19.1% in 2025, but abruptly fell to 13.1% in early 2026. This decline coincided with an explosive increase in research optimizing autonomous machine agents, which doubled to 19.6%. The proposed Scaffolded Cognitive Friction mechanically counteracts automation bias by creating explicit informational conflict, which flattens the starting-point bias (z) in Hierarchical Drift Diffusion Models back to a neutral baseline. This forces users to rely on a high drift rate (v), driven by active cross-examination and genuine evidence accumulation, to reach a decision threshold, thereby verifying successful cognitive effort. The model also suggests dynamic moderation of friction based on a user's domain expertise to prevent "friction shock."
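The roles of the starting point z and the drift rate v can be made concrete with a simulated drift-diffusion trial: evidence starts at z·a and random-walks toward an upper bound a (accept) or lower bound 0 (reject). With a biased start (z near 1) and no real evidence (v = 0), the upper choice dominates anyway, which is exactly the signature of automation bias; with a neutral start (z = 0.5), the outcome depends on drift, i.e., on genuine evidence accumulation. This is an illustrative simulation with assumed parameter values (a = 2.0, dt = 0.001), not the authors' fitted HDDM:

```python
import math
import random

def ddm_trial(v, z, rng, a=2.0, dt=0.001, sigma=1.0, max_steps=200_000):
    """One drift-diffusion trial. Evidence starts at z*a and is accumulated
    until it crosses the upper bound a (choice 1) or lower bound 0 (choice 0).
    Returns (choice, response_time)."""
    x = z * a
    sd = sigma * math.sqrt(dt)  # per-step noise scale
    for step in range(1, max_steps + 1):
        x += v * dt + rng.gauss(0.0, sd)
        if x >= a:
            return 1, step * dt
        if x <= 0.0:
            return 0, step * dt
    return 0, max_steps * dt  # non-termination is effectively impossible here

def p_upper(v, z, n=1000, seed=0):
    """Monte Carlo estimate of the probability of the upper-bound choice."""
    rng = random.Random(seed)
    return sum(ddm_trial(v, z, rng)[0] for _ in range(n)) / n
```

Running `p_upper(0.0, 0.8)` gives roughly 0.8 (the classic result that, at zero drift, the choice probability equals the relative starting point), whereas `p_upper(0.0, 0.5)` hovers near chance: once the bias is flattened, only a substantial drift rate v can reliably push the decision over a threshold.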
The habitual abdication of logical deduction and ethical adjudication due to frictionless AI poses a macroscopic societal hazard, particularly threatening vulnerable populations and high-stakes socio-technical environments. This paper argues that intentionally designed cognitive friction is not merely a psychological intervention but a fundamental technical prerequisite for enforcing global AI governance and preserving societal cognitive resilience. Frameworks like the EU AI Act mandate "substantive human oversight" for high-risk AI, a requirement that frictionless interfaces reduce to a formality, turning human operators into "moral crumple zones." Mandatory friction is advocated for high-stakes sensemaking domains such as cognitive security, cyber-physical systems, healthcare, and judicial contexts, and explicitly ruled out in low-risk or time-critical scenarios. The authors call for a reconstruction of evaluation frameworks, moving beyond superficial usability metrics to measure epistemic engagement and resilience through multimodal evaluation architectures and advanced Bayesian cognitive modeling. Ultimately, transforming AI into a civic epistemic infrastructure equipped with meaningful, scaffolded friction provides a tangible path for substantive human oversight and the defense of human epistemic sovereignty.