AI Summary • Published on Mar 10, 2026
As artificial intelligence becomes increasingly integrated into daily workflows, humans frequently face decisions about when to rely on AI advice. These choices are profoundly shaped by general efficacy beliefs: individuals' confidence in their own capabilities (self-efficacy) and their perceptions of AI competence (AI efficacy). While prior research has examined factors influencing AI reliance, a critical gap remains in understanding how these stable, general efficacy beliefs translate into the instance-wise efficacy judgments made for specific tasks. This translation matters because it can be systematically distorted by pre-existing beliefs, producing miscalibrated delegation behaviors that undermine effective human-AI collaboration and overall team performance.
To investigate this phenomenon, a controlled behavioral study was conducted with 240 participants, who engaged in an income classification task requiring repeated delegation decisions to an AI. The experiment employed a 2x2 factorial design that crossed two kinds of contextual information: a control group received neither, a "data" group received information about data distributions, an "AI" group received information on AI performance, and a "combined" group received both. Before the task, participants rated their general self-efficacy and AI efficacy beliefs. During the task, for each of 12 randomly selected instances, they provided instance-wise efficacy judgments before deciding whether to solve the instance themselves or delegate it to the AI. Participants received no performance feedback during this phase, ensuring that judgments were based solely on instance features and contextual information.
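The 2x2 structure described above can be sketched as a mapping from the two information factors (data-distribution information and AI-performance information) to the four experimental groups. The function and constant names below are illustrative assumptions, not identifiers from the study.

```python
# Illustrative sketch of the 2x2 factorial design described above.
# Factor 1: data-distribution information (absent / present)
# Factor 2: AI-performance information (absent / present)
from itertools import product

CONDITIONS = {
    (False, False): "control",   # no additional information
    (True, False): "data",       # data-distribution information only
    (False, True): "AI",         # AI-performance information only
    (True, True): "combined",    # both kinds of information
}

def condition_label(data_info: bool, ai_info: bool) -> str:
    """Map the two manipulated factors to the four group labels."""
    return CONDITIONS[(data_info, ai_info)]

# Fully crossing both factors yields exactly the four groups:
labels = [condition_label(d, a) for d, a in product([False, True], repeat=2)]
```

Crossing the two binary factors is what makes this a 2x2 design: four conditions arise from two manipulations, which lets each factor's effect be estimated separately and in combination.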
The study revealed that general efficacy beliefs act as persistent cognitive anchors for instance-wise judgments, though with asymmetric effects. Self-efficacy judgments remained closely aligned with general beliefs across all conditions, indicating well-calibrated self-assessment. In contrast, AI efficacy judgments consistently showed an "AI optimism" bias, where participants rated AI capabilities higher for specific instances than their general beliefs suggested; this bias was eliminated only when specific AI performance information was provided. Contextual information asymmetrically amplified delegation behavior: data and AI information strengthened participants' tendency to retain control when their instance-wise self-efficacy was higher than their general belief, while all types of contextual information amplified delegation when instance-wise AI efficacy was rated higher than general beliefs. Crucially, efficacy discrepancies had a significantly larger impact on delegation behavior than on actual human-AI team performance, highlighting a disconnect where humans' intuitive delegation strategies often diverge from optimal collaborative outcomes.
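The discrepancy notion running through these findings can be illustrated as a signed gap between an instance-wise judgment and the corresponding general belief. The helper names and the toy delegation rule below are assumptions made for illustration; they are not the study's actual measures or decision model.

```python
def efficacy_discrepancy(instance_judgment: float, general_belief: float) -> float:
    """Signed gap between an instance-wise efficacy judgment and the
    general belief. Positive values mean the specific instance is rated
    above the general belief (for AI efficacy, the "AI optimism" pattern)."""
    return instance_judgment - general_belief

def delegate(self_instance: float, ai_instance: float) -> bool:
    """Toy delegation rule (an assumption, not the paper's model):
    hand the instance to the AI when the instance-wise AI efficacy
    judgment exceeds the instance-wise self-efficacy judgment."""
    return ai_instance > self_instance
```

Under this sketch, a positive AI-side discrepancy (e.g. `efficacy_discrepancy(0.8, 0.6)`) captures AI optimism, and the summary's central point is that such discrepancies moved the `delegate`-style choice far more than they moved actual team performance.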
These findings challenge traditional transparency-focused AI design, suggesting that merely providing more information may not improve collaboration outcomes and can even exacerbate metacognitive biases. The paper advocates for a shift towards interventions that target foundational general efficacy beliefs before task engagement, complementing instance-level transparency features. Design guidelines propose making anchoring biases and AI optimism patterns visible to users, aligning delegation decisions with actual performance outcomes through feedback mechanisms, and distinguishing between information for understanding (calibration) and information for action (decision support). This approach aims to foster collaborative systems that leverage human agency more effectively while actively mitigating systematic biases that hinder optimal human-AI teamwork.