The First Generation of AI-Assisted Programming Learners: Gendered Patterns in Critical Thinking and AI Ethics of German Secondary School Students
This study investigates how German secondary school students critically engage with AI-assisted programming tools and perceive ethical responsibilities. It reveals an "AI paradox" where students show strong ethical awareness but integrate AI-generated code without full understanding, with notable gendered differences in usage and trust.
Towards Semantic-based Agent Communication Networks: Vision, Technologies, and Challenges
This paper proposes a novel architecture for semantic-based agent communication networks, addressing the limitations of traditional communication paradigms in the context of agentic AI and 6G. It systematically reviews state-of-the-art technologies across the proposed layers, entities, and stages, and identifies key challenges for future research.
Generative Artificial Intelligence and the Knowledge Gap: Toward a New Form of Informational Inequality
This paper proposes a theoretical extension of the knowledge gap hypothesis to understand emerging forms of informational inequality driven by generative AI. It argues that while access to AI is widespread, disparities will arise from users' critical evaluation skills when interacting with AI-generated content, with higher education fostering more critical engagement.
The enrichment paradox: critical capability thresholds and irreversible dependency in human-AI symbiosis
A novel dynamical systems model predicts a critical AI capability threshold, beyond which human skills abruptly collapse due to delegation, a phenomenon termed the "enrichment paradox." The model suggests that periodic AI failures and mandatory practice can paradoxically preserve human capabilities and highlights the irreversible nature of profound skill loss.
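The abstract does not give the model's equations, but the collapse mechanism it describes can be illustrated with a purely hypothetical toy dynamic (not the authors' model): delegation grows as AI capability exceeds human skill, and skill is rebuilt only through the practice that remains. All parameter names and values below are illustrative assumptions.

```python
import math

def simulate_skill(ai_capability, steps=5000, dt=0.01,
                   learn=1.0, decay=0.5, k=10.0, s0=0.9):
    """Toy dynamic: delegation rises sigmoidally once AI capability
    exceeds current skill; practice (1 - delegation) rebuilds skill,
    while unused skill decays. Euler integration of one ODE."""
    s = s0
    for _ in range(steps):
        delegation = 1.0 / (1.0 + math.exp(-k * (ai_capability - s)))
        ds = learn * (1.0 - delegation) * (1.0 - s) - decay * s
        s += dt * ds
    return s

# With weak AI, skill settles at a healthy equilibrium; past the
# capability threshold, delegation dominates and skill collapses.
print(simulate_skill(0.10))  # high equilibrium skill
print(simulate_skill(0.95))  # skill collapses toward zero
```

Even in this crude sketch, the equilibrium shifts abruptly rather than gradually as `ai_capability` crosses the threshold, which is the qualitative behavior the abstract calls the "enrichment paradox".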
The Stochastic Gap: A Markovian Framework for Pre-Deployment Reliability and Oversight-Cost Auditing in Agentic Artificial Intelligence
This paper introduces a Markovian framework for auditing the reliability and oversight cost of agentic AI systems in organizational workflows before deployment. It reveals a "stochastic gap": systems can appear well supported at the state level yet harbor blind spots in specific next-step decisions, undermining reliability and increasing the human oversight burden.
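The paper's framework is not specified in this summary, but the gap it names can be shown with a hypothetical toy audit (the workflow, frequencies, and reliabilities below are all invented for illustration): a frequency-weighted, state-level reliability score can look healthy while a rare but critical next-step transition is unreliable.

```python
# Assumed toy workflow: for one state, the empirical next-step
# distribution and the agent's reliability on each step.
workflow = {
    "review": [
        # (next state, how often this step occurs, agent reliability)
        ("approve", 0.95, 0.96),
        ("escalate", 0.05, 0.40),  # rare but critical next-step decision
    ],
}

def state_support(state):
    """Frequency-weighted state-level reliability: looks fine in aggregate."""
    return sum(freq * rel for _, freq, rel in workflow[state])

def next_step_blind_spots(state, threshold=0.8):
    """Per-transition audit: exposes the low-reliability next step."""
    return [nxt for nxt, _, rel in workflow[state] if rel < threshold]

print(state_support("review"))           # ~0.93, looks well supported
print(next_step_blind_spots("review"))   # ['escalate'] is the blind spot
```

The aggregate score (~0.93) hides a 40%-reliable escalation step, which is the kind of next-step blind spot a per-transition (Markovian) audit is meant to surface before deployment.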