AI Summary • Published on Mar 18, 2026
The increasing autonomy of agentic AI systems presents significant challenges for ensuring security and privacy. As these systems can perceive, reason, and proactively execute complex goals with minimal human intervention, they expand the attack surface and introduce new vulnerabilities, such as cross-agent prompt injection and unauthorized data exfiltration. Privacy risks also escalate beyond model-centric issues to pipeline-centric exposures, involving persistent access to sensitive data and potential cross-jurisdictional compliance issues. Existing regulatory frameworks, like the EU AI Act, are struggling to keep pace with these rapid technical advancements, leading to conceptual ambiguities around key terms (e.g., security, privacy, agentic AI) and fragmented regulatory provisions. This ambiguity makes it difficult for practitioners to interpret obligations and can result in inconsistent compliance and regulatory gaps.
This paper addresses the identified regulatory gap by conducting a comprehensive review and analysis of 24 relevant European Union (EU) AI regulatory documents published between 2024 and 2025. The authors deconstruct and clarify critical definitions of privacy, security, and agentic AI, distinguishing them from closely related concepts to resolve regulatory ambiguities. They then synthesize these documents to articulate the current state of regulatory provisions specifically targeting agentic AI, mapping these provisions against the technical capabilities of autonomous agents to identify areas of robustness and gaps. Finally, the paper reflects on the existing provisions and extracts regulatory recommendations aimed at assisting policymakers, developers, and researchers in aligning security and privacy obligations with the realities of increasing algorithmic agency.
The review of 24 EU AI regulatory documents from 2024-2025 revealed that while general provisions for privacy and personal data protection exist for AI systems, and some dedicated security provisions are available for AI systems, Generative AI (GAI), and General-Purpose AI (GPAI), there is a significant lack of specific, contextualized provisions for Large Language Models (LLMs) and, critically, for agentic AI. Agentic AI was only formally mentioned in EU regulatory documents starting in October 2025. The analysis clarified the distinction between an AI "model" and an "AI system," highlighting different regulatory implications for providers. Despite the applicability of general provisions, their interpretation for agentic AI remains unclear, creating regulatory uncertainties. The paper found that security provisions for AI are generally risk-oriented, focusing on high-risk AI systems or systemic risks, but lack granular illustrations for different AI types.
This study highlights the urgent need for future regulatory efforts to provide more granular, differentiated, and contextualized privacy and security provisions for various types of AI, especially for the emerging field of agentic AI. The authors suggest that policymakers can learn from past experiences, such as the detailed interpretation of general regulations for specific areas (e.g., GDPR in smart grids), to develop dedicated expert groups and draft specific regulatory documents. Such efforts would significantly improve regulatory literacy, mitigate uncertainties for AI practitioners, and bridge the gap between abstract mandates and implementable controls, ultimately facilitating safer and more compliant development and deployment of autonomous AI systems.