All Tags
Browse through all available tags to find articles on topics that interest you.
Showing 2 results for this tag.
AgentGuardian: Learning Access Control Policies to Govern AI Agent Behavior
AgentGuardian is a security framework that improves AI agent safety by automatically learning context-aware access control policies from benign execution traces. It enforces these policies at the tool-call level and validates execution-flow integrity, detecting malicious inputs and mitigating hallucination-driven errors.
Chameleon: Adaptive Adversarial Agents for Scaling-Based Visual Prompt Injection in Multimodal AI Systems
This paper introduces Chameleon, an adaptive adversarial framework that exploits image-downscaling vulnerabilities in Vision-Language Models (VLMs) to inject hidden malicious visual prompts. Using an iterative, feedback-driven optimization loop, Chameleon crafts imperceptible perturbations that hijack VLM execution and compromise agentic decision-making systems.