All Tags
Browse through all available tags to find articles on topics that interest you.
Showing 2 results for this tag.
Every Picture Tells a Dangerous Story: Memory-Augmented Multi-Agent Jailbreak Attacks on VLMs
This paper introduces MemJack, a memory-augmented multi-agent framework designed to systematically expose visual-semantic vulnerabilities in Vision-Language Models (VLMs). It orchestrates automated jailbreak attacks using unmodified natural images, dynamically mapping visual entities to malicious intents, and leverages a persistent memory to transfer successful attack strategies across different images.
Chameleon: Adaptive Adversarial Agents for Scaling-Based Visual Prompt Injection in Multimodal AI Systems
This paper introduces Chameleon, an adaptive adversarial framework that exploits image downscaling vulnerabilities in Vision-Language Models (VLMs) to inject hidden malicious visual prompts. By employing an iterative, feedback-driven optimization mechanism, Chameleon can craft imperceptible perturbations that hijack VLM execution and compromise agentic decision-making systems.