All Tags
Browse through all available tags to find articles on topics that interest you.
Showing 36 results for this tag.
Language Model Teams as Distributed Systems
This paper proposes viewing large language model (LLM) teams through the lens of distributed systems to create a principled framework for their design and evaluation. It reveals that many established advantages and challenges from distributed computing, such as scalability limits and coordination issues, directly apply to and explain the behavior of LLM teams.
From Thinker to Society: Security in Hierarchical Autonomy Evolution of AI Agents
This paper introduces the Hierarchical Autonomy Evolution (HAE) framework, a novel approach to categorizing security vulnerabilities in AI agents as they evolve from individual cognitive entities to collective societies. It details a taxonomy of threats across three levels of autonomy, highlighting critical research gaps and guiding the development of robust, multilayered defense architectures for trustworthy AI agent systems.
The Auton Agentic AI Framework
The Auton Agentic AI Framework introduces a principled architecture to bridge the gap between stochastic Large Language Model outputs and the deterministic requirements of backend systems, standardizing the creation, execution, and governance of autonomous agent systems. It achieves this through a declarative agent specification, hierarchical memory, built-in safety mechanisms, and runtime optimizations for improved reliability and performance.
CORE: Toward Ubiquitous 6G Intelligence Through Collaborative Orchestration of Large Language Model Agents Over Hierarchical Edge
CORE is a novel framework that orchestrates collaborative Large Language Model (LLM) agents across hierarchical 6G edge networks to enable ubiquitous intelligence. It addresses the challenges of fragmented resources by integrating real-time perception, dynamic role orchestration, and pipeline-parallel execution, significantly enhancing system efficiency and task completion in various 6G applications.
A Universal Large Language Model - Drone Command and Control Interface
This paper introduces a universal and versatile interface for controlling drones using large language models (LLMs) via the new Model Context Protocol (MCP) standard. It enables LLMs to command both real and simulated drones, dynamically integrating real-time situational data like maps for complex missions.
The Plausibility Trap: Using Probabilistic Engines for Deterministic Tasks
This paper defines the "Plausibility Trap," a phenomenon in which users over-rely on expensive probabilistic Large Language Models (LLMs) for simple deterministic tasks, leading to significant resource waste and risks such as algorithmic sycophancy. It introduces a framework for appropriate tool selection and advocates a curriculum shift in digital literacy.
Agentic Artificial Intelligence (AI): Architectures, Taxonomies, and Evaluation of Large Language Model Agents
This paper provides a comprehensive review of Agentic AI, exploring the architectural shift from static text generation to autonomous systems that perceive, reason, plan, and act. It proposes a unified taxonomy and evaluates current practices, highlighting key challenges and future research directions for robust LLM agents.
H-AIM: Orchestrating LLMs, PDDL, and Behavior Trees for Hierarchical Multi-Robot Planning
H-AIM is a novel framework that integrates Large Language Models (LLMs), PDDL planning, and Behavior Trees to enable heterogeneous robot teams to perform complex, long-horizon tasks from high-level instructions. It significantly improves task success rates and collaborative robustness in multi-robot planning.
SynCraft: Guiding Large Language Models to Predict Edit Sequences for Molecular Synthesizability Optimization
SynCraft is a reasoning-based framework that uses Large Language Models to predict precise atom-level edit sequences for molecular synthesizability optimization. It addresses the critical bottleneck of generating synthetically inaccessible molecules in AI-driven drug discovery, outperforming existing methods in generating synthesizable analogs with high structural fidelity.
STARS: Semantic Tokens with Augmented Representations for Recommendation at Scale
STARS is a Transformer-based sequential recommendation framework designed for large-scale e-commerce. It addresses cold-start items, diverse user intent, and latency constraints by combining LLM-augmented item semantics, dual-memory user embeddings, context-aware scoring, and an efficient two-stage retrieval pipeline.