All Tags
Browse through all available tags to find articles on topics that interest you.
Showing 9 results for this tag.
Agentic AI-Driven UAV Network Deployment: A LLM-Enhanced Exact Potential Game Approach
This paper proposes an Agentic AI-driven dual spatial-scale optimization framework for deploying Unmanned Aerial Vehicle Networks (UAVNs). It leverages exact potential games (EPGs) at different spatial scales and integrates a large language model (LLM) to enhance utility weight generation, yielding improvements in energy efficiency, latency, and throughput.
The Auton Agentic AI Framework
The Auton Agentic AI Framework introduces a principled architecture to bridge the gap between stochastic Large Language Model outputs and the deterministic requirements of backend systems, standardizing the creation, execution, and governance of autonomous agent systems. It achieves this through a declarative agent specification, hierarchical memory, built-in safety mechanisms, and runtime optimizations for improved reliability and performance.
Agentic AI for Intent-driven Optimization in Cell-free O-RAN
This paper proposes an agentic AI framework for intent-driven optimization in cell-free Open Radio Access Networks (O-RAN), in which LLM-based agents collaborate to translate operator intents into network optimizations. The framework achieves significant reductions in the number of active O-RUs for energy saving, and lowers memory usage through parameter-efficient fine-tuning.
Agentic Artificial Intelligence (AI): Architectures, Taxonomies, and Evaluation of Large Language Model Agents
This paper provides a comprehensive review of Agentic AI, exploring the architectural shift from static text generation to autonomous systems that perceive, reason, plan, and act. It proposes a unified taxonomy and evaluates current practices, highlighting key challenges and future research directions for robust LLM agents.
Toward Ultra-Long-Horizon Agentic Science: Cognitive Accumulation for Machine Learning Engineering
This paper introduces ML-Master 2.0, an autonomous agent designed for ultra-long-horizon machine learning engineering. It leverages Hierarchical Cognitive Caching (HCC) to manage context through cognitive accumulation, achieving a state-of-the-art 56.44% medal rate on OpenAI's MLE-Bench.
Agentic AI-Enhanced Semantic Communications: Foundations, Architecture, and Applications
This paper systematically explores how agentic AI, with its perception, memory, reasoning, and action capabilities, enhances semantic communications for 6G networks. It proposes a unified framework and demonstrates its effectiveness through an agentic knowledge base-based joint source-channel coding case study, showing improved information reconstruction quality.
Architectures for Building Agentic AI
This chapter surveys architectural choices for building reliable agentic AI systems, arguing that reliability is primarily an architectural property derived from system decomposition, interface enforcement, and control loops. It explores various design patterns and engineering practices crucial for dependable autonomous systems.
Nex-N1: Agentic Models Trained via a Unified Ecosystem for Large-Scale Environment Construction
The paper introduces a comprehensive method and ecosystem (NexAU, NexA4A, NexGAP) to overcome limitations in scaling interactive environments for training agentic Large Language Models (LLMs). This infrastructure enables the systematic generation of diverse, complex, and realistically grounded interaction trajectories for LLMs.
David vs. Goliath: Can Small Models Win Big with Agentic AI in Hardware Design?
This paper explores whether small language models (SLMs), when integrated into sophisticated agentic AI frameworks, can achieve performance comparable to large language models (LLMs) for hardware design tasks. It demonstrates that strategic task decomposition and iterative refinement enable SLMs to offer significant efficiency and cost advantages without sacrificing quality, challenging the notion that bigger models are always better.