All Tags
Browse through all available tags to find articles on topics that interest you.
Showing 6 results for this tag.
Agentic Artificial Intelligence (AI): Architectures, Taxonomies, and Evaluation of Large Language Model Agents
This paper provides a comprehensive review of Agentic AI, exploring the architectural shift from static text generation to autonomous systems that perceive, reason, plan, and act. It proposes a unified taxonomy and evaluates current practices, highlighting key challenges and future research directions for robust LLM agents.
Toward Ultra-Long-Horizon Agentic Science: Cognitive Accumulation for Machine Learning Engineering
This paper introduces ML-Master 2.0, an autonomous agent designed for ultra-long-horizon machine learning engineering. It leverages Hierarchical Cognitive Caching (HCC) to manage context through cognitive accumulation, achieving a state-of-the-art 56.44% medal rate on OpenAI's MLE-Bench.
Agentic AI-Enhanced Semantic Communications: Foundations, Architecture, and Applications
This paper systematically explores how agentic AI, with its perception, memory, reasoning, and action capabilities, enhances semantic communications for 6G networks. It proposes a unified framework and demonstrates its effectiveness through a case study on joint source-channel coding built on an agentic knowledge base, showing improved information reconstruction quality.
Architectures for Building Agentic AI
This chapter surveys architectural choices for building reliable agentic AI systems, arguing that reliability is primarily an architectural property derived from system decomposition, interface enforcement, and control loops. It explores various design patterns and engineering practices crucial for dependable autonomous systems.
Nex-N1: Agentic Models Trained via a Unified Ecosystem for Large-Scale Environment Construction
This paper introduces a unified method and ecosystem (NexAU, NexA4A, NexGAP) that overcomes limitations in scaling interactive environments for training agentic large language models (LLMs). The infrastructure enables the systematic generation of diverse, complex, and realistically grounded interaction trajectories for LLMs.
David vs. Goliath: Can Small Models Win Big with Agentic AI in Hardware Design?
This paper explores whether small language models (SLMs), when integrated into sophisticated agentic AI frameworks, can achieve performance comparable to large language models (LLMs) for hardware design tasks. It demonstrates that strategic task decomposition and iterative refinement enable SLMs to offer significant efficiency and cost advantages without sacrificing quality, challenging the notion that bigger models are always better.