All Tags
Browse through all available tags to find articles on topics that interest you.
Showing 33 results for this tag.
CORE: Toward Ubiquitous 6G Intelligence Through Collaborative Orchestration of Large Language Model Agents Over Hierarchical Edge
CORE is a novel framework that orchestrates collaborative Large Language Model (LLM) agents across hierarchical 6G edge networks to enable ubiquitous intelligence. It addresses the challenge of fragmented edge resources by integrating real-time perception, dynamic role orchestration, and pipeline-parallel execution, significantly improving system efficiency and task completion across a range of 6G applications.
The Plausibility Trap: Using Probabilistic Engines for Deterministic Tasks
This paper defines the "Plausibility Trap," a phenomenon in which users over-rely on expensive probabilistic Large Language Models (LLMs) for simple deterministic tasks, leading to significant resource waste and risks such as algorithmic sycophancy. It introduces a framework for appropriate tool selection and advocates a curriculum shift in digital literacy.
A Universal Large Language Model -- Drone Command and Control Interface
This paper introduces a universal and versatile interface for controlling drones using large language models (LLMs) via the new Model Context Protocol (MCP) standard. It enables LLMs to command both real and simulated drones, dynamically integrating real-time situational data like maps for complex missions.
Agentic Artificial Intelligence (AI): Architectures, Taxonomies, and Evaluation of Large Language Model Agents
This paper provides a comprehensive review of Agentic AI, exploring the architectural shift from static text generation to autonomous systems that perceive, reason, plan, and act. It proposes a unified taxonomy and evaluates current practices, highlighting key challenges and future research directions for robust LLM agents.
H-AIM: Orchestrating LLMs, PDDL, and Behavior Trees for Hierarchical Multi-Robot Planning
H-AIM is a novel framework that integrates Large Language Models (LLMs), PDDL planning, and Behavior Trees to enable heterogeneous robot teams to perform complex, long-horizon tasks from high-level instructions. It significantly improves task success rates and collaborative robustness in multi-robot planning.
SynCraft: Guiding Large Language Models to Predict Edit Sequences for Molecular Synthesizability Optimization
SynCraft is a reasoning-based framework that uses Large Language Models to predict precise atom-level edit sequences for molecular synthesizability optimization. It addresses the critical bottleneck of generating synthetically inaccessible molecules in AI-driven drug discovery, outperforming existing methods in generating synthesizable analogs with high structural fidelity.
STARS: Semantic Tokens with Augmented Representations for Recommendation at Scale
STARS is a Transformer-based sequential recommendation framework designed for large-scale e-commerce. It addresses cold-start items, diverse user intent, and latency constraints by combining LLM-augmented item semantics, dual-memory user embeddings, context-aware scoring, and an efficient two-stage retrieval pipeline.
LexGenius: An Expert-Level Benchmark for Large Language Models in Legal General Intelligence
The paper introduces LexGenius, a comprehensive, expert-level Chinese legal benchmark designed to systematically evaluate the legal general intelligence of Large Language Models (LLMs). Using a multi-dimensional framework and a large dataset of carefully curated legal questions, it reveals significant gaps between LLMs and human legal professionals, particularly in areas requiring soft legal intelligence and nuanced judgment.
Nex-N1: Agentic Models Trained via a Unified Ecosystem for Large-Scale Environment Construction
The paper introduces a comprehensive method and ecosystem (NexAU, NexA4A, NexGAP) to overcome limitations in scaling interactive environments for training agentic Large Language Models (LLMs). This infrastructure enables the systematic generation of diverse, complex, and realistically grounded interaction trajectories for LLMs.
Spatially-Enhanced Retrieval-Augmented Generation for Walkability and Urban Discovery
This paper introduces WalkRAG, a spatial Retrieval-Augmented Generation (RAG) framework that leverages Large Language Models (LLMs) to recommend personalized and walkable urban itineraries. It addresses known LLM limitations in spatial reasoning and factual accuracy by integrating spatial and contextual urban knowledge for enhanced route generation and point-of-interest information retrieval.