All Tags
Browse through all available tags to find articles on topics that interest you.
Can We Improve Educational Diagram Generation with In-Context Examples? Not if a Hallucination Spoils the Bunch
This paper introduces and evaluates a novel Rhetorical Structure Theory (RST)-based in-context learning method to improve the quality of AI-generated educational diagrams, finding that while the method reduces hallucinations, LLMs still struggle with complex inputs and require careful application in educational contexts.
Dancing in Chains: Strategic Persuasion in Academic Rebuttal via Theory of Mind
This paper introduces RebuttalAgent, an AI framework that grounds academic rebuttal in Theory of Mind (ToM) to generate strategic and persuasive responses. It proposes a ToM-Strategy-Response (TSR) pipeline, supported by a large-scale synthetic dataset (RebuttalBench) and a specialized evaluation model (Rebuttal-RM), significantly outperforming existing models in automated and human evaluations.
Advancing credit mobility through stakeholder-informed AI design and adoption
This study develops a stakeholder-informed AI system to improve course articulation and credit transfer between colleges. By addressing stakeholder concerns about superficial matching and institutional biases, the proposed supervised alignment method achieves a significant accuracy improvement and projects a substantial increase in valid credit mobility opportunities for students.
Adapting Large Language Models to Low-Resource Tibetan: A Two-Stage Continual and Supervised Fine-Tuning Study
This paper introduces a two-stage approach for adapting Qwen2.5-3B to Tibetan, a low-resource language, using Continual Pretraining (CPT) for linguistic grounding and Supervised Fine-Tuning (SFT) for task specialization. The study demonstrates significant improvements in perplexity and translation quality, along with an in-depth analysis of parameter evolution during adaptation.
Tabular Data Understanding with LLMs: A Survey of Recent Advances and Challenges
This paper provides a comprehensive survey of recent advances and challenges in enabling large language models (LLMs) and multimodal LLMs (MLLMs) to understand and process tabular data. It introduces a taxonomy of tabular input representations and categorizes various table understanding tasks, highlighting critical research gaps and future opportunities in the field.
Large Language Models for Generative Information Extraction: A Survey
This survey comprehensively reviews the latest advancements in generative Information Extraction (IE) using Large Language Models (LLMs), categorizing methods by IE subtasks and techniques while identifying future research directions.