All Tags
Browse through all available tags to find articles on topics that interest you.
Showing 9 results for this tag.
Interpretable Semantic Gradients in SSD: A PCA Sweep Approach and a Case Study on AI Discourse
This paper introduces a novel PCA sweep procedure for Supervised Semantic Differential (SSD), a method modeling how text meaning varies with individual differences. The sweep systematically selects the optimal number of PCA components to ensure interpretable and stable semantic gradients, illustrated through a case study examining how AI discourse varies with narcissism.
Transparent AI for Mathematics: Transformer-Based Large Language Models for Mathematical Entity Relationship Extraction with XAI
This study introduces a novel framework for Mathematical Entity Relation Extraction (MERE) using transformer-based models, achieving 99.39% accuracy with BERT. It integrates Explainable AI (XAI) via SHAP to enhance transparency, providing insights into feature importance and model behavior for improved mathematical text understanding.
Vichara: Appellate Judgment Prediction and Explanation for the Indian Judicial System
This paper introduces Vichara, a novel AI framework for predicting and explaining appellate judgments in the Indian judicial system. It utilizes decision point extraction and a structured explanation format to enhance accuracy and interpretability for legal professionals.
Can We Improve Educational Diagram Generation with In-Context Examples? Not if a Hallucination Spoils the Bunch
This paper introduces and evaluates a novel Rhetorical Structure Theory (RST)-based in-context learning method for improving the quality of AI-generated educational diagrams. While the method reduces hallucinations, LLMs still struggle with complex inputs and require careful application in educational contexts.
Dancing in Chains: Strategic Persuasion in Academic Rebuttal via Theory of Mind
This paper introduces RebuttalAgent, an AI framework that grounds academic rebuttal generation in Theory of Mind (ToM) to produce strategic, persuasive responses. It proposes a ToM-Strategy-Response (TSR) pipeline, supported by a large-scale synthetic dataset (RebuttalBench) and a specialized evaluation model (Rebuttal-RM), significantly outperforming existing models in both automated and human evaluations.
Advancing credit mobility through stakeholder-informed AI design and adoption
This study develops a stakeholder-informed AI system to improve course articulation and credit transfer between colleges. The supervised alignment method addresses stakeholder concerns about superficial matching and institutional bias, achieving a significant accuracy improvement and projecting a substantial increase in valid credit mobility opportunities for students.
Adapting Large Language Models to Low-Resource Tibetan: A Two-Stage Continual and Supervised Fine-Tuning Study
This paper introduces a two-stage approach for adapting Qwen2.5-3B to Tibetan, a low-resource language, using Continual Pretraining (CPT) for linguistic grounding and Supervised Fine-Tuning (SFT) for task specialization. The study demonstrates substantial reductions in perplexity and gains in translation quality, along with an in-depth analysis of parameter evolution during adaptation.
Tabular Data Understanding with LLMs: A Survey of Recent Advances and Challenges
This paper provides a comprehensive survey of recent advances and challenges in enabling large language models (LLMs) and multimodal LLMs (MLLMs) to understand and process tabular data. It introduces a taxonomy of tabular input representations and categorizes various table understanding tasks, highlighting critical research gaps and future opportunities in the field.
Large Language Models for Generative Information Extraction: A Survey
This survey comprehensively reviews the latest advancements in generative Information Extraction (IE) using Large Language Models (LLMs), categorizing methods by IE subtasks and techniques while identifying future research directions.