All Tags
Browse through all available tags to find articles on topics that interest you.
Showing 2 results.
Adapting Large Language Models to Low-Resource Tibetan: A Two-Stage Continual and Supervised Fine-Tuning Study
This paper introduces a two-stage approach for adapting Qwen2.5-3B to Tibetan, a low-resource language, using Continual Pretraining (CPT) for linguistic grounding and Supervised Fine-Tuning (SFT) for task specialization. The study demonstrates significant improvements in perplexity and translation quality and provides an in-depth analysis of how model parameters evolve during adaptation.
Large Language Models for Generative Information Extraction: A Survey
This survey comprehensively reviews the latest advancements in generative Information Extraction (IE) using Large Language Models (LLMs), categorizing methods by IE subtasks and techniques while identifying future research directions.