Artificial Intelligence for Detecting Fetal Orofacial Clefts and Advancing Medical Education
This paper introduces an artificial intelligence system (AIOC) designed to accurately detect fetal orofacial clefts from ultrasound images and simultaneously enhance the expertise of radiologists. Trained on a large, multi-center dataset, AIOC demonstrates expert-level diagnostic performance and significantly improves junior radiologists' sensitivity, offering a scalable solution for early diagnosis and medical education in resource-limited settings.
PONTE: Personalized Orchestration for Natural Language Trustworthy Explanations
This paper introduces PONTE, a human-in-the-loop framework for generating personalized and trustworthy natural language explanations from AI systems. It employs a closed-loop validation and adaptation process to ensure faithfulness, completeness, and stylistic alignment with user preferences, mitigating the unfaithful or incomplete explanations that large language models commonly produce in Explainable AI.
How students use generative AI for computational modeling in physics
This paper investigates how physics students use generative AI (genAI) for computational modeling in open-ended assignments, showing that it shapes how they plan, implement, and debug code. It highlights both the efficiency gains and the risks to learning when students over-rely on genAI without critically verifying its output.
AI End-to-End Radiation Treatment Planning Under One Second
This paper introduces AIRT, an end-to-end deep-learning framework that generates deliverable single-arc VMAT prostate treatment plans in less than one second. The method achieves plan quality comparable to clinical systems, significantly enhancing the efficiency and consistency of radiation therapy planning.
Transparent AI for Mathematics: Transformer-Based Large Language Models for Mathematical Entity Relationship Extraction with XAI
This study introduces a novel framework for Mathematical Entity Relation Extraction (MERE) using transformer-based models, achieving 99.39% accuracy with BERT. It integrates Explainable AI (XAI) via SHAP to enhance transparency, providing insights into feature importance and model behavior for improved mathematical text understanding.