All Tags
Browse through all available tags to find articles on topics that interest you.
PREF-XAI: Preference-Based Personalized Rule Explanations of Black-Box Machine Learning Models
This paper introduces PREF-XAI, a novel approach for generating personalized, rule-based explanations of black-box machine learning models. It reframes explanation as a preference-driven decision problem, learning individual user preferences through robust ordinal regression to tailor explanations.
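As a rough illustration of the robust-ordinal-regression idea behind PREF-XAI, the sketch below checks whether one candidate explanation is necessarily preferred to another under every additive linear value function compatible with a user's stated pairwise preferences. The linear value-function form, the epsilon margin, and all names are simplifying assumptions for illustration, not the paper's actual model.

```python
import numpy as np
from scipy.optimize import linprog

def necessarily_preferred(item_c, item_d, preferences, eps=1e-3):
    """True if item_c beats item_d under *every* additive linear value
    function w (w >= 0, sum w = 1) compatible with the user's stated
    pairwise preferences [(a, b), ...], each meaning 'a preferred to b'."""
    item_c, item_d = np.asarray(item_c, float), np.asarray(item_d, float)
    n = item_c.size
    # Each stated preference a > b becomes w.(a - b) >= eps,
    # i.e. the linprog constraint (b - a).w <= -eps.
    A_ub = np.array([np.asarray(b, float) - np.asarray(a, float)
                     for a, b in preferences])
    b_ub = np.full(len(preferences), -eps)
    # Minimize w.(c - d); if even the minimum is >= 0, the preference
    # holds for all compatible value functions ("necessary" preference).
    res = linprog(item_c - item_d, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=[(0, None)] * n)
    return bool(res.success) and res.fun >= -1e-9

# Toy usage: explanations described by (brevity, fidelity, coverage).
prefs = [((0.9, 0.2, 0.5), (0.1, 0.3, 0.5))]  # user liked the briefer one
print(necessarily_preferred((0.8, 0.4, 0.6), (0.2, 0.4, 0.6), prefs))
```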
Dual-Modal Lung Cancer AI: Interpretable Radiology and Microscopy with Clinical Risk Integration
This study introduces a dual-modal AI framework combining CT radiology and H&E microscopy with clinical data for improved lung cancer diagnosis and subtype classification. The system demonstrates high accuracy and interpretability, offering a more robust and transparent approach to overcome the limitations of single-modality diagnostic methods.
A Two-Stage LLM Framework for Accessible and Verified XAI Explanations
Current methods using LLMs to translate technical XAI outputs into natural language often lack guarantees of accuracy and completeness. This paper introduces a Two-Stage LLM Meta-Verification Framework that employs an Explainer LLM for generating explanations and a Verifier LLM to assess and refine them iteratively, significantly enhancing the trustworthiness and accessibility of XAI.
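The loop below is a minimal sketch of the two-stage idea: an Explainer LLM drafts an explanation and a Verifier LLM iteratively critiques it until it clears a faithfulness/completeness check. The `explainer` and `verifier` callables, the dict-shaped verdict, and the scoring threshold are illustrative assumptions, not the paper's actual protocol.

```python
def verified_explanation(xai_output, explainer, verifier,
                         max_rounds=3, threshold=0.9):
    """Draft an accessible explanation, then iterate: the verifier scores
    it for faithfulness/completeness and the explainer revises until the
    score clears the threshold or the round budget is spent."""
    explanation = explainer(f"Explain for a lay audience:\n{xai_output}")
    for _ in range(max_rounds):
        # Verdict format is an assumption: {"score": float, "issues": str}.
        verdict = verifier(
            f"Source XAI output:\n{xai_output}\n\n"
            f"Candidate explanation:\n{explanation}\n\n"
            "Score faithfulness and completeness in [0, 1] and list issues."
        )
        if verdict["score"] >= threshold:
            break  # accepted by the meta-verification stage
        # Feed the verifier's critique back to the explainer for revision.
        explanation = explainer(
            f"Revise to fix these issues:\n{verdict['issues']}\n\n"
            f"Current explanation:\n{explanation}"
        )
    return explanation
```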
Deep learning of committor and explainable artificial intelligence analysis for identifying reaction coordinates
This review introduces a framework that combines deep learning with committor analysis and explainable AI (XAI) to systematically identify reaction coordinates in complex molecular systems. The approach enables the quantitative assessment of individual input variable contributions, enhancing the interpretability of molecular transition pathways.
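One common XAI analysis in this setting is input-gradient sensitivity of the learned committor. The sketch below uses PyTorch with an untrained toy network and synthetic configurations (both assumptions for illustration) to show how per-coordinate gradients can rank candidate collective variables by their contribution to the reaction coordinate.

```python
import torch
import torch.nn as nn

# Toy committor network over 10 candidate collective variables (CVs);
# in practice `net` would be trained on transition-path sampling data.
net = nn.Sequential(nn.Linear(10, 64), nn.Tanh(),
                    nn.Linear(64, 1), nn.Sigmoid())  # p_B(x) in (0, 1)

x = torch.randn(128, 10, requires_grad=True)  # sampled configurations
net(x).sum().backward()

# Mean |dp_B/dx_i| over the batch: CVs with large sensitivity dominate
# the committor and are strong reaction-coordinate candidates.
sensitivity = x.grad.abs().mean(dim=0)
print(sensitivity.argsort(descending=True))  # CVs ranked by influence
```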
PONTE: Personalized Orchestration for Natural Language Trustworthy Explanations
This paper introduces PONTE, a human-in-the-loop framework for generating personalized and trustworthy natural language explanations from AI systems. It employs a closed-loop validation and adaptation process to ensure faithfulness, completeness, and stylistic alignment with user preferences, mitigating common issues associated with large language models in Explainable AI.
Transparent AI for Mathematics: Transformer-Based Large Language Models for Mathematical Entity Relationship Extraction with XAI
This study introduces a novel framework for Mathematical Entity Relation Extraction (MERE) using transformer-based models, achieving 99.39% accuracy with BERT. It integrates Explainable AI (XAI) via SHAP to enhance transparency, providing insights into feature importance and model behavior for improved mathematical text understanding.
Fusion-CAM: Integrating Gradient and Region-Based Class Activation Maps for Robust Visual Explanations
Fusion-CAM is a novel framework that unifies gradient-based and region-based Class Activation Map (CAM) methods through a dedicated fusion mechanism. It first denoises gradient-based maps and then adaptively combines them with region-based maps, yielding robust, highly discriminative visual explanations with improved class coverage and precision; it is reported to outperform existing CAM variants.
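The sketch below illustrates one plausible reading of that pipeline: Gaussian denoising of the gradient-based map followed by a confidence-weighted, pixel-wise blend with the region-based map. The specific filter and weighting rule are stand-ins for the paper's dedicated fusion mechanism, not its exact method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_cams(grad_cam, region_cam, sigma=2.0):
    """Fuse an HxW gradient-based CAM with an HxW region-based CAM
    (both assumed pre-scaled to [0, 1])."""
    # Step 1: denoise the gradient map (Gaussian smoothing as a simple
    # stand-in for the paper's denoising step).
    denoised = gaussian_filter(grad_cam, sigma=sigma)
    # Step 2: pixel-wise adaptive weight -- lean on whichever map is more
    # confident at each location (a stand-in for the learned fusion rule).
    w = denoised / (denoised + region_cam + 1e-8)
    fused = w * denoised + (1.0 - w) * region_cam
    # Rescale to [0, 1] for visualization.
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)
```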
On the Explainability of Vision-Language Models in Art History
This paper investigates the effectiveness of Explainable Artificial Intelligence (XAI) methods in making Vision-Language Models (VLMs), specifically CLIP, interpretable within art-historical contexts. It evaluates seven XAI methods through zero-shot localization experiments and human interpretability studies, concluding that their effectiveness depends on the conceptual stability and representational availability of the examined categories.
Defining Explainable AI for Requirements Analysis
This paper proposes a novel three-dimensional framework—Source, Depth, and Scope—for categorizing the explanatory requirements of AI applications. This framework aims to standardize the definition of explainable AI, helping to match specific application needs with the capabilities of different machine learning techniques, thereby building trust in AI systems.
Explainable deep-learning detection of microplastic fibers via polarization-resolved holographic microscopy
This paper introduces an explainable deep-learning framework for the accurate classification of microplastic and natural microfibers using polarization-resolved digital holographic microscopy. By extracting 72 polarization-based features and employing a deep neural network with SHAP analysis, the method achieves high accuracy and identifies key optical properties for distinguishing fiber types.
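For readers unfamiliar with the attribution step, the sketch below runs model-agnostic SHAP over a classifier on 72 tabular features and ranks them by mean absolute SHAP value. The sklearn MLP, synthetic data, and labels are illustrative assumptions; only the 72-feature setup mirrors the summary above.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 72))            # 72 polarization-based features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labels: plastic vs. natural

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)

# Model-agnostic KernelExplainer on the positive-class probability;
# a small background set keeps the estimation tractable.
f = lambda data: model.predict_proba(data)[:, 1]
explainer = shap.KernelExplainer(f, X[:50])
shap_values = explainer.shap_values(X[:5])  # shape: (5, 72)

# Rank features by mean |SHAP| to see which optical properties dominate.
print(np.argsort(np.abs(shap_values).mean(axis=0))[::-1][:10])
```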