All Tags
Browse through all available tags to find articles on topics that interest you.
Showing 10 results for this tag.
PONTE: Personalized Orchestration for Natural Language Trustworthy Explanations
This paper introduces PONTE, a human-in-the-loop framework for generating personalized and trustworthy natural language explanations from AI systems. It employs a closed-loop validation and adaptation process to ensure faithfulness, completeness, and stylistic alignment with user preferences, mitigating common issues associated with large language models in Explainable AI.
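The closed-loop idea is easy to picture in code. The sketch below is a minimal illustration, assuming a toy explanation generator and a keyword-overlap validator as stand-ins for PONTE's actual components; it only shows the validate-then-adapt cycle the summary describes.

```python
"""Minimal sketch of a closed-loop validation/adaptation cycle for natural
language explanations. The generator and validator are toy stand-ins,
not PONTE's published components."""

def generate(prompt: str) -> str:
    # Stand-in for an LLM call; a real system would query a language model.
    return f"The model predicted 'spam' because the terms [{prompt}] dominated."

def faithfulness(explanation: str, evidence: list[str]) -> float:
    # Toy validator: fraction of true evidence terms the explanation mentions.
    hits = sum(term in explanation for term in evidence)
    return hits / len(evidence)

def closed_loop_explain(evidence: list[str], threshold: float = 0.99,
                        max_rounds: int = 3) -> str:
    prompt = ", ".join(evidence[:1])        # start from partial evidence
    explanation = generate(prompt)
    for _ in range(max_rounds):
        if faithfulness(explanation, evidence) >= threshold:
            break                           # explanation passes validation
        prompt = ", ".join(evidence)        # adapt: expose the missing evidence
        explanation = generate(prompt)
    return explanation

print(closed_loop_explain(["FREE", "click now", "winner"]))
```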
Transparent AI for Mathematics: Transformer-Based Large Language Models for Mathematical Entity Relationship Extraction with XAI
This study introduces a novel framework for Mathematical Entity Relation Extraction (MERE) using transformer-based models, achieving 99.39% accuracy with BERT. It integrates Explainable AI (XAI) via SHAP to enhance transparency, providing insights into feature importance and model behavior for improved mathematical text understanding.
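The SHAP-over-transformer step generalizes beyond this paper. A minimal sketch, using a public sentiment checkpoint as a placeholder since the paper's MERE model is not reproduced here:

```python
# Token-level SHAP attributions for a transformer text classifier.
import shap
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # placeholder checkpoint
    return_all_scores=True,  # SHAP's pipeline wrapper expects scores for every class
)

explainer = shap.Explainer(classifier)  # SHAP auto-wraps Hugging Face pipelines
shap_values = explainer(["The derivative of a polynomial is again a polynomial."])

# Per-token contributions to each class score; shap.plots.text(shap_values[0])
# renders them as a highlighted sentence.
print(shap_values[0].values.shape)
```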
Fusion-CAM: Integrating Gradient and Region-Based Class Activation Maps for Robust Visual Explanations
Fusion-CAM is a novel framework that unifies gradient-based and region-based Class Activation Map (CAM) methods through a dedicated fusion mechanism. It aims to provide robust and highly discriminative visual explanations by first denoising gradient-based maps and then adaptively combining them with region-based maps to enhance class coverage and precision, outperforming existing CAM variants.
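Fusion-CAM's exact fusion rule is not given in this summary, but the denoise-then-blend pattern can be sketched generically. Everything below (the Gaussian smoothing, the noise floor, the convex blend weight) is an illustrative assumption, not the published algorithm.

```python
# Generic denoise-then-fuse step for two saliency maps of the same spatial size.
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize(m: np.ndarray) -> np.ndarray:
    m = m - m.min()
    return m / (m.max() + 1e-8)

def fuse_cams(grad_cam, region_cam, sigma=1.0, noise_floor=0.2, alpha=0.5):
    g = gaussian_filter(normalize(grad_cam), sigma=sigma)  # smooth gradient noise
    g = np.where(g < noise_floor, 0.0, g)                  # drop weak activations
    r = normalize(region_cam)
    fused = alpha * g + (1.0 - alpha) * r                  # convex blend; alpha is a free weight
    return normalize(fused)

fused = fuse_cams(np.random.rand(14, 14), np.random.rand(14, 14))
```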
On the Explainability of Vision-Language Models in Art History
This paper investigates the effectiveness of Explainable Artificial Intelligence (XAI) methods in making Vision-Language Models (VLMs), specifically CLIP, interpretable within art-historical contexts. It evaluates seven XAI methods through zero-shot localization experiments and human interpretability studies, concluding that their effectiveness depends on the conceptual stability and representational availability of the examined categories.
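A crude zero-shot localization probe with CLIP can be sketched as sliding-window scoring. The paper's seven XAI methods are far more principled than this; the checkpoint and image path below are placeholders.

```python
# Score sliding-window crops against a text prompt with CLIP; the resulting
# grid of similarities serves as a coarse localization heat map.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def localization_map(image, prompt, window=112, stride=56):
    """Higher score = stronger image-text match for that crop."""
    rows = []
    for top in range(0, image.height - window + 1, stride):
        row = []
        for left in range(0, image.width - window + 1, stride):
            crop = image.crop((left, top, left + window, top + window))
            inputs = processor(text=[prompt], images=crop,
                               return_tensors="pt", padding=True)
            with torch.no_grad():
                row.append(model(**inputs).logits_per_image.item())
        rows.append(row)
    return torch.tensor(rows)

# "painting.jpg" is a placeholder path for an artwork image.
heat = localization_map(Image.open("painting.jpg").convert("RGB"),
                        "a halo around a saint's head")
```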
Defining Explainable AI for Requirements Analysis
This paper proposes a novel three-dimensional framework—Source, Depth, and Scope—for categorizing the explanatory requirements of AI applications. This framework aims to standardize the definition of explainable AI, helping to match specific application needs with the capabilities of different machine learning techniques, thereby building trust in AI systems.
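One way to make the three dimensions concrete is as a typed requirement record. The dimension names (Source, Depth, Scope) come from the paper; the enum values below are hypothetical placeholders, not the paper's actual categories.

```python
# Illustrative encoding of a three-dimensional explainability requirement.
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    MODEL_INTERNALS = "model_internals"   # hypothetical value, e.g. attention, gradients
    POST_HOC = "post_hoc"                 # hypothetical value, e.g. surrogate explainers

class Depth(Enum):
    SHALLOW = "feature_importance"        # hypothetical value
    DEEP = "causal_mechanism"             # hypothetical value

class Scope(Enum):
    LOCAL = "single_prediction"           # hypothetical value
    GLOBAL = "whole_model"                # hypothetical value

@dataclass(frozen=True)
class ExplanationRequirement:
    source: Source
    depth: Depth
    scope: Scope

# A loan-approval system might require local, post-hoc feature attributions:
req = ExplanationRequirement(Source.POST_HOC, Depth.SHALLOW, Scope.LOCAL)
```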
Explainable deep-learning detection of microplastic fibers via polarization-resolved holographic microscopy
This paper introduces an explainable deep-learning framework for the accurate classification of microplastic and natural microfibers using polarization-resolved digital holographic microscopy. By extracting 72 polarization-based features and employing a deep neural network with SHAP analysis, the method achieves high accuracy and identifies key optical properties for distinguishing fiber types.
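The SHAP step over a 72-feature classifier can be sketched model-agnostically. The classifier, data, and class labels below are synthetic placeholders; the paper's network and polarization features are not reproduced here.

```python
# Rank 72 input features by mean |SHAP| attribution for one class.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 72))                 # placeholder for the 72 features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic "plastic vs natural" label

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)

def f(data):
    return clf.predict_proba(data)[:, 1]       # probability of the "microplastic" class

explainer = shap.Explainer(f, X[:100])         # permutation-based, model-agnostic SHAP
sv = explainer(X[:8])

importance = np.abs(sv.values).mean(axis=0)
print(np.argsort(importance)[::-1][:5])        # indices of the 5 most influential features
```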
Evaluating the Ability of Explanations to Disambiguate Models in a Rashomon Set
This paper introduces three principles for evaluating feature-importance explanations and proposes AXE, a novel framework designed to accurately differentiate models within a Rashomon set. AXE effectively detects adversarial fairwashing, where discriminatory model behaviors are intentionally masked by misleading explanations, outperforming existing evaluation metrics.
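AXE's own procedure is not reproduced here, but the underlying Rashomon problem is easy to demonstrate: two models with near-identical accuracy whose feature importances may disagree. A sketch with two deliberately redundant features:

```python
# Two near-equally-accurate models over redundant features: the setting in
# which explanations must do the disambiguating.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
x0 = rng.normal(size=(1000, 1))
X = np.hstack([x0, x0 + 0.05 * rng.normal(size=(1000, 1))])  # two redundant features
y = (x0[:, 0] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(), RandomForestClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    imp = permutation_importance(model, X_te, y_te, random_state=0)
    print(type(model).__name__, f"acc={model.score(X_te, y_te):.3f}",
          "importances=", np.round(imp.importances_mean, 3))
# Both models score near 1.0, yet they can split credit between the redundant
# features differently -- the ambiguity a Rashomon-set evaluation must resolve.
```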
A Novel Approach to Explainable AI with Quantized Active Ingredients in Decision Making
This paper introduces an explainable AI framework comparing Quantum Boltzmann Machines (QBMs) and Classical Boltzmann Machines (CBMs) to enhance transparency in decision-making. By leveraging quantum principles for richer latent representations and focused feature attributions, the framework demonstrates improved accuracy and clearer identification of key influential features, advancing trustworthy AI systems.
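Only the classical half lends itself to a short sketch. Below, a Bernoulli restricted Boltzmann machine stands in for the CBM, with its weight matrix read as a rough feature-attribution proxy; the QBM side requires quantum hardware or a simulator and is not shown.

```python
# Classical baseline only: a restricted Boltzmann machine on binary data,
# inspecting which inputs most strongly drive each latent unit.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = (rng.random((500, 16)) > 0.5).astype(float)  # placeholder binary decision data
X[:, 0] = X[:, 1]                                # plant one dependency to discover

rbm = BernoulliRBM(n_components=4, learning_rate=0.05, n_iter=30, random_state=0)
rbm.fit(X)

# components_ has shape (n_components, n_features): large |weight| marks the
# input features most strongly tied to each latent unit.
for k, w in enumerate(rbm.components_):
    print(f"latent unit {k}: top features {np.argsort(np.abs(w))[::-1][:3]}")
```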
Explainable AI: Learning from the Learners
This perspective paper argues that Explainable AI (XAI) combined with causal reasoning is essential for extracting scientific insights from complex AI models that often outperform human capabilities. It proposes XAI as a unifying framework to foster human-AI collaboration across scientific discovery, engineering optimization, and system certification.
Theory of Mind for Explainable Human-Robot Interaction
This paper proposes considering Theory of Mind (ToM) in Human-Robot Interaction (HRI) as a form of Explainable AI (XAI) and evaluates existing ToM studies using an XAI framework, identifying gaps in assessing explanation fidelity. It advocates for an integrated approach combining ToM's user focus with XAI's technical rigor.