All Tags
Browse through all available tags to find articles on topics that interest you.
Showing 6 results for this tag.
Explainable deep-learning detection of microplastic fibers via polarization-resolved holographic microscopy
This paper introduces an explainable deep-learning framework for the accurate classification of microplastic and natural microfibers using polarization-resolved digital holographic microscopy. By extracting 72 polarization-based features and employing a deep neural network with SHAP analysis, the method achieves high accuracy and identifies key optical properties for distinguishing fiber types.
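As a rough illustration of the explanation step named above, here is a minimal sketch of running SHAP over a generic neural classifier. Everything concrete in it is an assumption — the synthetic data, the model architecture, and the probability wrapper are hypothetical stand-ins — with only the 72-feature count taken from the summary.

```python
# Hedged sketch: SHAP feature attribution for a generic classifier over
# stand-in "polarization features". Not the paper's actual pipeline.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_features = 72                                  # feature count per the summary
X = rng.normal(size=(500, n_features))           # synthetic polarization features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic fiber labels

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                      random_state=0).fit(X, y)

# Model-agnostic KernelExplainer over P(class = "microplastic"); a small
# background sample keeps the estimation tractable.
f = lambda Z: model.predict_proba(Z)[:, 1]
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(f, background)
shap_values = explainer.shap_values(X[:5])       # (5, 72) attribution matrix

# Rank features by mean |SHAP| to surface the most discriminative ones.
importance = np.abs(shap_values).mean(axis=0)
print("top features by mean |SHAP|:", np.argsort(importance)[::-1][:5])
```

KernelExplainer is model-agnostic, so the same pattern applies regardless of which network architecture the authors actually used.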
Evaluating the Ability of Explanations to Disambiguate Models in a Rashomon Set
This paper introduces three principles for evaluating feature-importance explanations and proposes AXE, a novel framework designed to accurately differentiate models within a Rashomon set. AXE effectively detects adversarial fairwashing, where discriminatory model behaviors are intentionally masked by misleading explanations, outperforming existing evaluation metrics.
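AXE itself is not specified in this summary, but the Rashomon-set premise it addresses is easy to demonstrate: two models with near-identical test accuracy can spread their reliance differently across redundant features, so a faithful explanation should be able to tell them apart. The data, models, and use of permutation importance below are illustrative assumptions, not AXE's actual procedure.

```python
# Hedged sketch of the Rashomon phenomenon: equal-accuracy models whose
# feature-importance profiles can nonetheless differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
x0 = rng.normal(size=n)
x1 = x0 + rng.normal(scale=0.1, size=n)          # near-duplicate of x0
X = np.column_stack([x0, x1, rng.normal(size=n)])
y = (x0 > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "logreg": LogisticRegression().fit(X_tr, y_tr),
    "forest": RandomForestClassifier(random_state=0).fit(X_tr, y_tr),
}
for name, m in models.items():
    acc = m.score(X_te, y_te)
    imp = permutation_importance(m, X_te, y_te,
                                 random_state=0).importances_mean
    print(f"{name}: acc={acc:.3f} importances={np.round(imp, 3)}")
# Accuracies come out nearly equal (a Rashomon pair), yet the weight each
# model places on the redundant pair x0/x1 can differ — which is exactly
# what an explanation-evaluation framework like AXE must detect.
```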
A Novel Approach to Explainable AI with Quantized Active Ingredients in Decision Making
This paper introduces an explainable AI framework that compares Quantum Boltzmann Machines (QBMs) with Classical Boltzmann Machines (CBMs) to enhance transparency in decision-making. By leveraging quantum principles to obtain richer latent representations and more focused feature attributions, the framework demonstrates improved accuracy and clearer identification of the most influential features, advancing trustworthy AI systems.
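The quantum side requires quantum sampling hardware or simulators, so the only part that fits in a short sketch is the classical baseline. Below is a minimal restricted Boltzmann machine (the standard tractable variant of a classical BM) trained with one-step contrastive divergence (CD-1); the sizes, data, and training details are assumptions, and the paper's QBM and its attribution scheme are not reproduced.

```python
# Hedged sketch: a classical restricted Boltzmann machine with CD-1,
# standing in for the CBM side of the comparison. All sizes and data
# here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 16, 8, 0.05
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)                        # visible biases
b_h = np.zeros(n_hidden)                         # hidden biases

data = (rng.random((200, n_visible)) > 0.5).astype(float)  # toy binary data

for epoch in range(50):
    for v0 in data:
        # Positive phase: sample hidden units conditioned on the data.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # Negative phase (CD-1): one reconstruction step.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)
        # Contrastive-divergence parameter updates.
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)
```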
Explainable AI: Learning from the Learners
This perspective paper argues that Explainable AI (XAI) combined with causal reasoning is essential for extracting scientific insights from complex AI models that often outperform human capabilities. It proposes XAI as a unifying framework to foster human-AI collaboration across scientific discovery, engineering optimization, and system certification.
Theory of Mind for Explainable Human-Robot Interaction
This paper proposes considering Theory of Mind (ToM) in Human-Robot Interaction (HRI) as a form of Explainable AI (XAI) and evaluates existing ToM studies using an XAI framework, identifying gaps in assessing explanation fidelity. It advocates for an integrated approach combining ToM's user focus with XAI's technical rigor.
REVEAL: Reasoning-enhanced Forensic Evidence Analysis for Explainable AI-generated Image Detection
This paper introduces REVEAL, a novel framework for explainable AI-generated image detection that establishes a verifiable chain of forensic evidence. It leverages a new dataset, REVEAL-Bench, and an expert-grounded reinforcement learning approach to enhance detection accuracy, explanation fidelity, and cross-model generalization, addressing the limitations of prior methods based on surface-level pattern matching.