Deep learning of committor and explainable artificial intelligence analysis for identifying reaction coordinates
This review introduces a framework that combines deep learning with committor analysis and explainable AI (XAI) to systematically identify reaction coordinates in complex molecular systems. The approach enables quantitative assessment of each input variable's contribution to the learned committor, improving the interpretability of molecular transition pathways.
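The core committor-learning idea can be illustrated with a toy sketch (this is not the review's actual architecture or system): label starting points in a 1D double-well potential by which basin a short overdamped Langevin trajectory commits to, then fit a logistic model as a crude committor estimate. The potential, dynamics parameters, and the use of plain logistic regression instead of a deep network are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def potential_grad(x):
    # Double-well U(x) = (x^2 - 1)^2, minima at x = ±1; gradient 4x(x^2 - 1)
    return 4 * x * (x**2 - 1)

def reaches_right_basin(x0, beta=3.0, dt=1e-3, steps=3000):
    # Overdamped Langevin dynamics; label 1 if the trajectory commits to x > 1 first
    x = x0
    for _ in range(steps):
        x += -potential_grad(x) * dt + np.sqrt(2 * dt / beta) * rng.normal()
        if x > 1.0:
            return 1
        if x < -1.0:
            return 0
    return int(x > 0)  # fallback if the trajectory never fully commits

# Sample starting points near the barrier and label them by committed basin
x0s = rng.uniform(-0.5, 0.5, size=200)
labels = np.array([reaches_right_basin(x) for x in x0s])

# Fit q(x) ≈ sigmoid(w*x + b) by gradient descent as a committor estimate
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * x0s + b)))
    w -= 0.5 * np.mean((p - labels) * x0s)
    b -= 0.5 * np.mean(p - labels)

# Near the barrier top (x = 0) the committor should be close to 0.5
q_at_barrier = 1 / (1 + np.exp(-b))
print(w, q_at_barrier)
```

In this 1D toy the input is already the reaction coordinate; the review's point is that for high-dimensional inputs an XAI attribution over the trained network's features plays the role that the fitted weight `w` plays here.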
Lightweight GenAI for Network Traffic Synthesis: Fidelity, Augmentation, and Classification
This paper explores lightweight Generative AI (GenAI) models for network traffic synthesis to address data scarcity and privacy in Network Traffic Classification (NTC). It evaluates transformer-based, state-space, and diffusion models, demonstrating their effectiveness in generating high-fidelity synthetic traffic for training and augmenting NTC systems.
POP-CORN: Validation of a new coronal hole detection tool based on neural networks
This paper introduces POP-CORN, a neural network-based tool for automatically detecting coronal hole boundaries in solar extreme ultraviolet images. By incorporating categorical features of large-scale solar structures, the model determines intensity thresholds that yield consistent coronal hole identification across different solar cycles, an advance for space weather forecasting.
A Unified Memory Perspective for Probabilistic Trustworthy AI
Trustworthy AI systems increasingly rely on probabilistic computation, shifting performance bottlenecks from arithmetic to memory, which must deliver both data and randomness. This paper introduces a unified data-access perspective, treating deterministic access as a limiting case of stochastic sampling, to analyze and address these new memory challenges.
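The unifying idea, that a deterministic read is the limiting case of a stochastic sample, can be sketched in a few lines (a hypothetical illustration, not the paper's memory model): let a memory word store a probability, let a read return a Bernoulli sample of it, and note that stored values of 0 or 1 recover ordinary deterministic access.

```python
import random

def stochastic_read(p, rng=random.Random(0)):
    # A "read" of a stored probability p returns 1 with probability p.
    # Deterministic storage is the special case p in {0, 1}, where every
    # sample equals the stored bit exactly.
    return 1 if rng.random() < p else 0

# Deterministic limit: sampling a stored 1 always yields 1
assert all(stochastic_read(1.0) == 1 for _ in range(100))

# Probabilistic case: the sample mean approximates the stored probability
rng = random.Random(42)
samples = [stochastic_read(0.3, rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)
print(mean)
```

The paper's memory-bottleneck argument follows from this framing: probabilistic workloads must stream both the stored values and fresh randomness, whereas the deterministic limit needs only the values.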
The enrichment paradox: critical capability thresholds and irreversible dependency in human-AI symbiosis
A novel dynamical systems model predicts a critical AI capability threshold beyond which human skills abruptly collapse due to delegation, a phenomenon termed the "enrichment paradox." The model suggests that periodic AI failures and mandatory practice can, paradoxically, preserve human capabilities, and it highlights that profound skill loss is irreversible.
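The threshold effect can be illustrated with a deliberately simplified model (not the paper's actual equations): let skill s obey ds/dt = practice·(1 − s) − d(a)·s, where the delegation rate d(a) rises sigmoidally once AI capability a crosses a threshold. The functional forms and parameter values below are illustrative assumptions, and this smooth toy does not capture the bistability and hysteresis behind the paper's irreversibility claim.

```python
import numpy as np

def skill_equilibrium(a, practice=0.2, k=8.0, a_crit=0.6):
    # Hypothetical dynamics: ds/dt = practice*(1 - s) - d(a)*s, where the
    # delegation rate d(a) switches on sharply near the capability threshold
    d = 1 / (1 + np.exp(-k * (a - a_crit)))  # sigmoidal delegation rate
    return practice / (practice + d)          # fixed point of the ODE

low = skill_equilibrium(0.2)   # weak AI: little delegation, skill stays high
high = skill_equilibrium(1.0)  # strong AI: heavy delegation, skill collapses
print(low, high)
```

Raising `practice` (the paper's "mandatory practice") lifts the post-threshold equilibrium, which is the mechanism the abstract points to for preserving capability.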