AI Summary • Published on Feb 17, 2026
The increasing adoption of Deep Learning (DL) in industrial settings, such as fault detection and diagnosis in chemical processes, faces a significant challenge: a lack of transparency. End-users often question the trustworthiness of decisions made by complex DL models because the reasoning behind these insights is opaque. While eXplainable Artificial Intelligence (XAI) methods have gained traction, especially in image classification and natural language processing, there is limited research on applying XAI to DL models that handle multivariate time series data, particularly from chemical processes. This gap hinders the broader adoption of DL solutions in real-life industrial environments and leaves open the question of why a classifier makes a specific decision.
This study proposes an approach to building trust in machine learning decisions for complex chemical processes by explaining the choices of a highly accurate Long Short-Term Memory (LSTM) classifier used for fault detection and diagnosis. The classifier was trained to identify faults in the benchmark Tennessee Eastman Process (TEP), a non-linear chemical process widely used for evaluating fault detection frameworks. The research evaluates two state-of-the-art, model-agnostic, post-hoc eXplainable Artificial Intelligence (XAI) methods: Integrated Gradients (IG) and SHapley Additive exPlanations (SHAP). Both IG, which attributes prediction scores by accumulating gradients along a path from a baseline input to the actual input, and SHAP, which uses Shapley values from game theory to estimate feature contributions, are applied after the LSTM model's training phase. The explainability results are then validated against existing process knowledge to assess their plausibility and reasonableness when applied to time series data from a chemical process.
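To make the setup concrete, the sketch below shows one way such a post-hoc pipeline can be wired together in Python: a toy PyTorch LSTM classifier whose predictions are attributed with Captum's IntegratedGradients and a shap GradientExplainer. The architecture, the 100-step window length, the 52 process variables, and the 21 fault classes are illustrative assumptions, not the study's exact configuration.

```python
# Minimal sketch (not the paper's code): post-hoc attribution of an LSTM
# fault classifier with Integrated Gradients (Captum) and SHAP.
# Assumed shapes: windows of (batch, time_steps, n_features), e.g. 52 TEP
# measured/manipulated variables, and 21 fault classes.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients
import shap


class LSTMFaultClassifier(nn.Module):
    """Illustrative LSTM classifier; the study's exact architecture may differ."""

    def __init__(self, n_features=52, hidden_size=64, n_classes=21):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # class logits from the last time step


model = LSTMFaultClassifier().eval()      # assume weights are already trained

x = torch.randn(8, 100, 52)               # a batch of test windows
target = model(x).argmax(dim=1)           # explain the predicted fault class

# Integrated Gradients: accumulate gradients along a path from a baseline
# (here an all-zeros window) to the actual input.
ig = IntegratedGradients(model)
ig_attr = ig.attribute(x, baselines=torch.zeros_like(x), target=target)

# SHAP: estimate Shapley-value feature contributions against a background set
# (e.g. windows recorded under normal operation).
background = torch.randn(50, 100, 52)
explainer = shap.GradientExplainer(model, background)
shap_attr = explainer.shap_values(x)      # per-class attributions; the return
                                          # format varies with the shap version

print(ig_attr.shape)                      # (8, 100, 52) attributions
```

In this layout every attribution value refers to one process variable at one time step of one window, which is what allows the explanations to be checked against process knowledge afterwards.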
The application of Integrated Gradients (IG) and SHAP to explain the LSTM classifier's decisions on the Tennessee Eastman Process yielded reasonable results. In most cases, the two XAI methods largely agreed on the set of features most important to the classifier's decisions, helping to identify the subsystem in which a fault occurred. For faults directly affecting the reactor, three specific features were consistently highlighted as crucial. For other faults, the methods pointed to manipulated variables corresponding to the root cause, such as the valve for reactant A during a feed loss. However, for certain faults (e.g., IDV 8, IDV 12, IDV 18, IDV 20), SHAP appeared to be more informative than IG. Specifically, for IDV 8 (random variation in the composition of stream 4), the variables indicated by SHAP showed greater error variation, suggesting they might be closer to the root cause, whereas IG focused on variables subsequently affected by the control system. Overall, both IG and SHAP consistently produced valuable insights, shedding light on the otherwise opaque decision-making of the deep learning model in this chemical process application.
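The summary does not detail how per-time-step attributions are condensed into the feature sets compared above; one plausible reduction, continuing from the variables of the previous sketch, is to aggregate absolute attributions over samples and time steps and measure the overlap between each method's top-ranked process variables:

```python
# Minimal sketch (illustrative, not from the paper): collapse per-time-step
# attributions into a feature ranking and check how far IG and SHAP agree
# on the top-k most important process variables for one fault class.
import numpy as np


def top_features(attr, k=5):
    """attr: (batch, time, features) attributions for windows of one fault.
    Aggregate absolute attribution over samples and time steps and return
    the indices of the k highest-ranked features."""
    importance = np.abs(attr).sum(axis=(0, 1))
    return set(np.argsort(importance)[::-1][:k].tolist())


ig_np = ig_attr.detach().numpy()

# shap.GradientExplainer returns attributions per output class; select the
# predicted fault class (the return format varies between shap versions).
cls = int(target[0])
shap_np = shap_attr[cls] if isinstance(shap_attr, list) else shap_attr[..., cls]

ig_top, shap_top = top_features(ig_np), top_features(shap_np)
print("IG   top-5 feature indices:", sorted(ig_top))
print("SHAP top-5 feature indices:", sorted(shap_top))
print("Top-5 overlap (Jaccard):", len(ig_top & shap_top) / len(ig_top | shap_top))
```

A high overlap between the two rankings is one simple, quantitative way to express the agreement between IG and SHAP reported above, while the cases where the rankings diverge (such as IDV 8) are exactly where domain knowledge is needed to judge which explanation is closer to the root cause.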
The findings demonstrate that state-of-the-art eXplainable Artificial Intelligence (XAI) methods like Integrated Gradients (IG) and SHAP can effectively interpret the complex decisions of deep learning models for fault detection in industrial chemical processes. By providing transparent insights into which process variables are most influential for a fault diagnosis, these methods enhance the trustworthiness of AI systems for end-users and domain experts. The model-agnostic nature of the chosen post-hoc XAI techniques means that the proposed approach is not limited to the Tennessee Eastman Process or the specific LSTM architecture used. This broad applicability allows the methodology to be adapted and utilized across a wide variety of similar industrial and chemical process applications, paving the way for more reliable and understandable AI deployments in the Fourth Industrial Revolution.