AI Summary • Published on Jan 1, 2026
This paper addresses concerns surrounding the uncritical application of Artificial Intelligence (AI) and deep machine learning (ML) to downscaling global climate model (GCM) simulations for projecting future climate conditions. A primary issue highlighted is that recent studies often neglect, or give an incomplete account of, established statistical and mathematical downscaling methods, and may rely on inappropriate evaluation strategies, creating a misleading impression of AI/ML's superiority. Fundamentally, AI/ML models trained on historical data perform poorly "out of distribution" when applied to a future climate whose conditions are inherently non-stationary and differ from the training context.
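To make the out-of-distribution problem concrete, here is a minimal synthetic sketch (not from the paper; all data and variable names are illustrative): a tree-based regressor trained on a historical predictor range cannot extrapolate into a warmer, shifted range, while a simple linear model encoding an assumed stable relationship can.

```python
# Synthetic illustration of the out-of-distribution problem: an ML
# regressor trained on a historical predictor range is asked to predict
# under a warmer, shifted "future" range it never saw in training.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# "Historical" large-scale predictor (e.g., regional mean temperature, degC)
x_hist = rng.uniform(10, 20, size=(500, 1))
# True local response: a simple linear link plus weather noise
y_hist = 1.5 * x_hist[:, 0] - 5 + rng.normal(0, 1, 500)

# "Future" predictors shifted partly outside the training distribution
x_fut = rng.uniform(14, 24, size=(500, 1))
y_fut_true = 1.5 * x_fut[:, 0] - 5

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(x_hist, y_hist)
lin = LinearRegression().fit(x_hist, y_hist)

# The forest saturates at the edge of its training range; the linear model
# extrapolates the assumed relationship and tracks the change.
print("RF bias on future data:     %+.2f" % (rf.predict(x_fut) - y_fut_true).mean())
print("Linear bias on future data: %+.2f" % (lin.predict(x_fut) - y_fut_true).mean())
```

The forest's cold bias under the shifted climate is exactly the kind of silent failure the paper warns about: the model looks skillful on held-out historical data yet misrepresents the projected change.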
The author employs a critical-review methodology, leveraging decades of experience and well-established principles from empirical-statistical downscaling (ESD) to scrutinize current practices in applying AI/ML to climate downscaling. The approach directly compares the strengths and weaknesses of AI/ML with those of traditional statistics- and mathematics-based downscaling techniques. The review examines common AI/ML evaluation strategies, pointing out a tendency to benchmark against suboptimal ESD methods or to focus on reproducing present weather rather than assessing skill in projecting future climate change. The paper also emphasizes the theoretical underpinnings of downscaling, distinguishing it from simple interpolation or bias adjustment.
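One evaluation design that is sensitive to change rather than to present-weather reproduction is a differential split-sample test, a strategy common in the downscaling literature (the paper does not prescribe this exact recipe): calibrate on the coolest years and validate on the warmest, so the held-out data mimic a changed climate. A minimal sketch, with all names and data illustrative:

```python
# Differential split-sample sketch: train on cool years, test on warm
# years, so validation probes skill under changed conditions rather than
# the ability to reproduce the training climate.
import numpy as np

def differential_split(years, predictor_annual_mean, frac=0.5):
    """Split years into a cool (training) and a warm (testing) subset."""
    order = np.argsort(predictor_annual_mean)
    n_train = int(len(years) * frac)
    return years[order[:n_train]], years[order[n_train:]]

years = np.arange(1961, 2021)
annual_mean_temp = (np.random.default_rng(1).normal(8.0, 0.8, len(years))
                    + 0.02 * (years - years[0]))  # weak warming trend

train_years, test_years = differential_split(years, annual_mean_temp)
print("training on the coolest years:", sorted(train_years)[:3], "...")
print("validating on the warmest years, mimicking a changed climate")
```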
The review concludes that, despite AI/ML's impressive general successes, its direct application to downscaling *future* climate change projections requires significant caution. AI/ML methods face substantial challenges from the non-stationary nature of climate change: historical training data may not represent future conditions, leading to degraded "out-of-distribution" performance and potential "hallucinations". Many evaluations of AI/ML downscaling are found to use inadequate benchmarks or to prioritize simulating present weather, thereby overstating capability under climate change scenarios. In contrast, well-designed ESD methods, grounded in mathematical theory and statistical principles, are presented as more robust, computationally efficient, and better suited to data scarcity and non-stationarity, especially when the goal is to predict changes in statistical parameters and extreme events rather than individual weather outcomes.
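The contrast hinges on what is predicted. A minimal sketch of the ESD style the paper favors (illustrative only; the numbers, variable names, and the specific regression are assumptions, not the author's method): instead of generating day-to-day weather, calibrate a statistical link between a large-scale predictor and a local statistical parameter, then propagate the GCM-projected change in the predictor.

```python
# ESD-style sketch targeting a statistical parameter rather than weather:
# regress the local seasonal 95th percentile of daily temperature on the
# large-scale seasonal mean, then apply a GCM-projected predictor change.
import numpy as np

rng = np.random.default_rng(2)
n_seasons = 40

# Observed era: large-scale seasonal means and local seasonal 95th percentiles
large_scale_mean = rng.normal(15.0, 1.0, n_seasons)
local_q95 = 0.9 * large_scale_mean + 8.0 + rng.normal(0, 0.4, n_seasons)

# Calibrate the statistical link (slope of q95 on the large-scale mean)
slope, intercept = np.polyfit(large_scale_mean, local_q95, deg=1)

# Propagate a GCM-projected change in the predictor (e.g., +3.0 degC)
gcm_delta = 3.0
projected_q95_change = slope * gcm_delta
print(f"projected change in local 95th percentile: {projected_q95_change:+.1f} degC")
```

Because only a distributional parameter is downscaled, the approach needs far less data, is cheap to compute, and its extrapolation assumption (a stable predictor-predictand link) is explicit and testable.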
The paper carries significant implications for climate scientists and policymakers, urging a more critical and informed integration of AI/ML into climate downscaling. It stresses the need for rigorous evaluation protocols that genuinely test a method's ability to project *future climate change*, rather than merely reproduce historical data. The author suggests that AI/ML may be more effectively used for tasks such as emulating regional climate models or extracting patterns from large datasets than for directly downscaling future GCM projections, where non-stationarity demands careful treatment. Ultimately, the paper advocates greater transparency, a deeper understanding of underlying physical processes, and a balanced consideration of both established statistical techniques and novel AI/ML methods, to ensure accurate climate change assessments and prevent maladaptation.