AI Summary • Published on Feb 25, 2026
Artificial intelligence (AI) models often face inference errors stemming from both aleatoric and epistemic uncertainties. While aleatoric uncertainty is due to inherent data randomness, epistemic uncertainty arises from the model's imperfect learning. Addressing epistemic uncertainty typically demands very large neural networks and extensive training datasets, which significantly increases complexity and computational costs, particularly in challenging environments like large Multi-Input Multi-Output (MIMO) systems with high-order quadrature amplitude modulation (QAM). This paper investigates whether it's possible to enhance inference accuracy and reduce epistemic uncertainty in an already trained AI model, thereby circumventing the need for more complex models or additional training.
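To make the aleatoric/epistemic distinction concrete, here is a toy regression sketch (illustrative only, not from the paper): fitting a straight line to quadratic data leaves epistemic error (model mismatch) on top of the irreducible aleatoric noise floor, while enlarging the model class removes the epistemic part but cannot push the error below that floor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth: a quadratic law plus irreducible (aleatoric) noise.
x = rng.uniform(-1.0, 1.0, 5_000)
sigma = 0.1
y = x**2 + sigma * rng.standard_normal(x.size)

mse = {}
for degree in (1, 2):
    coeffs = np.polyfit(x, y, degree)
    mse[degree] = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree} fit: MSE = {mse[degree]:.4f}")

# The degree-1 fit carries epistemic error (model mismatch) on top of
# the aleatoric floor sigma**2 = 0.01; the degree-2 fit removes the
# epistemic part, and no model can reduce the MSE below the floor.
```

The same logic motivates the paper's question: the usual remedy for the epistemic part, a richer model and more data, is exactly the cost the resampling technique tries to avoid.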
The proposed solution is a "resampling" technique applied during the inference stage of a trained AI model. This method exploits the concept of invariant transformations, where an input is modified in ways that preserve the underlying statistical properties of the system (e.g., unitary rotations, complex conjugation, or permutations in MIMO systems). The trained AI model then processes multiple such transformed versions of the original input, generating several inference outputs. The key insight is that the estimation errors from these different transformed inferences exhibit partial statistical independence, largely due to epistemic uncertainty. By aggregating these multiple, partially independent inference outputs, the overall estimation error can be significantly reduced. The paper provides a mathematical framework to determine optimal combination weights for these outputs, aiming to minimize the variance of the combined estimate.
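The pipeline above can be sketched in a few lines. This is a hedged toy model, not the paper's implementation: the hypothetical `model` stands in for a trained AI detector whose epistemic error appears as an input-dependent bias, sign inversion plays the role of the invariant transformation, and the two outputs are combined with weights proportional to C⁻¹·1, which minimize the combined error power for error second-moment matrix C.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(y):
    """Hypothetical stand-in for a trained detector: estimates the
    transmitted symbol x from the received y. Its epistemic error
    appears as a deterministic, input-dependent bias that training
    never removed."""
    return y + 0.1 * np.cos(3.0 * y)

# Toy "channel": BPSK symbols plus aleatoric receiver noise.
x = rng.choice([-1.0, 1.0], size=10_000)
y = x + 0.05 * rng.standard_normal(x.shape)

# Inference on the original input and on a sign-inverted copy.
# Sign inversion preserves the symbol statistics (an invariant
# transformation); the inversion is undone on the output side.
x_hat1 = model(y)
x_hat2 = -model(-y)

# The two error sequences share the aleatoric part (same y) but
# differ in the epistemic part, so they are only partially correlated.
e = np.stack([x_hat1 - x, x_hat2 - x])
C = e @ e.T / e.shape[1]          # 2x2 error second-moment matrix

# Combination weights minimizing the combined error power: w ∝ C⁻¹·1.
w = np.linalg.solve(C, np.ones(2))
w /= w.sum()

x_comb = w[0] * x_hat1 + w[1] * x_hat2
print("single-inference MSE:", np.mean(e[0] ** 2))
print("combined MSE:        ", np.mean((x_comb - x) ** 2))
```

In this toy setup the epistemic bias flips sign under the transformation, so the combination removes most of it; in a real detector the cancellation is only partial, which is why the paper derives optimal weights rather than assuming a plain average.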
The efficacy of the resampling technique was demonstrated through its application to an AI-based MIMO detector. For a 4x4 MIMO system employing 64QAM modulation, combining inferences from an input and its sign-inverted transformation successfully reduced the Symbol Error Rate (SER) from 5.7% to 5.1%. In a more demanding scenario involving an 8x8 MIMO system with 256QAM and a high-rate LDPC code, resampling with four invariant transformations led to notable performance improvements. The results showed approximately 0.5 dB of gain at a 1% uncoded Bit Error Rate (BER) and 0.7 dB at a 10% Block Error Rate (BLER), with these benefits becoming more significant as the Signal-to-Noise Ratio (SNR) increased. This suggests that the method is particularly effective when epistemic uncertainty is a dominant factor in the overall inference error.
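A standard back-of-the-envelope calculation (a general result for correlated estimates, not a formula from the paper) shows why more transformations and higher SNR both help: averaging N equally weighted estimates with per-estimate error variance σ² and pairwise error correlation ρ yields a combined variance of σ²(1 + (N−1)ρ)/N. The gain therefore grows with N and as ρ drops, and at high SNR the shared aleatoric part of the error shrinks, so ρ plausibly falls. The ρ values below are assumed for illustration.

```python
def combined_variance(sigma2, n, rho):
    """Variance of the equally weighted average of n estimates with
    per-estimate error variance sigma2 and pairwise correlation rho."""
    return sigma2 * (1.0 + (n - 1) * rho) / n

# Illustrative numbers (assumed, not from the paper): more
# transformations and lower error correlation both shrink the
# combined variance relative to a single estimate (sigma2 = 1).
for n in (2, 4):
    for rho in (0.8, 0.5):
        print(f"n={n}, rho={rho}: {combined_variance(1.0, n, rho):.3f}")
```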
The proposed resampling technique carries a significant implication for AI model deployment: it offers a robust strategy to improve inference accuracy and reduce epistemic uncertainty in already trained models without additional training or modifications to the model's architecture. By leveraging the inherent invariant properties of a system, this method allows a more efficient balance between model size and performance. It also suggests a new perspective on mitigating epistemic uncertainty, shifting some of the burden from the training phase to the inference phase, especially in scenarios where achieving very high precision with large networks and extensive data is impractical. The approach can serve as a practical means to enhance the reliability and performance of AI systems in various applications.