AI Summary • Published on Mar 14, 2026
The increasing complexity of evolving mobile networks, particularly future 6G systems incorporating advanced technologies such as massive MIMO, demands management approaches beyond traditional algorithms. While AI/ML offers transformative potential, its practical integration into real-world mobile networks is hampered by high computational demands, limited generalization across diverse scenarios, and underdeveloped data collection methods. Although the 3GPP (3rd Generation Partnership Project) has made initial standardization efforts to integrate AI/ML, a comprehensive review of the current status of this standardization process and its associated challenges, especially concerning Release 18 and the upcoming Release 19, has so far been lacking.
The authors conducted a comprehensive review of 3GPP Release 18 standardization efforts for AI/ML in the New Radio (NR) air interface. The review details the general AI/ML life cycle management (LCM) framework, which covers data collection, model training, model management, inference, and model storage. It then outlines the three key air-interface use cases identified by 3GPP: channel state information (CSI) feedback, beam management, and positioning, along with their common and use-case-specific key performance indicators (KPIs). To provide practical insights, a case study on CSI feedback compared multilayer perceptron (MLP), convolutional neural network (CNN), and Transformer-based models. The models were evaluated for performance, generalization, and computational complexity on two DeepMIMO datasets (outdoor O1 and indoor I3) under various training and testing configurations, including the cross-scenario generalization tests defined by 3GPP (Cases 1, 2, and 3).
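To make the CSI feedback use case concrete, the sketch below shows the basic autoencoder structure underlying such studies: the UE compresses a CSI vector into a short feedback codeword, the network side reconstructs it, and quality is scored with NMSE in dB. This is a minimal pure-Python illustration, not the paper's models; the dimensions (`N = 64`, `M = 16`) and the single-linear-layer encoder/decoder are hypothetical stand-ins.

```python
import math
import random

random.seed(0)

def init_layer(n_out, n_in):
    # Small random weights, zero biases (untrained, for illustration only)
    s = 1.0 / math.sqrt(n_in)
    w = [[random.uniform(-s, s) for _ in range(n_in)] for _ in range(n_out)]
    return w, [0.0] * n_out

def linear(x, w, b):
    # y = W x + b as a plain matrix-vector product
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

N = 64  # flattened CSI dimension (hypothetical)
M = 16  # feedback codeword size, i.e. compression ratio N/M = 4

w_enc, b_enc = init_layer(M, N)  # UE-side encoder
w_dec, b_dec = init_layer(N, M)  # network-side decoder

def encode(h):
    """UE compresses the CSI vector h into an M-dimensional codeword."""
    return linear(h, w_enc, b_enc)

def decode(z):
    """Network side reconstructs the CSI from the fed-back codeword."""
    return linear(z, w_dec, b_dec)

def nmse_db(h, h_hat):
    """Normalized mean squared error in dB, a common CSI-feedback KPI."""
    err = sum((a - b) ** 2 for a, b in zip(h, h_hat))
    pwr = sum(a * a for a in h)
    return 10.0 * math.log10(err / pwr)

h = [random.gauss(0.0, 1.0) for _ in range(N)]
h_hat = decode(encode(h))
print(nmse_db(h, h_hat))  # untrained weights, so reconstruction is poor
```

In a trained system the encoder/decoder weights would be learned end-to-end to minimize this NMSE for a given codeword size; shrinking `M` raises the compression ratio and, as the case study reports, degrades NMSE.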
The review highlights 3GPP's foundational work in defining an AI/ML LCM framework and specific use cases with tailored KPIs for mobile networks. The CSI feedback case study showed that the normalized mean squared error (NMSE) of all evaluated models degraded as the compression ratio increased. Transformer-based models achieved the best NMSE, at times below -20 dB, but at the cost of significantly higher computational complexity (approximately 10 times more floating-point operations, or FLOPs). A key finding was the superior performance and generalization of the pre-training-and-fine-tuning paradigm (training on one dataset and fine-tuning on another) compared to training on a single dataset alone, indicating its effectiveness in adapting to new environments. Transformer-based models showed the largest gains from fine-tuning, especially at high compression ratios, thanks to their ability to capture global dependencies via self-attention. The paper also identifies key open challenges: the lack of standardized, high-quality common datasets, insufficient generalization evaluation methodologies, and the absence of 3GPP-recommended baseline AI/ML models.
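The FLOPs gap between architectures comes from straightforward counting arithmetic, sketched below under assumed conventions (one multiply-accumulate counted as 2 FLOPs) and hypothetical layer sizes; the roughly 10x figure in the paper depends on the specific models evaluated, which are not reproduced here.

```python
def linear_flops(n_in, n_out):
    # One multiply-accumulate counted as 2 floating-point operations
    return 2 * n_in * n_out

def mlp_flops(dims):
    # dims = [input, hidden..., output]; sum the chained linear layers
    return sum(linear_flops(a, b) for a, b in zip(dims, dims[1:]))

def self_attention_flops(seq_len, d_model):
    # Q/K/V and output projections, applied per token
    proj = 4 * seq_len * linear_flops(d_model, d_model)
    scores = 2 * seq_len * seq_len * d_model   # Q K^T
    values = 2 * seq_len * seq_len * d_model   # softmax(Q K^T) V
    return proj + scores + values

# Hypothetical sizes: a small MLP encoder vs. one self-attention layer
print(mlp_flops([64, 128, 16]))                    # 20480
print(self_attention_flops(seq_len=64, d_model=128))  # 10485760
```

The quadratic `seq_len * seq_len` terms in the attention count are what drive the Transformer's complexity up relative to an MLP or CNN of similar capacity, which is the trade-off the case study quantifies against its NMSE advantage.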
The paper emphasizes the need for future research to extend AI/ML standardization beyond the physical layer to higher layers, such as the Medium Access Control (MAC) layer, to effectively manage real-time network tasks like resource allocation. Developing multi-task AI/ML models capable of handling interconnected network functions and optimizing across diverse evaluation metrics is also a critical direction. Furthermore, there is a call for robust theoretical methodologies to analyze model performance and architecture, establishing quantitative relationships between model selection, Quality of Service (QoS), and available resources for reliable service delivery. The findings from the CSI feedback case study suggest that leveraging pre-training-fine-tuning strategies and advanced architectures like Transformers can significantly improve generalization capabilities in diverse mobile network environments, providing a pathway for more robust and adaptable AI/ML deployments.
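The pre-training-and-fine-tuning advantage reported in the case study can be illustrated with a deliberately tiny toy: pre-train a one-parameter model on one synthetic "scenario", then fine-tune briefly on a related one, and compare against training from scratch with the same small budget. This is a conceptual stand-in, not the paper's experiment; the scenarios, slopes, and step counts are all invented for illustration.

```python
import random

random.seed(1)

def make_scenario(slope, n=200):
    """Synthetic stand-in for a propagation scenario: y = slope * x + noise."""
    xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
    return [(x, slope * x + random.gauss(0.0, 0.05)) for x in xs]

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, steps, lr=0.1):
    # Full-batch gradient descent on the scalar model y_hat = w * x
    for _ in range(steps):
        grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

scenario_a = make_scenario(2.0)   # stand-in for the pre-training environment
scenario_b = make_scenario(1.5)   # a related deployment environment

w_pre = train(0.0, scenario_a, steps=200)      # long pre-training on A
w_ft = train(w_pre, scenario_b, steps=20)      # short fine-tune on B
w_scratch = train(0.0, scenario_b, steps=20)   # same small budget, no pre-training

print(mse(w_ft, scenario_b) < mse(w_scratch, scenario_b))  # True
```

Because the pre-trained parameter starts closer to the new scenario's optimum, the short fine-tuning budget suffices; training from scratch with the same budget lags behind. That is the mechanism, in miniature, behind the cross-scenario generalization gains the summary attributes to the pre-training-and-fine-tuning paradigm.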