Abstract
Channel prediction is an effective approach for reducing the feedback or estimation overhead in massive multi-input multi-output (m-MIMO) systems. However, existing channel prediction methods lack precision due to model mismatch errors or network generalization issues. Large language models (LLMs) have demonstrated powerful modeling and generalization abilities and have been successfully applied to cross-modal tasks, including time series analysis. Leveraging the expressive power of LLMs, we propose a pre-trained LLM-empowered channel prediction (LLM4CP) method to predict the future downlink channel state information (CSI) sequence based on the historical uplink CSI sequence. We fine-tune the network while freezing most of the parameters of the pre-trained LLM for better cross-modality knowledge transfer. To bridge the gap between the channel data and the feature space of the LLM, the preprocessor, embedding, and output modules are specifically tailored by taking unique channel characteristics into account. Simulations validate that the proposed method achieves state-of-the-art (SOTA) prediction performance on full-sample, few-shot, and generalization tests with low training and inference costs.
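To make the fine-tuning strategy described above concrete, the following is a minimal PyTorch sketch, not the paper's released implementation: it assumes a GPT-2 backbone from Hugging Face transformers, freezes the pre-trained LLM parameters, and trains only lightweight embedding and output modules that map CSI sequences into and out of the LLM feature space. The module names and dimensions (csi_dim, hist_len, pred_len) and the choice of GPT-2 are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of fine-tuning with a frozen pre-trained LLM backbone.
# Assumptions: GPT-2 as the backbone, flattened real-valued CSI snapshots.
import torch
import torch.nn as nn
from transformers import GPT2Model

class LLM4CPSketch(nn.Module):
    def __init__(self, csi_dim=64, hist_len=16, pred_len=4):
        super().__init__()
        self.llm = GPT2Model.from_pretrained("gpt2")
        # Freeze the pre-trained backbone for cross-modality knowledge transfer;
        # only the lightweight task modules below are updated during fine-tuning.
        for p in self.llm.parameters():
            p.requires_grad = False

        d_model = self.llm.config.n_embd  # 768 for GPT-2
        # Preprocessor/embedding: project each historical uplink CSI snapshot
        # (csi_dim real values) into the LLM token-embedding space.
        self.embed = nn.Sequential(
            nn.Linear(csi_dim, d_model),
            nn.GELU(),
            nn.Linear(d_model, d_model),
        )
        # Output module: map LLM features back to predicted downlink CSI.
        self.head = nn.Linear(d_model, csi_dim)
        self.pred_len = pred_len

    def forward(self, csi_hist):
        # csi_hist: (batch, hist_len, csi_dim) historical uplink CSI sequence.
        tokens = self.embed(csi_hist)                       # (B, L, d_model)
        feats = self.llm(inputs_embeds=tokens).last_hidden_state
        # Predict the future CSI sequence from the last pred_len positions.
        return self.head(feats[:, -self.pred_len:, :])      # (B, pred_len, csi_dim)

# Usage: only the embed/head parameters receive gradients during training.
model = LLM4CPSketch()
pred = model(torch.randn(2, 16, 64))
print(pred.shape)  # torch.Size([2, 4, 64])
```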
Funding
supported in part by the National Natural Science Foundation of China under Grants 62125101 and 62341101
in part by the New Cornerstone Science Foundation through the XPLORER PRIZE
in part by Guangdong Provincial Key Lab of Integrated Communication, Sensing and Computation for Ubiquitous Internet of Things under Grant 2023B1212010007
in part by Guangzhou Municipal Science and Technology Project under Grant 2023A03J0011
in part by Guangdong Provincial Department of Education Major Research Project under Grant 2023ZDZX1037.