Funding: Supported by the National Science Foundation Program of Jiangsu Province (No. BK20191378), the National Science Research Project of Jiangsu Higher Education Institutions (No. 18KJB510034), the China Postdoctoral Science Fund Special Funding Project (No. 2018T110530), the Key Technologies R&D Program of Jiangsu Province (No. BE2022067, BE2022067-2), and the Major Research Program Key Project (No. 92067201).
Abstract: For high-speed mobile MIMO-OFDM systems, a low-complexity deep learning (DL) based time-varying channel estimation scheme is proposed. To reduce the number of estimated parameters, the basis expansion model (BEM) is employed to model the time-varying channel, which converts channel estimation into estimation of the basis coefficients. Specifically, the initial basis coefficients are first used to train the neural network offline, after which high-precision channel estimates can be obtained from a small number of inputs. Moreover, the linear minimum mean square error (LMMSE) channel estimate is used as the target in the loss function during training, which makes the proposed method more practical. Simulation results show that the proposed method achieves better performance and lower computational complexity than existing schemes, and that it is robust to fast time-varying channels in high-speed mobile scenarios.
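The parameter reduction this abstract relies on can be sketched numerically. The paper's exact basis, dimensions, and network are not given here, so the snippet assumes a complex-exponential BEM with hypothetical symbol length N and basis size Q; it only illustrates how an N-sample channel trajectory collapses to Q coefficients:

```python
import numpy as np

def ce_bem_basis(N, Q):
    """Complex-exponential BEM: N time samples spanned by Q basis functions."""
    n = np.arange(N)[:, None]
    q = np.arange(Q)[None, :] - (Q - 1) / 2
    return np.exp(2j * np.pi * q * n / N)

rng = np.random.default_rng(0)
N, Q = 128, 3                      # hypothetical symbol length and basis size
B = ce_bem_basis(N, Q)

# A time-varying tap lying in the basis span is fully described by Q numbers.
c_true = rng.standard_normal(Q) + 1j * rng.standard_normal(Q)
h = B @ c_true                     # N-sample channel trajectory

# Estimating the channel now reduces to estimating the Q basis coefficients.
c_hat, *_ = np.linalg.lstsq(B, h, rcond=None)
print(np.allclose(c_hat, c_true))
```

In the proposed scheme the network, rather than a least-squares solver, maps its inputs to these coefficients; the dimensionality argument is the same.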
Abstract: Rapid channel variation induces intercarrier interference (ICI) in orthogonal frequency-division multiplexing (OFDM) systems. ICI significantly increases the difficulty of OFDM channel estimation because too many channel coefficients need to be estimated. In this article, a novel channel estimator is proposed to resolve this problem. The estimator consists of two parts: a channel parameter estimation unit (CPEU), which estimates the number of channel taps and the multipath time delays, and a channel coefficient estimation unit (CCEU), which estimates the channel coefficients using the parameters provided by the CPEU. In the CCEU, an over-sampling basis expansion model is adopted to cope with the large number of channel coefficients to be estimated. Finally, simulation results are given to evaluate the performance of the proposed scheme.
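The benefit of the over-sampled basis can be seen in a toy fit. The snippet below (illustrative only; N, Q, and the fractional Doppler value are assumptions, not taken from the paper) compares the modeling residual of a standard complex-exponential BEM against an over-sampled one with the same number of basis functions, for a tap whose Doppler frequency falls off the standard grid:

```python
import numpy as np

def ce_bem(N, Q, K=1):
    """Complex-exponential BEM basis; K > 1 gives the over-sampled variant."""
    n = np.arange(N)[:, None]
    q = np.arange(Q)[None, :] - (Q - 1) / 2
    return np.exp(2j * np.pi * q * n / (K * N))

N, Q = 64, 3
n = np.arange(N)
h = np.exp(2j * np.pi * 0.3 * n / N)      # tap with fractional Doppler 0.3/N

def fit_residual(B):
    """Relative residual of the least-squares BEM fit."""
    c, *_ = np.linalg.lstsq(B, h, rcond=None)
    return np.linalg.norm(h - B @ c) / np.linalg.norm(h)

res_std = fit_residual(ce_bem(N, Q, K=1))   # grid spacing 1/N
res_ovs = fit_residual(ce_bem(N, Q, K=2))   # finer grid spacing 1/(2N)
print(res_ovs < res_std)
```

The finer frequency grid of the over-sampled basis lowers the modeling error for off-grid Doppler, which is the motivation for using it inside the CCEU.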
Funding: Supported by the Natural Science Foundation of Chongqing (No. cstc2019jcyj-msxmX0017).
Abstract: Since orthogonal time-frequency space (OTFS) modulation can effectively handle the problems caused by the Doppler effect in high-mobility environments, it has gradually become a promising candidate modulation scheme for the next generation of mobile communication. However, the inter-Doppler interference (IDI) caused by fractional Doppler poses great challenges to channel estimation. To avoid this problem, this paper proposes a joint time and delay-Doppler (DD) domain channel estimation algorithm based on sparse Bayesian learning (SBL). First, we derive the original channel response (OCR) from the time-domain channel impulse response (CIR), which can reflect the channel variation within one OTFS symbol. Compared with the traditional channel model, the OCR avoids the IDI problem. Next, the dimension of the OCR is reduced by using the basis expansion model (BEM) and the relationship between the time and DD domain channel models, turning the underdetermined problem into an overdetermined one. Finally, exploiting the sparsity of the channel in the delay domain, the SBL algorithm estimates the BEM basis coefficients without any prior channel information. Simulation results show the effectiveness and superiority of the proposed channel estimation algorithm.
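A minimal EM-form SBL recovery, of the kind used here for the BEM coefficients, can be sketched as follows. This is a generic textbook SBL loop on a synthetic sparse problem, not the paper's algorithm: the dictionary, dimensions, and noise level are all assumptions for illustration.

```python
import numpy as np

def sbl_estimate(Phi, y, noise_var, iters=100):
    """EM-based sparse Bayesian learning: estimate sparse x in y = Phi x + n."""
    M, N = Phi.shape
    gamma = np.ones(N)                        # per-coefficient prior variances
    for _ in range(iters):
        Sy = noise_var * np.eye(M) + (Phi * gamma) @ Phi.conj().T
        Sy_inv = np.linalg.inv(Sy)
        mu = gamma * (Phi.conj().T @ (Sy_inv @ y))          # posterior mean
        quad = np.real(np.einsum('mn,mk,kn->n', Phi.conj(), Sy_inv, Phi))
        post_var = gamma - gamma**2 * quad                  # posterior variances
        gamma = np.abs(mu)**2 + np.maximum(post_var, 0.0)   # EM hyperparameter update
    return mu

rng = np.random.default_rng(1)
M, N, K = 40, 80, 3                 # measurements, dictionary size, sparsity (hypothetical)
Phi = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * M)
x = np.zeros(N, dtype=complex)
supp = rng.choice(N, K, replace=False)
x[supp] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
noise_var = 2e-4
y = Phi @ x + np.sqrt(noise_var / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

x_hat = sbl_estimate(Phi, y, noise_var)
top = np.argsort(np.abs(x_hat))[-K:]   # indices of the K largest estimates
print(set(map(int, top)) == set(map(int, supp)))
```

The hyperparameters γ drive most coefficients toward zero, so no prior knowledge of the support is needed, which matches the abstract's claim of operating without prior channel information.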
Funding: Supported in part by the National Key R&D Program of China under Grant 2020YFA0711301, and in part by the National Natural Science Foundation of China (Nos. 61941104, 62101292, 61922049).
Abstract: The highly dynamic channel (HDC) in an extremely dynamic environment mainly exhibits fast time-varying, nonstationary characteristics. In this article, we focus on the most difficult HDC case, where the channel coherence time is less than the symbol period. To this end, we propose a symbol detector based on a long short-term memory (LSTM) neural network. Taking the sampled sequence of each received symbol as the LSTM unit's input makes full use of the received information and thus yields better performance. In addition, using the basis expansion model (BEM) as the preprocessing unit significantly reduces the number of neural network parameters. Finally, the simulations use highly dynamic plasma sheath channel (HDPSC) data measured in shock-tube experiments. The results show that the proposed BEM-LSTM detector performs better and requires neither channel estimation nor channel model information.
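The parameter saving from BEM preprocessing can be quantified with a back-of-the-envelope sketch. The paper's network details are not reproduced here; the snippet assumes hypothetical values for the samples per symbol N, basis size Q, and LSTM hidden size H, and uses the standard LSTM parameter count 4·(H·(I+H)+H) for input size I (a real detector would also split the complex features into real and imaginary parts):

```python
import numpy as np

N, Q, H = 256, 4, 64   # hypothetical: samples per symbol, BEM size, LSTM hidden units
n = np.arange(N)[:, None]
q = np.arange(Q)[None, :] - (Q - 1) / 2
B = np.exp(2j * np.pi * q * n / N)     # complex-exponential BEM basis
P = np.linalg.pinv(B)                  # preprocessing: project samples onto the basis

r = np.exp(2j * np.pi * 0.4 * n[:, 0] / N)   # one received symbol's sample sequence
features = P @ r                              # Q features instead of N raw samples

# Shrinking the LSTM input from N samples to Q BEM coefficients cuts the
# input-related weight matrices by roughly a factor of N/Q.
params_raw = 4 * (H * (N + H) + H)
params_bem = 4 * (H * (Q + H) + H)
print(features.shape, params_raw, params_bem)
```

This is why the abstract credits the BEM preprocessing unit, rather than the LSTM itself, with the reduction in network parameters.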
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 60272046), the Major Project (Grant No. 60496310), the National High Technology Project of China (Grant No. 2002AA123031), the National Natural Science Foundation of Jiangsu Province (Grant No. BK2005061), and the PhD Programmes Grant of Higher Education Institutions of the MOE (Grant No. 20020286014).
Abstract: High data transmission rates and high mobility give rise to time- and frequency-selectivity in wireless communication channels. This paper investigates time- and frequency-doubly selective channel estimation using pilot tones in Multi-Input Multi-Output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. First, a complex exponential basis expansion channel model (BECM) is introduced to represent the doubly selective channel over one OFDM symbol period; then, based on the BECM, an effective MIMO-OFDM doubly selective channel estimation method is presented; finally, the pilot-tone parameters, mainly the number, placement, and structure of the pilot tones, are optimized by minimizing the channel estimation MSE. Simulation results show that the proposed method performs well in doubly selective channel scenarios and confirm the theoretical analysis.
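The role of pilot number and placement can be illustrated with a stripped-down single-antenna case. The snippet assumes, for brevity, a channel that is time-invariant within the symbol (BEM order one) and noiseless pilots; N, L, and P are hypothetical. Equispaced pilots, a classical MSE-minimizing placement consistent with the kind of design optimized here, let least squares recover all L taps from only P ≥ L tones:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L, P = 64, 4, 8                  # subcarriers, channel taps, pilot tones (hypothetical)
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
H = np.fft.fft(h, N)                # frequency response on all N subcarriers

pilots = np.arange(0, N, N // P)    # equispaced pilot placement
F = np.exp(-2j * np.pi * np.outer(pilots, np.arange(L)) / N)   # DFT submatrix
h_hat, *_ = np.linalg.lstsq(F, H[pilots], rcond=None)          # noiseless LS estimate
print(np.allclose(h_hat, h))
```

In the doubly selective case each tap carries several BEM coefficients, so the pilot count and structure must grow accordingly, which is exactly the design question the paper optimizes.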