Journal Articles
2 articles found
Robust signal recognition algorithm based on machine learning in heterogeneous networks
1
Authors: Xiaokai Liu, Rong Li, Chenglin Zhao, Pengbiao Wang. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2016, Issue 2, pp. 333-342 (10 pages)
There are various heterogeneous networks for terminals to deliver a better quality of service. Signal system recognition and classification contribute a lot to the process. However, in low signal-to-noise ratio (SNR) circumstances or under time-varying multipath channels, the majority of the existing algorithms for signal recognition already face limitations. In this paper, we present a robust signal recognition method based upon the original and the latest updated version of the extreme learning machine (ELM) to help users switch between networks. The ELM utilizes signal characteristics to distinguish systems. The superiority of this algorithm lies in the random choice of hidden nodes and in the fact that it determines the output weights analytically, which results in lower complexity. Theoretically, the algorithm tends to offer good generalization performance at an extremely fast learning speed. Moreover, we implement the GSM/WCDMA/LTE models in the Matlab environment using the Simulink tools. The simulations reveal that the signals can be recognized successfully with 95% accuracy in a low-SNR (0 dB) environment over a time-varying multipath Rayleigh fading channel.
Keywords: heterogeneous networks; automatic signal classification; extreme learning machine (ELM); feature extraction; Rayleigh fading channel
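The abstract's central mechanism (hidden-node parameters chosen at random, output weights solved analytically by a pseudoinverse) can be sketched as follows. This is a minimal numpy illustration of a generic ELM classifier on toy data, not the paper's implementation; all function names, the hidden-layer size, and the two-blob dataset are assumptions for the sake of the example.

```python
import numpy as np

def elm_train(X, T, n_hidden=30, rng=None):
    """Train a basic ELM: random hidden layer, analytic output weights."""
    rng = np.random.default_rng(rng)
    # Random input weights and biases are drawn once and never updated --
    # this is the key ELM idea that avoids iterative backpropagation.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer activations
    beta = np.linalg.pinv(H) @ T      # output weights, solved in closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: two well-separated Gaussian blobs with one-hot targets.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
T = np.zeros((200, 2)); T[:100, 0] = 1; T[100:, 1] = 1
W, b, beta = elm_train(X, T, n_hidden=30, rng=1)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
acc = (pred == T.argmax(axis=1)).mean()
```

Because training reduces to one matrix factorization, the "extremely fast speed of learning" claimed in the abstract follows directly: cost is dominated by a single pseudoinverse rather than many gradient epochs.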
Single channel speech enhancement via time-frequency dictionary learning (cited by: 6)
2
Authors: HUANG Jianjun, ZHANG Xiongwei, ZHANG Yafei, ZOU Xia. Chinese Journal of Acoustics, 2013, Issue 1, pp. 90-102 (13 pages)
A time-frequency dictionary learning approach is proposed to enhance speech contaminated by additive nonstationary noise. In this framework, a time-frequency dictionary learned from noise data is incorporated into the convolutive nonnegative matrix factorization framework. The update rules for the time-varying gains and the speech dictionary are derived by precomputing the noise dictionary. The magnitude spectra of speech are estimated using a convolution operation between the learned speech dictionary and the time-varying gains. Finally, noise is removed via binary time-frequency masking. The experimental results indicate that the proposed scheme gives better enhancement results in terms of speech quality measures. Moreover, the proposed algorithm outperforms the multiband spectral subtraction and the non-negative sparse coding based noise reduction algorithms in nonstationary noise conditions.
Keywords: STFT; single channel speech enhancement; time-frequency dictionary learning
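The pipeline described in the abstract (learn a noise dictionary offline, keep it fixed while learning the speech dictionary and time-varying gains, then remove noise by a binary time-frequency mask) can be sketched with plain multiplicative-update NMF standing in for the convolutive variant. Everything below, including the matrix sizes, the toy "spectrograms", and the update loop, is an illustrative assumption, not the paper's code.

```python
import numpy as np

def nmf(V, r, n_iter=200, rng=None):
    """Multiplicative-update NMF under Euclidean cost: V ~ W @ H."""
    rng = np.random.default_rng(rng)
    eps = 1e-9
    W = rng.random((V.shape[0], r)) + eps
    H = rng.random((r, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ (W @ H) + eps)
        W *= (V @ H.T) / ((W @ H) @ H.T + eps)
    return W, H

def enhance(noisy, W_noise, r_speech=4, n_iter=200, rng=None):
    """Fix the noise dictionary, learn speech dictionary + all gains, binary-mask."""
    rng = np.random.default_rng(rng)
    eps = 1e-9
    n_freq, n_frames = noisy.shape
    Ws = rng.random((n_freq, r_speech)) + eps
    H = rng.random((r_speech + W_noise.shape[1], n_frames)) + eps
    for _ in range(n_iter):
        W = np.hstack([Ws, W_noise])      # noise columns stay fixed
        H *= (W.T @ noisy) / (W.T @ (W @ H) + eps)
        Hs = H[:r_speech]
        Ws *= (noisy @ Hs.T) / ((np.hstack([Ws, W_noise]) @ H) @ Hs.T + eps)
    speech_est = Ws @ H[:r_speech]
    noise_est = W_noise @ H[r_speech:]
    mask = (speech_est > noise_est).astype(float)   # binary time-frequency mask
    return noisy * mask

rng = np.random.default_rng(0)
# Toy magnitude "spectrograms" (freq bins x frames), standing in for STFT magnitudes.
noise_train = rng.random((64, 120)) * np.linspace(1.0, 2.0, 64)[:, None]
W_noise, _ = nmf(noise_train, r=4, rng=1)           # noise dictionary, learned offline

clean = np.zeros((64, 80)); clean[10:14, :] = 5.0   # narrowband tonal "speech"
noisy = clean + rng.random((64, 80)) * np.linspace(1.0, 2.0, 64)[:, None]
enhanced = enhance(noisy, W_noise, rng=2)
```

Because the mask is binary, each time-frequency bin is either kept or zeroed, which is why the paper's scheme suits nonstationary noise: the fixed noise dictionary tracks noise spectra frame by frame through its gains rather than assuming a stationary noise floor.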