2 journal articles found
1. AI-Driven FBMC-OQAM Signal Recognition via Transform Channel Convolution Strategy
Authors: Zeliang An, Tianqi Zhang, Debang Liu, Yuqing Xu, Gert Frølund Pedersen, Ming Shen. Computers, Materials & Continua (SCIE, EI), 2023, No. 9, pp. 2817-2834 (18 pages)
With the advent of the Industry 5.0 era, Internet of Things (IoT) devices are proliferating at an unprecedented rate, requiring higher communication rates and lower transmission delays. Owing to its high spectrum efficiency, the promising filter bank multicarrier (FBMC) technique using offset quadrature amplitude modulation (OQAM) has been applied to Beyond 5G (B5G) industrial IoT networks. However, due to the broadcast nature of wireless channels, the FBMC-OQAM industrial IoT network is inevitably vulnerable to adversarial attacks from malicious IoT nodes. To tackle this challenge, the FBMC-OQAM industrial cognitive radio network (ICRNet) is proposed to ensure security at the physical layer. As a pivotal step of ICRNet, blind modulation recognition (BMR) can detect and recognize the modulation type of malicious signals, yet previous works have not accomplished the BMR task of FBMC-OQAM signals in ICRNet nodes. A novel FBMC BMR algorithm is therefore proposed with the transform channel convolution network (TCCNet), in place of a complicated two-dimensional convolution. Firstly, a low-complexity binary constellation diagram (BCD) gridding matrix is designed as the input of TCCNet. Then, a transform channel convolution strategy is developed to convert the image-like BCD matrix into a series-like data format, accelerating the BMR process while keeping discriminative features. Monte Carlo experimental results demonstrate that the proposed TCCNet obtains performance gains of 8% and 40% over the traditional in-phase/quadrature (I/Q)-based and constellation diagram (CD)-based methods at a signal-to-noise ratio (SNR) of 12 dB, respectively. Moreover, the proposed TCCNet runs around 29.682 and 2.356 times faster than the existing CD-Alex Network (CD-AlexNet) and I/Q-Convolutional Long Deep Neural Network (I/Q-CLDNN) algorithms, respectively.
Keywords: intelligent signal recognition; FBMC-OQAM; industrial cognitive radio networks; binary constellation diagram; transform channel convolution
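The two steps described in the abstract can be sketched as follows: binarizing received I/Q samples onto a constellation grid (the BCD matrix), then treating each grid row as a channel of a 1-D sequence so that cheap 1-D convolutions replace 2-D ones. This is a minimal illustrative sketch, not the paper's implementation; the grid window, grid size, kernel, and the QPSK-like test signal are all assumptions.

```python
import numpy as np

def bcd_gridding(iq_samples, grid_size=32):
    """Binarize I/Q samples onto a grid_size x grid_size constellation
    grid: a cell is 1 if any sample falls in it, else 0.
    (Assumed gridding rule; the paper's exact rule may differ.)"""
    i, q = iq_samples.real, iq_samples.imag
    # Map the window [-2, 2) on each axis to integer grid indices.
    i_idx = np.clip(((i + 2.0) / 4.0 * grid_size).astype(int), 0, grid_size - 1)
    q_idx = np.clip(((q + 2.0) / 4.0 * grid_size).astype(int), 0, grid_size - 1)
    bcd = np.zeros((grid_size, grid_size), dtype=np.uint8)
    bcd[q_idx, i_idx] = 1
    return bcd

def transform_channel_conv(bcd, kernel):
    """Treat each row of the image-like BCD matrix as one channel of a
    series-like signal and convolve along the row with a 1-D kernel."""
    return np.stack([np.convolve(row, kernel, mode="same")
                     for row in bcd.astype(float)])

rng = np.random.default_rng(0)
# Noisy QPSK-like symbols stand in for demodulated FBMC-OQAM samples.
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=1000)
noisy = symbols + 0.1 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))

bcd = bcd_gridding(noisy, grid_size=32)
features = transform_channel_conv(bcd, kernel=np.array([0.25, 0.5, 0.25]))
print(bcd.shape, features.shape)
```

The design point being illustrated: after the channel transform, each of the 32 rows is a length-32 series, so the network only needs 1-D filters, which is the source of the reported speed-up over 2-D CD-based models.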
2. Developing phoneme-based lip-reading sentences system for silent speech recognition
Authors: Randa El-Bialy, Daqing Chen, Souheil Fenghour, Walid Hussein, Perry Xiao, Omar H. Karam, Bo Li. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, No. 1, pp. 129-138 (10 pages)
Lip-reading is the process of interpreting speech by visually analysing lip movements. Recent research in this area has shifted from simple word recognition to lip-reading sentences in the wild. This paper uses phonemes as a classification schema for lip-reading sentences, both to explore an alternative schema and to enhance system performance. Different classification schemas have been investigated, including character-based and viseme-based schemas. The visual front-end of the system consists of a spatial-temporal (3D) convolution followed by a 2D ResNet; the phoneme recognition model uses Transformers with multi-headed attention; and a Recurrent Neural Network serves as the language model. The performance of the proposed system has been evaluated on the BBC Lip Reading Sentences 2 (LRS2) benchmark dataset. Compared with state-of-the-art approaches to lip-reading sentences, the proposed system demonstrates improved performance, with a 10% lower word error rate on average under varying illumination ratios.
Keywords: deep learning; deep neural networks; lip-reading; phoneme-based lip-reading; spatial-temporal convolution; transformers
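The phoneme-schema idea in the abstract can be illustrated with a toy decoding stage: the classifier only has to distinguish a few dozen phoneme classes, and mapping phoneme sequences to words is deferred to a lexicon/language-model step. The lexicon below is hypothetical and the greedy longest-match decoder is a stand-in for the paper's Recurrent Neural Network language model.

```python
# Hypothetical toy lexicon; the real system decodes over the LRS2
# vocabulary with an RNN language model rather than a lookup table.
LEXICON = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}

def decode_phonemes(phonemes):
    """Greedy longest-match conversion of a phoneme stream into words."""
    words, i = [], 0
    while i < len(phonemes):
        # Try the longest candidate span first, shrinking until a match.
        for j in range(len(phonemes), i, -1):
            key = tuple(phonemes[i:j])
            if key in LEXICON:
                words.append(LEXICON[key])
                i = j
                break
        else:
            i += 1  # skip a phoneme with no lexicon match
    return words

print(decode_phonemes(["HH", "AH", "L", "OW", "W", "ER", "L", "D"]))
# ['hello', 'world']
```

This separation is why a phoneme schema can outperform a character schema: visually similar characters collapse into the same phoneme class, and word-level ambiguity is resolved later by the language model.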