Abstract
To improve the performance of the multi-sound event detection task, this paper conducts an in-depth study of the Cascade of Asymmetric Resonators with Fast-Acting Compression (CARFAC) digital cochlear model and proposes a multi-sound event detection method based on fused auditory features. First, CARFAC is used to extract the Neural Activity Pattern (NAP) of the mixed sound. The NAP is then concatenated with Gammatone Frequency Cepstral Coefficients (GFCC) to generate fused auditory features, which are fed into a Convolutional Recurrent Neural Network (CRNN) for fully supervised learning to detect urban sound events. Experimental results show that, under low signal-to-noise ratios and with a larger number of overlapping events, the fused auditory features exhibit better robustness and multi-sound event detection performance than individual features such as NAP, MFCC, and GFCC.
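Below is a minimal sketch of the fused-feature CRNN pipeline described in the abstract, written in PyTorch. It is not the authors' implementation: the CARFAC NAP and GFCC extraction are replaced by placeholder tensors, and the class name `FusedFeatureCRNN`, the tensor shapes, and all network hyperparameters are illustrative assumptions, since the paper's exact configuration is not given here.

```python
# Illustrative sketch (not the paper's implementation): NAP and GFCC feature
# maps are concatenated along the feature axis and fed to a small CRNN that
# outputs frame-wise multi-label sound event probabilities.
import torch
import torch.nn as nn


class FusedFeatureCRNN(nn.Module):
    """Toy CRNN for multi-label sound event detection on fused auditory features."""

    def __init__(self, n_feat_bins: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d((1, 4)),   # pool only the feature axis, keep time resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d((1, 4)),
        )
        cnn_out = 64 * (n_feat_bins // 16)          # channels * remaining feature bins
        self.rnn = nn.GRU(cnn_out, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                           # x: (batch, time, feat_bins)
        x = x.unsqueeze(1)                          # -> (batch, 1, time, feat_bins)
        x = self.cnn(x)                             # -> (batch, 64, time, feat_bins // 16)
        x = x.permute(0, 2, 1, 3).flatten(2)        # -> (batch, time, 64 * feat_bins // 16)
        x, _ = self.rnn(x)
        return torch.sigmoid(self.head(x))          # frame-wise event probabilities


# Placeholder features standing in for CARFAC NAP and GFCC extraction (hypothetical sizes).
batch, frames, nap_channels, gfcc_coeffs, n_classes = 2, 100, 64, 32, 10
nap = torch.rand(batch, frames, nap_channels)       # frame-aligned NAP channels
gfcc = torch.rand(batch, frames, gfcc_coeffs)       # frame-aligned GFCC coefficients
fused = torch.cat([nap, gfcc], dim=-1)              # fusion = concatenation along the feature axis

model = FusedFeatureCRNN(n_feat_bins=nap_channels + gfcc_coeffs, n_classes=n_classes)
probs = model(fused)                                # (batch, frames, n_classes)
print(probs.shape)
```

Concatenating the NAP channels with the GFCC coefficients along the feature axis mirrors the fusion step described in the abstract; for the fully supervised multi-label setup, binary cross-entropy over the sigmoid outputs would be the usual training loss.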
Authors
LUO Ji, XIA Xiu-Yu (College of Electronics and Information Engineering, Sichuan University, Chengdu 610064, China)
Source
Journal of Sichuan University (Natural Science Edition)
Indexed in: CAS; CSCD; Peking University Core Journals (北大核心)
2024, No. 4, pp. 225-231 (7 pages)
Funding
Joint Fund Project of the National Natural Science Foundation of China (U1733109).
Keywords
Digital cochlear model
Neural activity pattern
Fused auditory features
Sound event detection
Four-fold cross validation