Abstract
Neural network models for audio tasks, such as automatic speech recognition (ASR) and acoustic scene classification (ASC), are susceptible to noise contamination in real-life applications. To improve audio quality, an enhancement module, which can be developed independently, is typically used at the front-end of the target audio application. In this paper, we present an end-to-end learning solution to jointly optimise the models for audio enhancement (AE) and the subsequent applications. To guide the optimisation of the AE module towards a target application, and especially to overcome difficult samples, we make use of the sample-wise performance measure as an indication of sample importance. In experiments, we consider four representative applications to evaluate our training paradigm, i.e., ASR, speech command recognition (SCR), speech emotion recognition (SER), and ASC. These applications cover speech and non-speech tasks concerning semantic and non-semantic features, as well as transient and global information. The experimental results indicate that our proposed approach can considerably boost the noise robustness of the models, especially at low signal-to-noise ratios, for a wide range of computer audition tasks in everyday-life noisy environments.
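The abstract describes weighting each training sample by its downstream performance so that difficult samples steer the enhancement module harder. A minimal sketch of that idea, assuming a softmax-over-losses weighting scheme and a convex combination of enhancement and task losses (the function names, `temperature`, and `alpha` are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def sample_weights(task_losses, temperature=1.0):
    """Softmax over per-sample task losses: harder samples get larger weights."""
    z = np.asarray(task_losses, dtype=float) / temperature
    z = z - z.max()            # numerical stability
    w = np.exp(z)
    return w / w.sum()         # weights sum to 1

def joint_loss(ae_losses, task_losses, alpha=0.5):
    """Importance-weighted joint loss over a mini-batch.

    ae_losses:   per-sample audio-enhancement losses (e.g., spectral distance)
    task_losses: per-sample downstream-task losses (e.g., cross-entropy)
    alpha:       trade-off between enhancement and task objectives
    """
    w = sample_weights(task_losses)
    per_sample = alpha * np.asarray(ae_losses, dtype=float) \
               + (1.0 - alpha) * np.asarray(task_losses, dtype=float)
    return float(np.sum(w * per_sample))
```

With this weighting, a sample whose downstream loss is high contributes more to the gradient of the joint objective, which matches the abstract's stated goal of overcoming difficult samples.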
Authors
Manuel Milling
Shuo Liu
Andreas Triantafyllopoulos
Ilhan Aslan
Björn W. Schuller
Manuel Milling; Shuo Liu; Andreas Triantafyllopoulos; Ilhan Aslan; Björn W. Schuller (Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Augsburg 86159, Germany; Chair of Health Informatics, Klinikum rechts der Isar, Technical University of Munich, Munich 81675, Germany; Munich Center for Machine Learning, Munich 80333, Germany; Huawei Technologies, Munich 80992, Germany; Munich Data Science Institute, Garching 85748, Germany; Group on Language, Audio and Music, Imperial College London, London SW7 2AZ, U.K.)
Funding
Supported by the Affective Computing & HCI Innovation Research Lab between Huawei Technologies and the University of Augsburg, and by the EU H2020 Project under Grant No. 101135556 (INDUX-R).