Funding: Supported by the National Key R&D Program of China (No. 2020AAA0108904) and the Science and Technology Plan of Shenzhen (No. JCYJ20200109140410340).
Abstract: Audio-visual wake word spotting is a challenging multi-modal task that exploits visual information from lip motion patterns to supplement acoustic speech and improve overall detection performance. However, most audio-visual wake word spotting models are only suitable for simple single-speaker scenarios and have high computational complexity, so further development is hindered by complex multi-person scenarios and the computational limitations of mobile environments. In this paper, a novel audio-visual model is proposed for on-device multi-person wake word spotting. First, an attention-based audio-visual voice activity detection module is presented, which generates an attention score matrix from the audio and visual representations to derive an active-speaker representation. Second, knowledge distillation is introduced to transfer knowledge from a large model to the on-device model, keeping the model size under control. Moreover, a new audio-visual dataset, PKU-KWS, is collected for sentence-level multi-person wake word spotting. Experimental results on the PKU-KWS dataset show that this approach outperforms previous state-of-the-art methods.
Abstract: Experimental single-case studies on the automatic processing of emotion were carried out on a sample of people with an anxiety disorder. Participants were required to take three Audio Visual Entrainment (AVE) sessions to test for the anxiety reduction claimed by some academic research. Explicit reports were measured, as well as pre-attentive bias toward stressful information, using affective priming studies before and after the AVE intervention. Group analysis shows that AVE program applications do reduce anxiety, producing significant changes in explicit reports of anxiety levels and in automatic processing bias of emotion. However, case-by-case analysis of six anxious participants shows that even though all of the participants report emotional improvement after the intervention, not all of them reduce or eliminate their dysfunctional bias toward stressful information. Rather, they show a variety of processing styles in response to the intervention, and some of them show no change at all. Implications of this differential effect for clinical settings are discussed.