Funding: This research was supported by a grant from the National Natural Science Foundation of China (No. 30973309).
Abstract: Background Many factors interfere with a listener attempting to understand speech in noisy environments. Spatial hearing, by which speech and noise can be spatially separated, may play a crucial role in speech recognition in the presence of competing noise. This study aimed to assess whether, and to what degree, spatial hearing benefits speech recognition in young normal-hearing participants in both quiet and noisy environments. Methods Twenty-eight young participants were tested with the Mandarin Hearing in Noise Test (MHINT) in quiet and noisy environments. The assessment method was characterized by modifications of the speech and noise configurations, as well as by changes in the speech presentation mode. The benefit of spatial hearing was measured as the variation in speech recognition threshold (SRT) between speech condition 1 (SC1) and speech condition 2 (SC2). Results No significant difference in SRT was found between SC1 and SC2 in quiet. The SRT in SC1 was about 4.2 dB lower than that in SC2 in both the speech-shaped and four-babble noise conditions. SRTs measured in both SC1 and SC2 were lower in the speech-shaped noise condition than in the four-babble noise condition. Conclusion Spatial hearing in young normal-hearing participants contributes to speech recognition in noisy environments, but provides no benefit to speech recognition in quiet environments, which may be because auditory extrinsic redundancy offsets the lack of spatial hearing.
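As a hedged illustration of how an SRT difference between two speech conditions can be quantified, the sketch below simulates a generic 1-up/1-down adaptive track converging on a speech recognition threshold and reports the spatial benefit as the SRT difference. It is not the MHINT procedure; the step size, trial count, simulated listener model, and the "true" SRT values assigned to SC1 and SC2 are purely illustrative assumptions.

```python
# Minimal sketch of a 1-up/1-down adaptive SRT track (not the MHINT procedure).
import numpy as np

def adaptive_srt(true_srt_db, n_trials=20, step_db=2.0, start_snr_db=0.0, seed=0):
    """Raise the SNR after an incorrect response and lower it after a correct one,
    then estimate the SRT as the mean SNR over the second half of the track."""
    rng = np.random.default_rng(seed)
    snr, track = start_snr_db, []
    for _ in range(n_trials):
        # Simulated listener: probability of a correct response grows with (SNR - true SRT).
        p_correct = 1.0 / (1.0 + np.exp(-(snr - true_srt_db)))
        correct = rng.random() < p_correct
        track.append(snr)
        snr += -step_db if correct else step_db
    return float(np.mean(track[len(track) // 2:]))

# Hypothetical "true" SRTs for two illustrative speech/noise configurations.
srt_sc1 = adaptive_srt(true_srt_db=-8.0, seed=1)
srt_sc2 = adaptive_srt(true_srt_db=-3.8, seed=2)
print(f"SRT(SC1) = {srt_sc1:.1f} dB, SRT(SC2) = {srt_sc2:.1f} dB, "
      f"spatial benefit = {srt_sc2 - srt_sc1:.1f} dB")
```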
Funding: Supported by the National Basic Research Program of China (No. 2002CB312102).
Abstract: In research on spatial hearing and the realization of virtual auditory space, it is important to model head-related transfer functions (HRTFs) or head-related impulse responses (HRIRs) effectively. In our study, we carried out adaptive non-linear approximation in the wavelet-transform domain. The results show that the adaptive non-linear approximation model of HRIRs is a more effective data-reduction model: it is faster and, under a relative mean square error (MSE) criterion, on average 5 dB better than the traditional principal component analysis (PCA, Karhunen-Loève transform) model. Furthermore, we also discussed the choice of the best bases for the time-frequency representation of HRIRs; the results show that local cosine bases are better suited to the adaptive approximation of HRIRs than wavelet and wavelet packet bases, although the improvement from local cosine bases is not pronounced. Finally, to model the HRIRs more faithfully, we chose optimal time-frequency atoms from a redundant dictionary to decompose the HRIR signals, achieving better results than all the previous models.
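As a hedged sketch (not the authors' implementation) of the two data-reduction approaches compared above, the code below applies a non-linear wavelet approximation that keeps the k largest-magnitude coefficients of each HRIR, and a linear PCA reconstruction with k components, and scores both with a relative MSE in dB. The synthetic HRIR-like data, the filter length, and the choice of the db4 wavelet and the PyWavelets library are illustrative assumptions.

```python
# Sketch: non-linear wavelet approximation vs. PCA for HRIR data reduction,
# compared under a relative MSE criterion. Data are synthetic, not measured HRIRs.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n_dirs, n_taps = 200, 128                      # hypothetical number of directions / filter length
hrirs = rng.standard_normal((n_dirs, n_taps)) * np.exp(-np.arange(n_taps) / 20.0)

def rel_mse_db(x, x_hat):
    """Relative MSE in dB: 10*log10(||x - x_hat||^2 / ||x||^2)."""
    return 10.0 * np.log10(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

def wavelet_nonlinear_approx(x, k, wavelet="db4"):
    """Adaptive (per-signal) approximation: keep the k largest-magnitude wavelet coefficients."""
    coeffs = pywt.wavedec(x, wavelet)
    flat, slices = pywt.coeffs_to_array(coeffs)
    idx = np.argsort(np.abs(flat))[::-1]       # coefficient indices sorted by magnitude
    kept = np.zeros_like(flat)
    kept[idx[:k]] = flat[idx[:k]]
    rec = pywt.waverec(pywt.array_to_coeffs(kept, slices, output_format="wavedec"), wavelet)
    return rec[: len(x)]

def pca_approx(X, k):
    """Linear (non-adaptive) approximation: project every HRIR onto the first k principal components."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean + (X - mean) @ Vt[:k].T @ Vt[:k]

k = 16                                          # number of retained coefficients / components
wav_err = np.mean([rel_mse_db(x, wavelet_nonlinear_approx(x, k)) for x in hrirs])
pca_err = np.mean([rel_mse_db(x, y) for x, y in zip(hrirs, pca_approx(hrirs, k))])
print(f"wavelet non-linear approx: {wav_err:.1f} dB   PCA ({k} components): {pca_err:.1f} dB")
```

With this setup, lower (more negative) values indicate a better reconstruction at the same number of retained terms, which is the sense in which the abstract reports one model as "5 dB better" than another.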