Funding: National Key Research and Development Program of China, Grant/Award Number: 2021YFB3201702; National Natural Science Foundation of China, Grant/Award Number: 12074403.
Abstract: The head-related transfer function (HRTF) plays a vital role in immersive virtual reality and augmented reality technologies, especially in spatial audio synthesis for binaural reproduction. This article proposes a deep learning method that takes generic HRTF amplitudes and anthropometric parameters as input features for individual HRTF generation. Fully convolutional neural networks were designed to predict each individual HRTF amplitude spectrum in all spatial directions from the key anthropometric parameters and the generic HRTF amplitudes, while the interaural time delay (ITD) was predicted by a transformer module. In the amplitude prediction model, an attention mechanism was adopted to better capture the relationship between HRTF amplitude spectra at two directions separated by large angles in space. Finally, using the minimum-phase model, the predicted amplitude spectra and ITDs were combined to obtain a set of individual head-related impulse responses. Besides training the HRTF amplitude and ITD generation models separately, their joint training was also considered and evaluated. The root-mean-square error and the log-spectral distortion were selected as objective metrics to evaluate performance. Subjective experiments further showed that the auditory source localisation performance of the proposed method was better than that of other methods in most cases.
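The abstract states that the predicted amplitude spectra and ITDs are combined through the minimum-phase model to form head-related impulse responses. The sketch below shows one standard way this step is typically done (real-cepstrum minimum-phase reconstruction plus an integer-sample interaural delay); the function names, the ITD sign convention, and the cepstral approach are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def minimum_phase_hrir(magnitude, n_fft):
    """Minimum-phase HRIR from a one-sided magnitude spectrum via the real cepstrum.

    magnitude: n_fft // 2 + 1 linear magnitude values (n_fft assumed even).
    Returns a length-n_fft impulse response.
    """
    # Hermitian-symmetric log-magnitude spectrum over all n_fft bins
    log_mag = np.log(np.maximum(magnitude, 1e-12))
    full_log_mag = np.concatenate([log_mag, log_mag[-2:0:-1]])

    # Real cepstrum, then fold it to keep only the minimum-phase part
    cepstrum = np.fft.ifft(full_log_mag).real
    window = np.zeros(n_fft)
    window[0] = 1.0
    window[1:n_fft // 2] = 2.0
    window[n_fft // 2] = 1.0
    min_phase_spectrum = np.exp(np.fft.fft(window * cepstrum))

    return np.fft.ifft(min_phase_spectrum).real


def apply_itd(hrir_left, hrir_right, itd_seconds, fs):
    """Delay the contralateral ear by the predicted ITD (integer samples here;
    a positive ITD is assumed to mean the sound reaches the left ear first)."""
    delay = int(round(abs(itd_seconds) * fs))
    if itd_seconds > 0:
        hrir_right = np.concatenate([np.zeros(delay), hrir_right])[:len(hrir_right)]
    elif itd_seconds < 0:
        hrir_left = np.concatenate([np.zeros(delay), hrir_left])[:len(hrir_left)]
    return hrir_left, hrir_right
```

A fractional-delay filter could replace the integer-sample shift if finer ITD resolution is needed; the abstract does not specify which variant was used.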
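The abstract names root-mean-square error and log-spectral distortion (LSD) as the objective metrics. As the exact formulas are not given there, the following is a minimal sketch of the commonly used definitions; the choice of frequency range and of averaging over directions is left to the caller and is an assumption on my part.

```python
import numpy as np

def log_spectral_distortion(h_true, h_pred, eps=1e-12):
    """LSD in dB between measured and predicted HRTF magnitudes over the evaluated bins."""
    diff_db = 20.0 * np.log10((np.abs(h_true) + eps) / (np.abs(h_pred) + eps))
    return np.sqrt(np.mean(diff_db ** 2))

def rmse(x_true, x_pred):
    """Root-mean-square error, e.g. between predicted and measured ITDs."""
    return np.sqrt(np.mean((np.asarray(x_true) - np.asarray(x_pred)) ** 2))
```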