
Improvement of joint optimization of masks and deep recurrent neural networks for monaural speech separation using optimized activation functions (Cited by: 2)

Abstract: Single-channel speech separation has been a challenging task for the speech separation community for the last three decades. With deep learning, speech can now be separated using deep neural networks (DNN) and deep recurrent neural networks (DRNN), and researchers continue to refine DNN and DRNN models for monaural speech separation. In this paper, we improve an existing DRNN- and DNN-based speech separation model by using optimized activation functions. Instead of the rectified linear unit (ReLU), we implement the leaky ReLU, the exponential linear unit, an exponential function, the inverse square root linear unit, and the inverse cubic root linear unit (ICRLU) as activation functions. The ICRLU and the exponential function are new activation functions proposed in this work. These activation functions overcome the dying ReLU problem, achieve better separation results than the ReLU function, and also reduce the computational cost of DNN- and DRNN-based monaural speech separation.
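For reference, the sketch below shows minimal NumPy definitions of the standard activation functions named in the abstract (leaky ReLU, ELU, ISRLU). The abstract does not give the formula for the proposed ICRLU or the exponential activation, so the ICRLU shown here is only a hypothetical analog of the ISRLU with the square root replaced by a cube root; it is an assumption for illustration, not the paper's definition.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: small negative slope, so units never output exactly zero
    # gradient for x < 0 (avoids the dying ReLU problem).
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Exponential linear unit: linear for x > 0, smooth saturation
    # toward -alpha for large negative inputs.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def isrlu(x, alpha=1.0):
    # Inverse square root linear unit: linear for x >= 0,
    # x / sqrt(1 + alpha * x^2) for x < 0.
    return np.where(x >= 0, x, x / np.sqrt(1.0 + alpha * x * x))

def icrlu(x, alpha=1.0):
    # Hypothetical ICRLU sketch by analogy with the ISRLU (the paper's
    # actual formula is not given in this abstract):
    # x / cbrt(1 + alpha * |x|^3) for x < 0.
    return np.where(x >= 0, x, x / np.cbrt(1.0 + alpha * np.abs(x) ** 3))
```

Like the leaky ReLU and ISRLU, such functions keep a nonzero response for negative inputs, which is what prevents units from "dying" as plain ReLU units can.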
Source: Chinese Journal of Acoustics (CSCD), 2020, Issue 3, pp. 420-432 (13 pages). English edition of Acta Acustica (声学学报).
Funding: Supported by the National Natural Science Foundation of China (61671418) and the Advanced Research Fund of the University of Science and Technology of China.

