
一种深度生成模型的超参数自适应优化法 (Cited by: 5)

A Method of Adaptive Hyperparameter Optimization for Deep Generative Models
Abstract: Deep generative models (DGMs) deliver strong performance in unsupervised feature extraction and have achieved great success in many areas of artificial intelligence. In both scientific experiments and practical applications, however, setting proper hyperparameters for DGMs remains a major challenge. Traditionally, hyperparameters are chosen from the experience of domain experts or optimized automatically with black-box Bayesian optimization. Because DGMs have many hyperparameters and their training demands substantial computational resources, these traditional approaches cannot optimize them efficiently. This paper proposes an adaptive hyperparameter optimization method based on the sparsity of hidden units: during layer-wise training, a Gaussian process (GP) predicts the best hyperparameter combination in real time from the sparsity of the activated hidden units, and the prediction is applied in the next iteration. Experiments show that the method not only clearly improves the performance of DGMs, but also achieves higher efficiency and accuracy than prominent existing optimization methods.
Authors: YAO Chengwei (姚诚伟); CHEN Gencai (陈根才) (College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China)
Source: Research and Exploration in Laboratory (《实验室研究与探索》), CAS-indexed, Peking University Core Journal (北大核心), 2018, No. 2, pp. 48-53 (6 pages)
Keywords: deep generative models; hyperparameters; Bayesian optimization; unsupervised learning
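
Note: The record above only summarizes the method. As a rough, hedged illustration of the core idea described in the abstract (a Gaussian process relating hyperparameter candidates to observed hidden-unit sparsity, then choosing the next candidate to try), the Python sketch below uses scikit-learn. The sampled learning rates, the observed sparsity values, and the target sparsity of 0.7 are hypothetical placeholders, not values or code from the paper.

# Illustrative sketch only: fit a Gaussian process mapping one hyperparameter
# (here, the learning rate) to the hidden-unit sparsity it produced, then pick
# the candidate whose predicted sparsity is closest to a desired target.
# All numbers below are made-up examples, not results from the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hyperparameter values tried in earlier layer-wise training iterations
# and the hidden-unit sparsity each produced (fraction of inactive units).
tried_lr = np.array([[0.001], [0.005], [0.01], [0.05], [0.1]])
observed_sparsity = np.array([0.92, 0.85, 0.74, 0.55, 0.41])

# Fit a GP surrogate of the sparsity response.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(tried_lr, observed_sparsity)

# Predict sparsity over a grid of candidate learning rates and select the
# candidate whose predicted sparsity is closest to a (hypothetical) target.
candidates = np.linspace(0.001, 0.1, 100).reshape(-1, 1)
mean, std = gp.predict(candidates, return_std=True)
target_sparsity = 0.7
best = candidates[np.argmin(np.abs(mean - target_sparsity))]
print(f"next learning rate to try: {best[0]:.4f}")

In an adaptive loop of the kind the abstract describes, the selected candidate would be applied to the next training iteration, the resulting sparsity measured, and the GP refit with the new observation; the sketch shows only a single such step.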