
The Connotation and Classification of Deep Learning Interpretability
Abstract: In recent years, deep learning has been widely applied across many industries with remarkable results. Although deep learning rests on mathematical and statistical principles, it still lacks a clear explanation of how task knowledge is learned and represented. This gap in theoretical research means that even when various training methods yield a model with satisfactory outputs, we cannot explain how the model works internally to produce those effective results. Starting from the connotation and classification of deep learning interpretability, this paper expounds the interpretability of deep learning, in the hope of aiding other scholars' research.
Author: Yu Zhizhi (Guangdong Patent Examination Cooperation Center, Patent Office of the State Intellectual Property Office, Guangzhou 510535)
Source: Modern Computer (《现代计算机》), 2022, No. 20, pp. 68-70 (3 pages)
Keywords: interpretability; deep learning; classification