
A comparison of SAE optimization methods for flight-parameter data anomaly detection (Cited by: 2)
Abstract: This paper first reviews the basic principles of the sparse auto-encoder (SAE) and of three optimization methods: stochastic gradient descent, the conjugate gradient method, and the limited-memory quasi-Newton method. It then selects four categories of flight-parameter data from 200 sorties of a certain aircraft type in steady flight and, guided by the practical requirements of flight-parameter anomaly detection, determines the best configuration of algorithm parameters and structural parameters through experiments. Finally, it compares the three optimization methods in terms of reconstruction error on different data sets, convergence speed, and reconstruction accuracy when the sample set is polluted by noise, thereby identifying the method best suited to flight-parameter anomaly detection and providing a reference for future research.
Source: Information Technology (《信息技术》), 2015, No. 12, pp. 181-185.
Keywords: flight-parameter data; anomaly detection; sparse auto-encoder; optimization methods
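The detection pipeline the abstract describes — train an SAE on normal flight-parameter data, then flag samples whose reconstruction error is unusually high — can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation: the network size, learning rate, sparsity penalty, and synthetic data are all assumptions, and plain per-sample stochastic gradient descent stands in for the three optimizers the paper compares.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinySAE:
    """Single-hidden-layer sparse auto-encoder trained by per-sample SGD.
    Illustrative only; hyperparameters are assumptions, not the paper's."""

    def __init__(self, n_in, n_hidden, lr=0.1, rho=0.05, beta=0.1):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)
        self.lr, self.rho, self.beta = lr, rho, beta

    def forward(self, x):
        h = sigmoid(x @ self.W1 + self.b1)      # hidden activations
        return h, h @ self.W2 + self.b2         # linear reconstruction

    def train_step(self, x):
        h, xhat = self.forward(x)
        err = xhat - x                          # reconstruction error
        # Crude per-sample KL-style sparsity gradient on hidden activations.
        sparse = self.beta * (-self.rho / (h + 1e-8)
                              + (1 - self.rho) / (1 - h + 1e-8))
        dh = (err @ self.W2.T + sparse) * h * (1 - h)
        self.W2 -= self.lr * np.outer(h, err)
        self.b2 -= self.lr * err
        self.W1 -= self.lr * np.outer(x, dh)
        self.b1 -= self.lr * dh

    def recon_error(self, x):
        _, xhat = self.forward(x)
        return float(((xhat - x) ** 2).mean())

# Synthetic "normal" samples standing in for 4 flight parameters in steady flight.
normal = rng.normal(0.5, 0.05, (500, 4))
sae = TinySAE(n_in=4, n_hidden=3)
for _ in range(20):
    for x in normal:
        sae.train_step(x)

# Score a clearly abnormal sample against typical normal samples.
normal_score = np.mean([sae.recon_error(x) for x in normal[:50]])
anomaly_score = sae.recon_error(np.array([0.5, 0.5, 0.5, 5.0]))
print(normal_score, anomaly_score)
```

In this scheme the threshold separating "normal" from "anomalous" reconstruction error would be calibrated on held-out normal flights; the paper's comparison concerns how the choice of optimizer affects the reconstruction error, convergence speed, and robustness to noise of the trained SAE.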

