Journal Articles
2 articles found
1. Prediction Model of Refined Gasoline Blending Formula Based on PSO-DBN (Cited by: 2)
Authors: Wang Xiaoming, Li Wei, Li Yajie, Jiang Dongnian, Liang Chenglong. China Petroleum Processing & Petrochemical Technology (SCIE, CAS), 2022, No. 3, pp. 128-138 (11 pages)
To address the low accuracy of refined gasoline blending formulas in the petrochemical industry, the advantages of deep belief networks (DBNs) in feature extraction and nonlinear processing are exploited, and DBNs are applied to prediction modeling of the conservative blending formula. First, based on historical measured data of refined gasoline blending and the characteristics of the data set, bootstrapping is used to divide the data into training and test sets. Second, because parameter selection for the network is difficult, particle swarm optimization is adopted to find the optimal parameters, replacing the tedious process of manual selection and greatly improving optimization efficiency. In addition, the contrastive divergence algorithm is used for unsupervised forward feature learning and supervised reverse fine-tuning of the network, so as to construct a more accurate prediction model for the conservative formula. Finally, to evaluate the effectiveness of this method, the simulation results are compared with those of traditional modeling methods, showing that the DBN has better prediction performance than error back-propagation and support vector machines and can provide production guidance for refined gasoline blending formulas.
Keywords: formulation, prediction, deep belief network, contrastive divergence, particle swarm optimization
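The abstract above uses particle swarm optimization to replace manual hyperparameter selection for the DBN. A minimal, self-contained PSO sketch is given below; the toy objective, the box bounds, and the swarm settings (`n_particles`, inertia `w`, coefficients `c1`/`c2`) are illustrative assumptions, not the paper's configuration — in the paper's setting the objective would be the DBN's validation error as a function of its hyperparameters.

```python
import random

def pso_minimize(objective, bounds, n_particles=10, n_iters=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over a box-bounded search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialize particle positions uniformly inside the bounds, velocities at zero.
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Standard velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Move the particle and clamp it back inside the bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the DBN validation error, minimized at (3, 0.1).
err = lambda p: (p[0] - 3.0) ** 2 + (p[1] - 0.1) ** 2
best, best_val = pso_minimize(err, [(0.0, 10.0), (0.0, 1.0)])
```

In the paper's use case, `bounds` would cover quantities such as hidden-layer sizes and learning rates, and each `objective` call would train and validate a candidate DBN.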
2. Optimization of deep network models through fine tuning
Authors: M. Arif Wani, Saduf Afzal. International Journal of Intelligent Computing and Cybernetics (EI), 2018, No. 3, pp. 386-403 (18 pages)
Purpose – Many strategies have been put forward for training deep network models; however, stacking several layers of non-linearities typically results in poor propagation of gradients and activations. The purpose of this paper is to explore a two-step strategy in which an initial deep learning model is first obtained by unsupervised learning and then optimized by fine tuning. A number of fine-tuning algorithms are explored for optimizing deep learning models, including a new algorithm that integrates backpropagation with adaptive gain and the dropout technique; the authors evaluate its performance in fine tuning the pretrained deep network.
Design/methodology/approach – The parameters of deep neural networks are first learnt using greedy layer-wise unsupervised pretraining. The proposed technique is then used to perform supervised fine tuning of the deep neural network model. An extensive experimental study evaluates the proposed fine-tuning technique on three benchmark data sets: USPS, Gisette and MNIST. The approach is tested on varying data-set sizes, using randomly chosen training samples of 20, 50, 70 and 100 percent of the original data set.
Findings – The experimental study concludes that the two-step strategy and the proposed fine-tuning technique yield promising results for optimizing deep network models.
Originality/value – This paper proposes several algorithms for fine tuning deep network models. A new approach that integrates the adaptive-gain backpropagation (BP) algorithm with the dropout technique is proposed for fine tuning deep networks. An evaluation and comparison of the various fine-tuning algorithms on the three benchmark data sets is presented.
Keywords: dropout, deep neural network, contrastive divergence, fine tuning of deep neural network, restricted Boltzmann machine, unsupervised pretraining, backpropagation
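Both abstracts rest on the same two-step pipeline: contrastive-divergence pretraining followed by supervised fine tuning. Below is a minimal NumPy sketch of that pipeline for a single RBM layer — CD-1 unsupervised pretraining, then supervised fine tuning with inverted dropout on the hidden layer. Layer sizes, learning rates, and the squared-error loss are illustrative assumptions; the paper's adaptive-gain term and the full multi-layer DBN stack are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_rbm(v_data, n_hidden, epochs=50, lr=0.1):
    """Unsupervised pretraining of one binary RBM layer with CD-1."""
    n_vis = v_data.shape[1]
    W = 0.01 * rng.standard_normal((n_vis, n_hidden))
    b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hidden)
    for _ in range(epochs):
        h0 = sigmoid(v_data @ W + b_hid)                 # positive phase
        h0_s = (rng.random(h0.shape) < h0).astype(float) # sample hidden units
        v1 = sigmoid(h0_s @ W.T + b_vis)                 # one Gibbs step back
        h1 = sigmoid(v1 @ W + b_hid)                     # negative phase
        # CD-1 gradient estimate: data statistics minus reconstruction statistics.
        W += lr * (v_data.T @ h0 - v1.T @ h1) / len(v_data)
        b_vis += lr * (v_data - v1).mean(axis=0)
        b_hid += lr * (h0 - h1).mean(axis=0)
    return W, b_hid

def finetune(x, y, W1, b1, epochs=200, lr=0.5, p_keep=0.8):
    """Supervised fine tuning with inverted dropout on the hidden layer."""
    W2 = 0.01 * rng.standard_normal((W1.shape[1], y.shape[1]))
    b2 = np.zeros(y.shape[1])
    losses = []
    for _ in range(epochs):
        h = sigmoid(x @ W1 + b1)
        mask = (rng.random(h.shape) < p_keep) / p_keep   # inverted dropout
        hd = h * mask
        out = sigmoid(hd @ W2 + b2)
        # Backprop of 0.5 * sum((out - y)^2) through both sigmoid layers.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * mask * h * (1 - h)
        W2 -= lr * hd.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * x.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
        losses.append(0.5 * ((out - y) ** 2).sum())
    return losses

# Toy usage: pretrain on unlabeled inputs, then fine-tune on an AND-style target.
x = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [0.], [0.], [1.]])
W1, b1 = pretrain_rbm(x, n_hidden=4)
losses = finetune(x, y, W1, b1, p_keep=1.0)
```

The pretrained weights `W1` initialize the supervised phase, which is the "two-step strategy" the second abstract evaluates; stacking more `pretrain_rbm` layers greedily would extend this to a full DBN.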