
Weighted Deep Stochastic Configuration Networks Based on M-estimator Functions
Abstract: Deep stochastic configuration network (DSCN) is a randomized incremental learning model that starts from a small structure and gradually adds hidden nodes and layers. The input weights and biases of the nodes are assigned under a supervisory mechanism, all hidden nodes are fully connected to the outputs, and the output weights are determined by the least squares method. DSCN therefore requires little manual intervention and offers high learning efficiency and strong generalization ability. However, although the randomized feedforward learning process of DSCN is efficient, its feature learning ability is still insufficient, and as nodes and hidden layers increase the model becomes prone to overfitting. When solving regression problems with noise, the performance of the original DSCN is easily degraded by outliers, which reduces the generalization ability of the model. To improve the regression accuracy and robustness of DSCN, weighted deep stochastic configuration networks (WDSCN) based on M-estimator functions are proposed. First, two common M-estimator functions (Huber and Bisquare) are adopted to compute sample weights and reduce the negative impact of outliers: a sample with a small training error receives a large weight, while a sample with a large training error is treated as an outlier and receives a small weight. Because the sample weight decreases monotonically with the absolute value of the error, the influence of noisy data on the model is reduced and the generalization of the algorithm improves. Meanwhile, the weighted least squares method with an L2 regularization strategy replaces the ordinary least squares method for computing the output weights, which both addresses noisy-data regression and avoids the overfitting problem of DSCN. Second, since a model with L1 regularization helps extract sparse features and improves the accuracy of supervised learning, a stochastic configuration sparse autoencoder (SC-SAE) is designed to further improve the representation ability of WDSCN. SC-SAE uses the supervisory mechanism of DSCN to assign its input parameters, adds an L1 regularization term to the objective function to obtain sparse features, and solves the objective with the alternating direction method of multipliers (ADMM) to determine the SC-SAE output weights. Then, because the encoding process of SC-SAE is random, different SC-SAE models yield diverse features; an effective feature representation is obtained by fusing the features extracted by multiple SC-SAEs for WDSCN training. Finally, experimental results on real-world datasets show that the proposed WDSCN-Huber and WDSCN-Bisquare achieve higher generalization performance and regression accuracy than DSCN, SCN, and other weighted models (e.g., RSC-KDE, RSC-Huber, RSC-IQR, RDSCN-KDE, WBLS-KDE and RBLS-Huber). Ablation experiments further show that WDSCN trained on sparse features fused from multiple different SC-SAE models is superior to the variants without feature fusion, verifying that SC-SAE can extract effective sparse features and improve the learning ability of the weighted models.
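The abstract names Huber and Bisquare as the two M-estimator weight functions. The paper's exact formulation and tuning constants are not reproduced on this page; the following is a minimal NumPy sketch of the standard Huber and Tukey-bisquare weight functions (function names and the default constants c = 1.345 and c = 4.685 are the conventional choices, not taken from the paper), showing how large-error samples are down-weighted.

```python
import numpy as np

def huber_weights(residuals, c=1.345):
    """Huber weight: 1 inside the threshold c, c/|r| outside,
    so large-error samples are down-weighted smoothly but never to zero."""
    r = np.abs(residuals)
    # avoid division by zero for exact-fit samples
    return np.where(r <= c, 1.0, c / np.maximum(r, 1e-12))

def bisquare_weights(residuals, c=4.685):
    """Tukey bisquare weight: (1 - (r/c)^2)^2 inside the threshold,
    0 outside, so gross outliers are rejected entirely."""
    r = np.abs(residuals)
    w = (1.0 - (r / c) ** 2) ** 2
    return np.where(r <= c, w, 0.0)
```

In practice the residuals are usually standardized by a robust scale estimate (e.g., the median absolute deviation) before the weights are computed, which is what makes the standard tuning constants meaningful.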
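The output-weight update replaces ordinary least squares with weighted least squares plus L2 regularization. As a sketch of that standard closed form (the function name and variable names are illustrative, not from the paper): with hidden-layer output matrix H, targets T, sample-weight vector forming W = diag(w), and regularization parameter lambda, the output weights are beta = (H'WH + lambda*I)^(-1) H'WT.

```python
import numpy as np

def weighted_ridge_output_weights(H, T, sample_weights, lam=1e-3):
    """Closed-form weighted least squares with L2 regularization:
    beta = (H^T W H + lam*I)^(-1) H^T W T, with W = diag(sample_weights)."""
    HW = H * sample_weights[:, None]          # row-scale H by the sample weights: W @ H
    A = H.T @ HW + lam * np.eye(H.shape[1])   # regularized weighted Gram matrix
    return np.linalg.solve(A, HW.T @ T)       # HW.T @ T equals H^T W T
```

With all weights equal to 1 and lambda near 0 this reduces to ordinary least squares; down-weighting an outlier row shrinks its influence on beta without discarding the sample.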
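The SC-SAE output weights are obtained by solving an L1-regularized objective with ADMM. The paper's concrete splitting and stopping rules are not given on this page; the following is a minimal sketch of the textbook ADMM lasso solver for min 0.5*||Ax - b||^2 + lam*||x||_1 (all names and the fixed iteration count are illustrative assumptions), which alternates a ridge-like x-update, a soft-thresholding z-update, and a dual update.

```python
import numpy as np

def soft_threshold(x, kappa):
    """Element-wise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - kappa, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    Atb = A.T @ b
    # A^T A + rho*I is positive definite, so factor it once and reuse
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    x = z = u = np.zeros(n)
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update (quadratic)
        z = soft_threshold(x + u, lam / rho)               # z-update (sparsity)
        u = u + x - z                                      # dual ascent
    return z
```

The soft-thresholding step is what produces exactly-zero coefficients, i.e., the sparse features the SC-SAE is after; the Cholesky factor is computed once because the x-update solves the same linear system every iteration.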
Authors: DING Shi-Fei (丁世飞); ZHANG Cheng-Long (张成龙); GUO Li-Li (郭丽丽); ZHANG Jian (张健); DING Ling (丁玲) (School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, Jiangsu 221116; Mine Digitization Engineering Research Center of Ministry of Education (China University of Mining and Technology), Xuzhou, Jiangsu 221116; College of Intelligence and Computing, Tianjin University, Tianjin 300350)
Source: Chinese Journal of Computers (《计算机学报》), 2023, No. 11, pp. 2476-2487 (12 pages). Indexed in EI, CAS, CSCD; Peking University core journal list.
Funding: Supported by the National Natural Science Foundation of China (62276265, 61976216, 62206297, 61672522).
Keywords: deep stochastic configuration network; noisy data; robustness; regression; random neural network