Funding: Supported by the National Natural Science Foundation of China (Grant No. 52008402) and the Central South University autonomous exploration project (Grant No. 2021zzts0790).
Abstract: The prediction of slope stability is one of the critical concerns in geotechnical engineering. Conventional stochastic analysis of spatially variable slopes is time-consuming and computationally demanding. To assess slope stability problems with a more desirable computational effort, many machine learning (ML) algorithms have been proposed. However, most ML-based techniques require that the training data be in the same feature space and have the same distribution, so the model may need to be rebuilt when the spatial distribution changes. This paper presents a new ML-based algorithm that combines a principal component analysis (PCA)-based neural network (NN) with transfer learning (TL) techniques (i.e. PCA-NN-TL) to conduct the stability analysis of slopes with different spatial distributions. Monte Carlo simulation coupled with finite element analysis is first conducted for data acquisition, considering the spatial variability of the cohesive strength or friction angle of soils for eight slopes with the same geometry. The PCA method is incorporated into the neural network algorithm (i.e. PCA-NN) to increase computational efficiency by reducing the number of input variables. The PCA-NN algorithm is found to improve the prediction of slope stability for a given slope, in terms of both accuracy and computational effort, compared with two other algorithms (NN and decision trees, DT). Furthermore, the PCA-NN-TL algorithm shows great potential for assessing slope stability even with fewer training data.
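To make the PCA-NN idea concrete, the following is a minimal Python sketch, not the authors' code: it assumes hypothetical arrays X (flattened random-field realizations of soil strength from the Monte Carlo/finite-element runs) and y (the corresponding factors of safety), and the synthetic data, component count, and network size are illustrative choices rather than the paper's settings.

    # Minimal sketch: PCA to compress spatially variable soil-property fields,
    # then a small NN to predict the factor of safety. X and y are synthetic
    # stand-ins for the Monte Carlo / finite-element training data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.lognormal(mean=3.0, sigma=0.3, size=(1000, 400))  # hypothetical random fields
    y = X.mean(axis=1) / 25.0                                 # hypothetical factors of safety

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = make_pipeline(
        PCA(n_components=20),  # reduce 400 inputs to 20 principal components
        MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
    )
    model.fit(X_tr, y_tr)
    print("R^2 on held-out samples:", model.score(X_te, y_te))

Reducing the input dimension before training is what keeps the surrogate cheap; a TL step would then reuse the fitted weights as the starting point for a slope with a different spatial distribution.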
Funding: This work was performed under the auspices of the National Nuclear Security Administration of the US Department of Energy at Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. The authors gratefully acknowledge the support of the US Department of Energy National Nuclear Security Administration Advanced Simulation and Computing Program. The Los Alamos unlimited release number is LA-UR-19-32257.
Abstract: We present results from a machine learning (ML) approach to the solution of the Riemann problem for the Euler equations of fluid dynamics. The Riemann problem is an initial-value problem with piecewise-constant initial data, and it represents a mathematical model of the shock tube. The solution of the Riemann problem is the building block for many numerical algorithms in computational fluid dynamics, such as finite-volume or discontinuous Galerkin methods. Therefore, a fast and accurate approximation of the solution of the Riemann problem and construction of the associated numerical fluxes are of crucial importance. The exact solution of the shock tube problem is fully described by the intermediate pressure and mathematically reduces to finding a solution of a nonlinear equation. Prior to delving into the complexities of ML for the Riemann problem, we consider a much simpler, yet very informative, problem: learning the roots of quadratic equations from their coefficients. We compare two approaches: (i) Gaussian process (GP) regression, and (ii) neural network (NN) approximation. Of these, NNs prove to be more robust and efficient, although GPs can be appreciably more accurate (by about 30%). We then use our experience with the quadratic equation to apply the GP and NN approaches to learn the exact solution of the Riemann problem from the initial data or the coefficients of the gas equation of state (EOS). We compare GP and NN approximations in both regression and classification analyses and discuss the potential benefits and drawbacks of the ML approach.
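The toy problem of learning quadratic roots can be sketched as follows; this is our own illustration of the setup, not the paper's code, and the sampling ranges, kernel, and network size are arbitrary assumptions.

    # Learn a root of x^2 + b*x + c = 0 from the coefficients (b, c),
    # comparing GP regression with an NN approximation.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    b = rng.uniform(-4.0, 4.0, 500)
    c = rng.uniform(-4.0, 0.0, 500)        # c <= 0 guarantees real roots
    X = np.column_stack([b, c])
    y = (-b + np.sqrt(b**2 - 4*c)) / 2.0   # the larger root, from the quadratic formula

    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X[:400], y[:400])
    nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                      random_state=1).fit(X[:400], y[:400])
    for name, model in [("GP", gp), ("NN", nn)]:
        err = np.abs(model.predict(X[400:]) - y[400:]).max()
        print(name, "max abs error:", err)

The same regression template carries over to the Riemann problem, with the intermediate pressure in place of the root and the initial data or EOS coefficients in place of (b, c).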
Funding: Supported by the National Key Research and Development Program of China (Nos. 2017YFA0700902 and 2017YFB1003101), the National Natural Science Foundation of China (Nos. 61472396, 61432016, 61473275, 61522211, 61532016, 61521092, 61502446, 61672491, 61602441, 61602446, 61732002, and 61702478), the 973 Program of China (No. 2015CB358800), the National Science and Technology Major Project (No. 2018ZX01031102), the Transformation and Transfer of Scientific and Technological Achievements of the Chinese Academy of Sciences (No. KFJ-HGZX-013), and the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDBS01050200).
Abstract: In recent years, neural networks (NNs) have received increasing attention from both academia and industry. The significant diversity among existing NNs, as well as among their hardware platforms, makes NN programming a daunting task. In this paper, a domain-specific language (DSL) for NNs, the neural network language (NNL), is proposed to deliver productive NN programming and portable NN execution across different hardware platforms. The productivity and flexibility of NN programming are enabled by abstracting NNs as a directed graph of blocks. We use the language to describe four representative and widely used NNs and run them on three different hardware platforms (CPU, GPU, and an NN accelerator). Experimental results show that NNs written in the proposed language perform, on average, 14.5% better than the baseline implementations across the three platforms. Moreover, compared with the Caffe framework, which specifically targets the GPU platform, the code achieves similar performance.
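The abstract does not show NNL's concrete syntax, so the following Python sketch only illustrates the "directed graph of blocks" abstraction it describes; the Block class, operator names, and topological traversal are hypothetical stand-ins for what a platform-specific backend might consume.

    # Hypothetical sketch: blocks are nodes, data dependencies are edges, and a
    # backend would walk the graph in dependency order to emit platform code.
    from dataclasses import dataclass, field

    @dataclass
    class Block:
        name: str
        op: str                  # e.g. "conv", "relu", "fc" -- illustrative ops
        inputs: list = field(default_factory=list)

    def topo_order(outputs):
        """Return blocks in dependency order via depth-first search."""
        seen, order = set(), []
        def visit(block):
            if block.name in seen:
                return
            seen.add(block.name)
            for parent in block.inputs:
                visit(parent)
            order.append(block)
        for block in outputs:
            visit(block)
        return order

    data = Block("data", "input")
    conv = Block("conv1", "conv", [data])
    act  = Block("relu1", "relu", [conv])
    fc   = Block("fc1", "fc", [act])
    print([b.name for b in topo_order([fc])])  # ['data', 'conv1', 'relu1', 'fc1']

Because the graph is the single source of truth, the same network description can be lowered to CPU, GPU, or accelerator code without rewriting the model.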
Abstract: Intrusion detection systems can effectively detect anomalous attack behavior in networks and are vital to network security. At present, many intrusion detection methods have low detection rates for the attack categories Probe (probing), U2R (user to root), and R2L (remote to local). To address this problem, a new hybrid multi-level intrusion detection model is proposed to detect normal and anomalous network behavior. The model first applies a KNN (K nearest neighbors) outlier detection algorithm to detect and remove outliers, yielding a small, high-quality training dataset. Next, exploiting the similarity of network traffic, a category-based detection partitioning method is proposed; it prevents different anomalous behaviors from interfering with one another during detection, which is especially beneficial for detecting low-traffic attacks. Combined with this partitioning method, a multi-level random forest model is constructed to detect anomalous network behavior, improving the detection of network attacks. The popular KDD (knowledge discovery and data mining) Cup 1999 dataset is used to evaluate the proposed model. Compared with other algorithms, the proposed method achieves markedly better accuracy and detection rates, and it effectively detects the Probe, U2R, and R2L attack types.
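A minimal Python sketch of the two-step pipeline follows, under our own assumptions: a KNN-based local outlier factor stands in for the paper's KNN outlier detector, the per-category partitioning and multi-level structure are omitted, and X and y are hypothetical arrays standing in for KDD Cup 1999 features and labels.

    # Step 1: KNN-based outlier removal to get a smaller, higher-quality
    # training set. Step 2: fit a random forest on the cleaned data.
    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)
    X = rng.normal(size=(2000, 10))              # synthetic stand-in features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)      # synthetic stand-in labels

    inlier = LocalOutlierFactor(n_neighbors=20).fit_predict(X) == 1
    X_clean, y_clean = X[inlier], y[inlier]

    clf = RandomForestClassifier(n_estimators=200, random_state=2)
    clf.fit(X_clean, y_clean)
    print("kept", inlier.sum(), "of", len(X), "samples;",
          "train accuracy:", clf.score(X_clean, y_clean))

In the full model, this forest would be trained separately per traffic category so that low-traffic attacks such as U2R and R2L are not swamped by high-volume classes.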
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 60503036 and 60473073, the Fok Ying Tong Education Foundation of China under Grant No. 104027, and the National Grand Fundamental Research 973 Program of China under Grant No. 2006CB303000.
Abstract: For a class of nonlinear systems with uncertain system functions and uncertain gain functions of unknown sign, a robust adaptive neural network control algorithm is proposed. The algorithm uses an RBF neural network (radial basis function neural network, RBFNN) to approximate the model uncertainty, while external disturbances and modeling errors are compensated by a nonlinear damping term. By combining dynamic surface control (DSC) with the backstepping method, the explosion-of-complexity problem of backstepping is eliminated and the complexity of the controller is reduced. In particular, a Nussbaum function is employed to handle the uncertain virtual control gain functions of unknown sign, which not only avoids possible controller singularity but also significantly reduces the number of online learning parameters of the overall system. Combined with the advantages of the DSC method, this greatly reduces the computational load of the control algorithm and facilitates computer implementation. Stability analysis proves that the resulting closed-loop system is semi-globally uniformly ultimately bounded (SGUUB) and that the tracking error converges to a small neighborhood of the origin. Finally, computer simulation results demonstrate the effectiveness of the proposed controller.
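Two of the controller's building blocks can be sketched in Python as follows; the Gaussian RBF regressor vector and the particular Nussbaum-type gain N(z) = z^2 cos(z) are standard illustrative choices and assumptions on our part, not the paper's exact design.

    # Building blocks of the adaptive controller: an RBF feature vector used
    # to approximate the unknown system function as f(x) ~ W^T phi(x), and a
    # Nussbaum-type gain used to handle a control direction of unknown sign.
    import numpy as np

    def rbf_features(x, centers, width=1.0):
        """Gaussian RBF regressor vector phi(x)."""
        return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

    def nussbaum(zeta):
        """A standard Nussbaum-type gain, N(z) = z^2 * cos(z)."""
        return zeta ** 2 * np.cos(zeta)

    centers = np.linspace(-2, 2, 9)   # RBF centers over an assumed operating range
    phi = rbf_features(0.5, centers)
    W = np.zeros_like(centers)        # adaptive weights, updated online in the loop
    print("f_hat(0.5) =", W @ phi, " N(3.0) =", nussbaum(3.0))

In the closed loop, W would be driven by an adaptive law derived from the Lyapunov analysis, and nussbaum() would multiply the virtual control so that stability holds whichever sign the true gain takes.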