Journal Articles
3 articles found
1. Exponential Stability of Impulsive Stochastic Recurrent Neural Networks with Time-Varying Delays and Markovian Jumping
Author: XU Congcong. Wuhan University Journal of Natural Sciences (CAS-indexed), 2014, No. 1, pp. 71-78 (8 pages)
In this paper, we consider a class of impulsive stochastic recurrent neural networks with time-varying delays and Markovian jumping. Based on some impulsive delay differential inequalities, easy-to-test conditions are obtained under which the dynamics of the neural network is stochastically exponentially stable in the mean square, independent of the time delay. An example is also given to illustrate the effectiveness of our results.
Keywords: exponential stability; stochastic recurrent neural network; Markovian jumping; impulsive; time-varying delays
2. Delay-Dependent Exponential Stability of Stochastic Delayed Recurrent Neural Networks with Markovian Switching
Authors: 刘海峰, 王春华, 魏国亮. Journal of Donghua University (English Edition) (EI, CAS-indexed), 2008, No. 2, pp. 195-199 (5 pages)
The exponential stability problem is investigated for a class of stochastic recurrent neural networks with time delay and Markovian switching. By using Itô's differential formula and Lyapunov stability theory, a sufficient condition for the solvability of this problem is derived in terms of linear matrix inequalities, which can be easily checked by resorting to available software packages. A numerical example and simulation are exploited to demonstrate the effectiveness of the proposed results.
Keywords: exponential stability; stochastic recurrent neural network; linear matrix inequality; time delay; Markovian switching
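The abstract above notes that the derived stability conditions are linear matrix inequalities checkable by available software. As an illustrative sketch only, not the paper's method, and ignoring the stochastic, delay, and Markovian-switching terms, the classical deterministic analogue checks exponential stability of x' = Ax by solving the Lyapunov equation AᵀP + PA = -Q with SciPy; the matrices A and Q below are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable system matrix (illustrative only, not from the paper)
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
Q = np.eye(2)  # any symmetric positive-definite choice works

# Solve the Lyapunov equation A^T P + P A = -Q for P
P = solve_continuous_lyapunov(A.T, -Q)

# x' = A x is exponentially stable iff P is symmetric positive definite
eigs = np.linalg.eigvalsh((P + P.T) / 2)  # symmetrize against round-off
print(eigs.min() > 0)  # True => exponentially stable
```

The full delay-dependent LMIs of the paper would instead be fed to a semidefinite-programming solver (e.g. MATLAB's LMI toolbox or a CVX-style package), as the abstract suggests.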
3. Deep Learning Applied to Computational Mechanics: A Comprehensive Review, State of the Art, and the Classics (cited by: 1)
Authors: Loc Vu-Quoc, Alexander Humer. Computer Modeling in Engineering & Sciences (SCIE, EI-indexed), 2023, No. 11, pp. 1069-1343 (275 pages)
Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in the traditional integration methods. Here, methods (1) and (2) rely on the Long Short-Term Memory (LSTM) architecture, while method (3) relies on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with attention mechanisms to address discontinuous solutions. Both LSTM and attention architectures, together with modern and generalized classic optimizers that include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced works such as shallow networks with infinite width. The review does not only address experts: readers are assumed to be familiar with computational mechanics but not with DL, whose concepts and applications are built up from the basics, aiming to bring first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
Keywords: deep learning breakthroughs; network architectures; backpropagation; stochastic optimization methods from classic to modern; recurrent neural networks; long short-term memory; gated recurrent unit; attention; transformer; kernel machines; Gaussian processes; libraries; physics-informed neural networks; state of the art; history; limitations; challenges; applications to computational mechanics; finite-element matrix integration; improved Gauss quadrature; multiscale geomechanics; fluid-filled porous media; fluid mechanics; turbulence; proper orthogonal decomposition; nonlinear-manifold model-order reduction; autoencoder; hyper-reduction using gappy data; control of large deformable beam
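The PINN methods surveyed in this review can be illustrated with a minimal, hypothetical sketch, not taken from the paper: a small network is trained so that the residual of the toy ODE u'(t) = -u(t) with u(0) = 1 vanishes at collocation points, assuming PyTorch is available. All layer sizes and training settings below are arbitrary choices for the illustration.

```python
import torch

# Minimal PINN sketch for u'(t) = -u(t), u(0) = 1 on [0, 1]
# (exact solution: u(t) = exp(-t))
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

# Collocation points where the ODE residual is enforced
t = torch.linspace(0.0, 1.0, 32).reshape(-1, 1).requires_grad_(True)

for step in range(500):
    u = net(t)
    # du/dt via automatic differentiation (the "physics-informed" part)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    residual = du + u                       # ODE residual u' + u
    ic = net(torch.zeros(1, 1)) - 1.0       # initial condition u(0) = 1
    loss = (residual ** 2).mean() + (ic ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())  # small residual+IC loss after training
```

The same pattern, a differentiable network whose derivatives enter a residual loss, extends to PDEs by sampling collocation points in space-time and adding boundary-condition terms to the loss.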