It is shown in this paper that if the hidden-layer units take a sinusoidal activation function, the optimum weights of a three-layer feedforward neural network can be explicitly solved by relating the layered neural network to a truncated Fourier series expansion. Based on this result, two approaches are presented: one is suited to the case where detailed statistical information is available or can be easily estimated; the other is data-adaptive and can be treated as the solution of a standard least-squares problem. The latter is best suited to real-time processing and slowly time-varying applications, since it can be straightforwardly implemented by the traditional LMS or RLS adaptive algorithms. It is also shown that for both approaches the resulting networks can form arbitrary mappings. With the present approaches, the conventional training procedure, which is usually very time-consuming, can be avoided.
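The closed-form solution described above can be illustrated with a minimal sketch: fixed sinusoidal hidden units at harmonic frequencies act as a truncated Fourier basis, so the output weights reduce to a linear least-squares problem. The target function, number of harmonics, and variable names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sign(np.sin(x))  # example target mapping (square wave), an assumption

n_harmonics = 9
# Hidden layer: sin(kx) and cos(kx) units plus a bias column form the
# truncated Fourier basis; only the output weights remain to be solved.
H = np.column_stack(
    [np.ones_like(x)]
    + [np.sin(k * x) for k in range(1, n_harmonics + 1)]
    + [np.cos(k * x) for k in range(1, n_harmonics + 1)]
)
# Optimum output weights in closed form via linear least squares -- no
# iterative backpropagation training is needed.
w, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ w
mse = float(np.mean((y - y_hat) ** 2))
```

In a data-adaptive setting, the same output weights could instead be tracked sample by sample with an LMS or RLS update, as the abstract notes.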
A digital predistortion method for RF power amplifiers is proposed, based on a genetic algorithm and a real-valued neural network with a low-order generalized memory polynomial. The method cascades a generalized memory polynomial model, optimized by a genetic algorithm, with a neural network model to better match the correction model to the power amplifier's distortion. This not only improves the correction capability of the model but also accelerates network convergence. Experiments with a 60 MHz three-carrier LTE signal show that, compared with a real-valued time-delay neural network model, convergence speed improves significantly and the adjacent channel leakage ratio (ACLR) improves by about 6 dB.
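The polynomial stage of such a predistorter can be sketched as follows. For simplicity this uses a plain memory-polynomial basis (a simplification of the paper's generalized memory polynomial, which also includes cross-lag terms), identified by least squares against a toy memoryless distortion model; the orders and the toy model are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, M = 2000, 3, 2  # samples, nonlinearity order, memory depth (illustrative)
# Complex baseband excitation (stand-in for a measured LTE signal).
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def mp_basis(x, K, M):
    """Memory-polynomial basis terms x[n-m] * |x[n-m]|^(k-1)."""
    cols = []
    for m in range(M + 1):
        xm = np.roll(x, m)  # circular delay, acceptable for a sketch
        for k in range(1, K + 1):
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

# Toy memoryless PA distortion to identify (stand-in for measured PA output).
y = x - 0.1 * x * np.abs(x) ** 2

Phi = mp_basis(x, K, M)
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # model coefficients in one shot
err = float(np.linalg.norm(y - Phi @ c) / np.linalg.norm(y))
```

In the paper's scheme, a genetic algorithm would select the polynomial structure and a neural network stage would be cascaded after it; the least-squares step above only illustrates the coefficient identification.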
Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in traditional integration methods. Here, methods (1) and (2) rely on the Long Short-Term Memory (LSTM) architecture, while method (3) relies on convolutional neural networks. Pure ML methods for solving (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with attention mechanisms to address discontinuous solutions. Both LSTM and attention architectures, together with modern optimizers and classic optimizers generalized to include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced work such as shallow networks with infinite width. The review addresses not only experts: readers are assumed familiar with computational mechanics but not with DL, whose concepts and applications are built up from the basics, aiming to bring first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions in the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
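Of the topics listed, Gaussian-process regression, which the review connects to shallow networks of infinite width, admits a compact sketch. The RBF length scale, noise level, and test function below are illustrative choices, not values from the review.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential (RBF) kernel between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

x_train = np.linspace(-1.0, 1.0, 20)
y_train = np.sin(3.0 * x_train)  # toy target, an assumption
noise = 1e-6                     # small jitter for numerical stability

# Posterior mean of GP regression: mean(x*) = k(x*, X) @ (K + s^2 I)^{-1} y.
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

x_test = np.array([0.0, 0.5])
mean = rbf(x_test, x_train) @ alpha  # posterior mean prediction
```

The kernel here plays the role of the covariance of an infinitely wide single-hidden-layer network under suitable weight priors, which is the correspondence the review builds on.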
Funding: This work was supported by grant 69102007 from the NSF of China and the Ph.D. Research Foundation of the State Educational Commission of China.