Abstract: A novel approach to the survivor memory unit of the Decision Feedback Sequence Estimator (DFSE) for the 1000BASE-T transceiver, based on a hybrid architecture combining the classical register-exchange and trace-back methods, is proposed. The proposed architecture is investigated with special emphasis on low power and small decoder latency: a dedicated register-exchange module is designed to provide tentative survivor symbols with zero latency, and a high-speed trace-back logic is presented to meet the tight latency budget specified for the 1000BASE-T transceiver. Furthermore, clock-gating register banks are constructed for power saving. VLSI implementation reveals that the proposed architecture provides about 40% savings in power consumption compared to the traditional register-exchange architecture.
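As background for the hybrid survivor memory described above, the sketch below illustrates the classical register-exchange update in plain Python: each state copies the survivor register of its selected predecessor and shifts in the newly decided symbol, so the most recent symbols are available without waiting for a trace-back pass. The 4-state trellis, truncation depth, and symbol values are illustrative assumptions, not details of the proposed circuit.

```python
# Minimal sketch of a classical register-exchange survivor memory update.
# Trellis size, depth, and symbol values are assumptions for illustration;
# this is not the hybrid architecture proposed in the paper.
import numpy as np

NUM_STATES = 4   # assumed small trellis for illustration
DEPTH = 8        # assumed survivor (truncation) depth

def register_exchange_step(survivors, predecessor, new_symbol):
    """One trellis step of register exchange.

    survivors[s]    -- last DEPTH symbols on the surviving path into state s
    predecessor[s]  -- index of the selected predecessor state of s
    new_symbol[s]   -- symbol decided on the branch predecessor[s] -> s
    Returns the updated registers and the symbols shifted out at depth DEPTH.
    """
    updated = np.empty_like(survivors)
    shifted_out = np.empty(survivors.shape[0], dtype=survivors.dtype)
    for s in range(survivors.shape[0]):
        shifted_out[s] = survivors[predecessor[s], 0]    # oldest symbol leaves
        updated[s, :-1] = survivors[predecessor[s], 1:]  # copy predecessor's register
        updated[s, -1] = new_symbol[s]                   # append newest decision
    return updated, shifted_out

# The newest entries of each register are available immediately after the
# update, which is why a register-exchange module can feed tentative symbols
# to the decision-feedback path with zero latency.
survivors = np.zeros((NUM_STATES, DEPTH), dtype=int)
predecessor = np.array([0, 0, 1, 2])     # assumed add-compare-select outcome
new_symbol = np.array([1, -1, 1, -1])    # assumed decided PAM symbols
survivors, shifted_out = register_exchange_step(survivors, predecessor, new_symbol)
print(survivors[0], shifted_out[0])
```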
Abstract: Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in the traditional integration methods. Here, methods (1) and (2) relied on the Long Short-Term Memory (LSTM) architecture, with method (3) relying on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with an attention mechanism to address discontinuous solutions. Both LSTM and attention architectures, together with modern and generalized classic optimizers that include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced works such as shallow networks with infinite width. The review does not address only experts: readers are assumed to be familiar with computational mechanics but not with DL, whose concepts and applications are built up from the basics, aiming at bringing first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
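To make the PINN idea mentioned above concrete, here is a minimal sketch in Python using JAX that fits a small network to the toy ODE u'(x) = -u(x) with u(0) = 1 by penalizing the equation residual at collocation points via automatic differentiation. The network size, plain gradient-descent optimizer, and example equation are illustrative assumptions and are not taken from the review.

```python
# Minimal PINN sketch for the toy ODE u'(x) = -u(x), u(0) = 1 (exact: exp(-x)).
# All sizes, learning rates, and the example equation are assumptions.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(1, 32, 32, 1)):
    """Small fully connected network with randomly initialized weights."""
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (m, n)) / jnp.sqrt(m),
                       jnp.zeros(n)))
    return params

def net(params, x):
    # Scalar input x -> scalar output u(x)
    h = jnp.atleast_1d(x)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def loss(params, xs):
    # Residual of u'(x) + u(x) = 0 at collocation points (via autodiff),
    # plus a penalty enforcing the initial condition u(0) = 1.
    du = jax.vmap(jax.grad(lambda x: net(params, x)))(xs)
    u = jax.vmap(lambda x: net(params, x))(xs)
    residual = du + u
    ic = net(params, 0.0) - 1.0
    return jnp.mean(residual**2) + ic**2

@jax.jit
def step(params, xs, lr=1e-3):
    # One plain gradient-descent step on the PINN loss
    grads = jax.grad(loss)(params, xs)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = init_params(key)
xs = jnp.linspace(0.0, 2.0, 64)          # collocation points in [0, 2]
for _ in range(2000):
    params = step(params, xs)
print(net(params, 1.0), jnp.exp(-1.0))   # PINN estimate vs exact exp(-1)
```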