Journal Articles
115 articles found
1. Data-Driven Learning Control Algorithms for Unachievable Tracking Problems
Authors: Zeyi Zhang, Hao Jiang, Dong Shen, Samer S. Saab. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 1, pp. 205-218 (14 pages).
For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation of the reference trajectory becomes the objective. This study investigates solutions using the P-type learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation. Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, it is discovered that the desired performance may not be attainable when faced with incomplete information. To address this issue, an extended iterative learning control scheme is introduced. In this scheme, the tracking errors are modified through output data sampling, which incorporates low-memory footprints and offers flexibility in learning gain design. The input sequence is shown to converge towards the desired input, resulting in an output that is closest to the given reference in the least-squares sense. Numerical simulations are provided to validate the theoretical findings.
Keywords: data-driven algorithms, incomplete information, iterative learning control, gradient information, unachievable problems
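A minimal numerical sketch of the P-type update at the core of the scheme described in the abstract above, run on a hypothetical first-order discrete-time plant (the plant, constant gain, and reference below are illustrative assumptions, not the uncertain systems or data-driven gain design studied in the paper):

```python
import numpy as np

# Hypothetical SISO plant x[t+1] = a*x[t] + b*u[t], y[t] = c*x[t]
a, b, c = 0.9, 0.5, 1.0
T = 50                                        # trial length
r = np.sin(np.linspace(0, 2 * np.pi, T))      # reference trajectory

def run_trial(u):
    """Simulate one iteration (trial) and return the output sequence."""
    x, y = 0.0, np.zeros(T)
    for t in range(T):
        y[t] = c * x
        x = a * x + b * u[t]
    return y

u = np.zeros(T)                               # initial input guess
L = 0.8                                       # P-type learning gain (assumed constant here)
for k in range(100):                          # iteration (trial) index
    e = r - run_trial(u)                      # tracking error of trial k
    e_shift = np.append(e[1:], 0.0)           # e_k(t+1), zero-padded at the horizon end
    u = u + L * e_shift                       # P-type update: u_{k+1}(t) = u_k(t) + L * e_k(t+1)

print("final RMS tracking error:", np.sqrt(np.mean((r - run_trial(u)) ** 2)))
```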
2. Noise-Tolerant ZNN-Based Data-Driven Iterative Learning Control for Discrete Nonaffine Nonlinear MIMO Repetitive Systems
Authors: Yunfeng Hu, Chong Zhang, Bo Wang, Jing Zhao, Xun Gong, Jinwu Gao, Hong Chen. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 2, pp. 344-361 (18 pages).
Aiming at the tracking problem of a class of discrete nonaffine nonlinear multi-input multi-output (MIMO) repetitive systems subjected to separable and nonseparable disturbances, a novel data-driven iterative learning control (ILC) scheme based on zeroing neural networks (ZNNs) is proposed. First, the equivalent dynamic linearization data model is obtained by means of dynamic linearization technology, which exists theoretically in the iteration domain. Then, an iterative extended state observer (IESO) is developed to estimate the disturbance and the coupling between systems, and the decoupled dynamic linearization model is obtained for the purpose of controller synthesis. To solve the zero-seeking tracking problem with inherent tolerance of noise, an ILC based on a noise-tolerant modified ZNN is proposed. The strict assumptions imposed on the initialization conditions of each iteration in existing ILC methods can be entirely removed with our method. In addition, theoretical analysis indicates that the modified ZNN can converge to the exact solution of the zero-seeking tracking problem. Finally, a generalized example and an application-oriented example are presented to verify the effectiveness and superiority of the proposed approach.
Keywords: adaptive control, control system synthesis, data-driven, iterative learning control, neurocontroller, nonlinear discrete-time systems
3. Recent Progress in Reinforcement Learning and Adaptive Dynamic Programming for Advanced Control Applications (Cited: 2)
Authors: Ding Wang, Ning Gao, Derong Liu, Jinna Li, Frank L. Lewis. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 1, pp. 18-36 (19 pages).
Reinforcement learning (RL) has roots in dynamic programming and is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are presented, where the main results for discrete-time systems and continuous-time systems are surveyed, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, where event-based design, robust stabilization, and game design are reviewed. Moreover, the extensions of ADP for addressing control problems under complex environments attract enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly advance the ADP formulation. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey on ADP and RL for advanced control applications demonstrates their remarkable potential in the artificial intelligence era. In addition, it also plays a vital role in promoting environmental protection and industrial intelligence.
Keywords: adaptive dynamic programming (ADP), advanced control, complex environment, data-driven control, event-triggered design, intelligent control, neural networks, nonlinear systems, optimal control, reinforcement learning (RL)
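As one concrete instance of the regulation problem this survey emphasizes, the sketch below runs the classical value-iteration recursion for a discrete-time LQR problem on a hypothetical two-state system (model-based here purely for illustration; the surveyed ADP/RL methods approximate the same value function from data without an explicit model):

```python
import numpy as np

# Hypothetical linear system x[t+1] = A x[t] + B u[t] with stage cost x'Qx + u'Ru
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

P = np.zeros((2, 2))                                      # value-function kernel V(x) = x'Px
for _ in range(500):                                      # value iteration (HDP-style recursion)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # greedy feedback gain for current P
    P = Q + A.T @ P @ (A - B @ K)                         # Bellman backup under the improved policy

print("converged feedback gain K =", K)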
4. Data-Driven Deep Learning for OTFS Detection (Cited: 4)
Authors: Yi Gong, Qingyu Li, Fanke Meng, Xinru Li, Zhan Xu. China Communications (SCIE, CSCD), 2023, Issue 1, pp. 88-101 (14 pages).
Recently, orthogonal time frequency space (OTFS) modulation was proposed to alleviate severe Doppler effects in high-mobility scenarios. Most current OTFS detection schemes rely on perfect channel state information (CSI). However, in real-life systems, channel parameters change constantly and are often difficult to capture and describe. In this paper, we summarize the existing research on OTFS detection based on data-driven deep learning (DL) and propose three new network structures: a residual network (ResNet), a dense network (DenseNet), and a residual dense network (RDN) for OTFS detection. Detection schemes based on the data-driven paradigm do not require a model that is mathematically tractable. Meanwhile, compared with the existing fully connected deep neural network (FC-DNN) and standard convolutional neural network (CNN), these three new networks can alleviate the problems of gradient explosion and gradient vanishing. Simulations show that the RDN achieves the best performance among the three proposed schemes owing to its combination of shallow and deep features. The RDN can resolve the performance loss caused by traditional networks not fully utilizing all of the hierarchical information.
Keywords: data-driven, deep learning, OTFS, detection
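The abstract above contrasts residual and dense architectures for symbol detection. The sketch below shows a generic residual block of the kind such a ResNet-style detector would stack; the layer sizes and the use of 2-D convolutions over the delay-Doppler grid are assumptions for illustration, not the authors' exact network:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two conv layers with a skip connection, easing gradient flow in deep detectors."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))        # skip connection mitigates vanishing gradients

# Toy detector: received delay-Doppler grid (real/imag channels) -> soft symbol estimates
detector = nn.Sequential(
    nn.Conv2d(2, 32, kernel_size=3, padding=1),
    ResidualBlock(32),
    ResidualBlock(32),
    nn.Conv2d(32, 2, kernel_size=3, padding=1),
)
y = torch.randn(1, 2, 16, 16)                    # one hypothetical 16x16 delay-Doppler frame
print(detector(y).shape)                         # torch.Size([1, 2, 16, 16])
```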
5. Machine learning for membrane design and discovery
Authors: Haoyu Yin, Muzi Xu, Zhiyao Luo, Xiaotian Bi, Jiali Li, Sui Zhang, Xiaonan Wang. Green Energy & Environment (SCIE, EI, CAS, CSCD), 2024, Issue 1, pp. 54-70 (17 pages).
Membrane technologies are becoming increasingly versatile and helpful for sustainable development. Machine learning (ML), an essential branch of artificial intelligence (AI), has substantially impacted the research and development norms of new materials for energy and the environment. This review provides an overview and perspectives on ML methodologies and their applications in membrane design and discovery. A brief overview of membrane technologies is first provided, with the current bottlenecks and potential solutions. Through an applications-based perspective on AI-aided membrane design and discovery, we further show how ML strategies are applied to the membrane discovery cycle (including membrane material design, membrane application, membrane process design, and knowledge extraction) in various membrane systems, ranging from gas and liquid separation membranes to fuel cell membranes. Furthermore, best practices for integrating ML methods with specific application targets in membrane design and discovery are presented, with an ideal paradigm proposed. The challenges to be addressed and the prospects of AI applications in membrane discovery are also highlighted at the end.
Keywords: machine learning, membranes, AI for membranes, data-driven design
6. Production Capacity Prediction Method of Shale Oil Based on Machine Learning Combination Model
Authors: Qin Qian, Mingjing Lu, Anhai Zhong, Feng Yang, Wenjun He, Min Li. Energy Engineering (EI), 2024, Issue 8, pp. 2167-2190 (24 pages).
The production capacity of shale oil reservoirs after hydraulic fracturing is influenced by a complex interplay of geological characteristics, engineering quality, and well conditions. These relationships, nonlinear in nature, pose challenges for accurate description through physical models. While field data provide insights into real-world effects, their limited volume and quality restrict their utility. Complementing this, numerical simulation models offer effective support. To harness the strengths of both data-driven and model-driven approaches, this study establishes a shale oil production capacity prediction model based on a machine learning combination model. Leveraging fracturing development data from 236 wells in the field, a data-driven method employing the random forest algorithm is implemented to identify the main controlling factors for different types of shale oil reservoirs. Through a combination model integrating the support vector machine (SVM) algorithm and a back-propagation neural network (BPNN), a model-driven shale oil production capacity prediction model is developed, capable of swiftly responding to shale oil development performance under varying geological, fluid, and well conditions. Numerical experiments show that the proposed method improves R² by 22.5% and 5.8% compared with single machine learning models such as SVM and BPNN, showcasing its superior precision in predicting shale oil production capacity across diverse datasets.
Keywords: shale oil, production capacity, data-driven model, model-driven method, machine learning
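A compact sketch of the two stages the abstract describes, random-forest screening of controlling factors followed by an SVM/BPNN combination for capacity prediction, using synthetic stand-in data (the features, equal ensemble weights, and sample sizes are illustrative assumptions, not the 236-well field dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(236, 6))                                   # stand-in geological/engineering features
y = 2 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=236)     # stand-in production capacity

# Stage 1: data-driven screening of main controlling factors
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
print("feature importances:", np.round(rf.feature_importances_, 3))

# Stage 2: SVM + BP-style neural network combination (simple averaged ensemble here)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
svm = SVR(C=10.0).fit(X_tr, y_tr)
bpnn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0).fit(X_tr, y_tr)
y_hat = 0.5 * svm.predict(X_te) + 0.5 * bpnn.predict(X_te)      # equal weights assumed
print("combined-model R2:", round(r2_score(y_te, y_hat), 3))
```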
7. Hybrid data-driven framework for shale gas production performance analysis via game theory, machine learning, and optimization approaches (Cited: 1)
Authors: Jin Meng, Yu-Jie Zhou, Tian-Rui Ye, Yi-Tian Xiao, Ya-Qiu Lu, Ai-Wei Zheng, Bang Liang. Petroleum Science (SCIE, EI, CAS, CSCD), 2023, Issue 1, pp. 277-294 (18 pages).
A comprehensive and precise analysis of shale gas production performance is crucial for evaluating resource potential, designing a field development plan, and making investment decisions. However, quantitative analysis can be challenging because production performance is dominated by complex interactions among a series of geological and engineering factors. In fact, each factor can be viewed as a player who makes cooperative contributions to the production payoff within the constraints of physical laws and models. Inspired by this idea, we propose a hybrid data-driven analysis framework in which the contributions of dominant factors are quantitatively evaluated, production is precisely forecasted, and development optimization suggestions are comprehensively generated. More specifically, game theory and machine learning models are coupled to determine the dominant geological and engineering factors. The Shapley value, with its definite physical meaning, is employed to quantitatively measure the effects of individual factors. A multi-model-fused stacked model is trained for production forecasting, which provides the basis for derivative-free optimization algorithms to optimize the development plan. The complete workflow is validated with actual production data collected from the Fuling shale gas field, Sichuan Basin, China. The validation results show that the proposed procedure can draw rigorous conclusions with quantified evidence and thereby provide specific and reliable suggestions for development plan optimization. Compared with traditional and experience-based approaches, the hybrid data-driven procedure is advantageous in terms of both efficiency and accuracy.
Keywords: shale gas, production performance, data-driven, dominant factors, game theory, machine learning, derivative-free optimization
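A minimal sketch of the workflow pattern in the abstract above: a stacked surrogate for production forecasting, factor attribution, and a derivative-free search over controllable parameters, all on synthetic data. The two-feature setup, permutation-based attribution standing in for exact Shapley values, and the optimizer choice are assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.inspection import permutation_importance
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 2))                                  # [geological factor, engineering factor]
y = 3 * X[:, 0] + np.sin(4 * X[:, 1]) + 0.05 * rng.normal(size=300)  # stand-in production payoff

# Multi-model-fused surrogate (stacking), then attribution of the dominant factors
stack = StackingRegressor(
    estimators=[("gbm", GradientBoostingRegressor()), ("rf", RandomForestRegressor())],
    final_estimator=Ridge(),
).fit(X, y)
imp = permutation_importance(stack, X, y, n_repeats=10, random_state=1)
print("factor attributions:", np.round(imp.importances_mean, 3))

# Derivative-free optimization of the controllable (engineering) factor, geology held fixed
geo = 0.6
res = differential_evolution(lambda eng: -stack.predict([[geo, eng[0]]])[0], bounds=[(0, 1)])
print("suggested engineering setting:", round(res.x[0], 3))
```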
8. On the Effectiveness of Data-Driven Learning (DDL) of English Articles
Authors: 赵娟. 海外英语 (Overseas English), 2013, Issue 18, pp. 6-7 (2 pages).
The effectiveness of data-driven learning (DDL) has been tested on Chinese learners using sample corpora of English articles. The results show that independent manipulation of the corpora on the part of the learner cannot by itself ensure the success of DDL.
Keywords: DDL, grammar learning, learner-centeredness, corpus, E
9. A Case for Reevaluating Teacher's Role in Data-Driven Learning (DDL) of English Articles
Authors: 赵娟. 海外英语 (Overseas English), 2013, Issue 19, pp. 31-32 (2 pages).
A case study was conducted to explore whether the teacher's role in data-driven learning (DDL) can be minimized. The outcome shows that the teacher's role in offering explicit instruction may be indispensable and even central to the acquisition of English articles.
Keywords: DDL, grammar learning, teacher's role, learner-center
10. Advances in machine learning- and artificial intelligence-assisted material design of steels (Cited: 5)
Authors: Guangfei Pan, Feiyang Wang, Chunlei Shang, Honghui Wu, Guilin Wu, Junheng Gao, Shuize Wang, Zhijun Gao, Xiaoye Zhou, Xinping Mao. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CAS, CSCD), 2023, Issue 6, pp. 1003-1024 (22 pages).
With the rapid development of artificial intelligence technology and increasing material data, machine learning- and artificial intelligence-assisted design of high-performance steel materials is becoming a mainstream paradigm in materials science. Machine learning methods, grounded in an interdisciplinary field spanning computer science, statistics, and materials science, excel at discovering correlations among numerous data points. Compared with traditional physical modeling methods in materials science, the main advantage of machine learning is that it bypasses the complex physical mechanisms of the material itself and provides a new perspective for the research and development of novel materials. This review starts with data preprocessing and an introduction to different machine learning models, including algorithm selection and model evaluation. Then, successful cases of applying machine learning methods in the field of steel research are reviewed, organized around optimizing composition, structure, processing, and performance. The application of machine learning methods to the performance-oriented inverse design of material composition and to the detection of steel defects is also reviewed. Finally, the applicability and limitations of machine learning in the materials field are summarized, and future directions and prospects are discussed.
Keywords: machine learning, data-driven design, new research paradigm, high-performance steel
11. Reinforcement learning-based scheduling of multi-battery energy storage system (Cited: 1)
Authors: CHENG Guangran, DONG Lu, YUAN Xin, SUN Changyin. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2023, Issue 1, pp. 117-128 (12 pages).
In this paper, a reinforcement learning-based multi-battery energy storage system (MBESS) scheduling policy is proposed to minimize consumers' electricity cost. The MBESS scheduling problem is modeled as a Markov decision process (MDP) with unknown transition probability. However, the optimal value function is time-dependent and difficult to obtain because of the periodicity of the electricity price and residential load. Therefore, a series of time-independent action-value functions is proposed to describe every period of a day. To approximate each action-value function, a corresponding critic network is established and cascaded with the other critic networks according to the time sequence. Then, the continuous management strategy is obtained from the related action network. Moreover, a two-stage learning protocol comprising offline and online learning stages is provided for detailed implementation in real-time battery management. Numerical experimental examples are given to demonstrate the effectiveness of the developed algorithm.
Keywords: multi-battery energy storage system (MBESS), reinforcement learning, periodic value iteration, data-driven
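A toy tabular sketch of the scheduling idea above: learning hour-indexed action values for charge/idle/discharge decisions against a periodic price without knowing the dynamics in advance. The single battery, tabular Q-learning, constant load, and synthetic price profile are simplifying assumptions; the paper instead uses cascaded critic networks and a continuous policy:

```python
import numpy as np

rng = np.random.default_rng(0)
H, LEVELS, ACTIONS = 24, 5, 3                               # hours/day, state-of-charge levels, actions
price = 0.5 + 0.4 * np.sin(np.arange(H) * 2 * np.pi / H)   # periodic electricity price (synthetic)
load = 1.0                                                  # constant residential load per hour

Q = np.zeros((H, LEVELS, ACTIONS))                          # one time-indexed value table per hour
alpha, gamma, eps = 0.1, 0.95, 0.1
soc = 2
for step in range(200_000):
    h = step % H
    a = rng.integers(ACTIONS) if rng.random() < eps else int(np.argmax(Q[h, soc]))
    next_soc = int(np.clip(soc + (a - 1), 0, LEVELS - 1))   # a: 0 discharge, 1 idle, 2 charge
    grid_energy = load + (next_soc - soc)                   # charging buys extra energy from the grid
    reward = -price[h] * grid_energy                        # negative electricity cost
    h_next = (h + 1) % H
    Q[h, soc, a] += alpha * (reward + gamma * Q[h_next, next_soc].max() - Q[h, soc, a])
    soc = next_soc

print("greedy action per hour (0=discharge, 1=idle, 2=charge):", Q[:, 2, :].argmax(axis=1))
```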
12. Machine Learning for 5G and Beyond: From Model-Based to Data-Driven Mobile Wireless Networks (Cited: 11)
Authors: Tianyu Wang, Shaowei Wang, Zhi-Hua Zhou. China Communications (SCIE, CSCD), 2019, Issue 1, pp. 165-175 (11 pages).
During the past few decades, mobile wireless communications have experienced four generations of technological revolution, namely from 1G to 4G, and the deployment of the latest 5G networks is expected to take place in 2019. One fundamental question is how we can push forward the development of mobile wireless communications when they have become an extremely complex and sophisticated system. We believe that the answer lies in the huge volumes of data produced by the network itself, and machine learning may become the key to exploiting such information. In this paper, we elaborate on why the conventional model-based paradigm, which has been widely proven useful in pre-5G networks, can be less efficient or even less practical in future 5G and beyond mobile networks. Then, we explain how the data-driven paradigm, using state-of-the-art machine learning techniques, can become a promising solution. Finally, we provide a typical use case of the data-driven paradigm, i.e., proactive load balancing, in which online learning is utilized to adjust cell configurations in advance to avoid burst congestion caused by rapid traffic changes.
Keywords: mobile wireless networks, data-driven paradigm, machine learning
13. Machine Learning and Data-Driven Techniques for the Control of Smart Power Generation Systems: An Uncertainty Handling Perspective (Cited: 7)
Authors: Li Sun, Fengqi You. Engineering (SCIE, EI), 2021, Issue 9, pp. 1239-1247 (9 pages).
Due to growing concerns regarding climate change and environmental protection, smart power generation has become essential for the economical and safe operation of both conventional thermal power plants and sustainable energy sources. Traditional first-principles model-based methods are becoming insufficient when faced with the ever-growing system scale and its various uncertainties. The burgeoning era of machine learning (ML) and data-driven control (DDC) techniques promises an improved alternative to these outdated methods. This paper reviews typical applications of ML and DDC at the levels of monitoring, control, optimization, and fault detection of power generation systems, with a particular focus on uncovering how these methods can function in evaluating, counteracting, or withstanding the effects of the associated uncertainties. A holistic view is provided of the control techniques for smart power generation, from the regulation level to the planning level. The benefits of ML and DDC techniques are accordingly interpreted in terms of visibility, maneuverability, flexibility, profitability, and safety (abbreviated as the "5-TYs"). Finally, an outlook on future research and applications is presented.
Keywords: smart power generation, machine learning, data-driven control, systems engineering
14. Data-Driven Human-Robot Interaction Without Velocity Measurement Using Off-Policy Reinforcement Learning (Cited: 3)
Authors: Yongliang Yang, Zihao Ding, Rui Wang, Hamidreza Modares, Donald C. Wunsch. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, Issue 1, pp. 47-63 (17 pages).
In this paper, we present a novel data-driven design method for the human-robot interaction (HRI) system, where a given task is achieved through cooperation between the human and the robot. The presented HRI controller design is a two-level approach consisting of a task-oriented performance optimization design and a plant-oriented impedance controller design. The task-oriented design minimizes the human effort and guarantees perfect task tracking in the outer loop, while the plant-oriented design achieves the desired impedance from the human to the robot manipulator end-effector in the inner loop. Data-driven reinforcement learning techniques are used for performance optimization in the outer loop to assign the optimal impedance parameters. In the inner loop, a velocity-free filter is designed to avoid the requirement of end-effector velocity measurement. On this basis, an adaptive controller is designed to achieve the desired impedance of the robot manipulator in the task space. Simulations and experiments on a robot manipulator are conducted to verify the efficacy of the presented HRI design framework.
Keywords: adaptive impedance control, data-driven method, human-robot interaction (HRI), reinforcement learning, velocity-free
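To make the inner-loop objective above concrete, the sketch below simulates the target impedance relation from the human force to the end-effector motion, M_d x'' + D_d x' + K_d x = f_h, along one task-space axis with illustrative parameter values. This is the desired closed-loop behavior the adaptive controller is meant to impose, not the controller or the velocity-free filter itself:

```python
import numpy as np

# Desired impedance parameters (illustrative values, single task-space axis)
M_d, D_d, K_d = 1.0, 8.0, 20.0
dt, T = 0.001, 4.0
steps = int(T / dt)

x, v = 0.0, 0.0                              # end-effector displacement and velocity
traj = np.zeros(steps)
for k in range(steps):
    t = k * dt
    f_h = 5.0 if t < 2.0 else 0.0            # human pushes for 2 s, then releases
    acc = (f_h - D_d * v - K_d * x) / M_d    # M_d*x'' + D_d*x' + K_d*x = f_h
    v += acc * dt                            # forward-Euler integration
    x += v * dt
    traj[k] = x

print("displacement while pushed:", round(traj[int(1.9 / dt)], 3))   # ~ f_h / K_d = 0.25
print("displacement after release:", round(traj[-1], 4))             # decays back toward zero
```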
15. Deep learning-based battery state of charge estimation: Enhancing estimation performance with unlabelled training samples
Authors: Liang Ma, Tieling Zhang. Journal of Energy Chemistry (SCIE, EI, CAS, CSCD), 2023, Issue 5, pp. 48-57, I0002 (11 pages).
The estimation of state of charge (SOC) using deep neural networks (DNNs) generally requires a considerable number of labelled samples for training, i.e., current and voltage segments with known corresponding SOCs. However, the collection of labelled samples is costly and time-consuming. In contrast, unlabelled training samples, which consist of current and voltage data with unknown SOCs, are easy to obtain. In view of this, this paper proposes an improved DNN for SOC estimation that effectively uses both a pool of unlabelled samples and a limited number of labelled samples. Besides the traditional supervised network, the proposed method uses an input reconstruction network to reformulate the time-dependency features of the voltage and current. In this way, the developed network can extract useful information from the unlabelled samples. The proposed method is validated under different drive cycles and temperature conditions. The results reveal that the SOC estimation accuracy of the DNN trained with both labelled and unlabelled samples outperforms that obtained using only a limited number of labelled samples. In addition, when a dataset with a reduced number of labelled samples is used to test the developed network, the proposed method performs well and is robust in producing model outputs with the required accuracy when the unlabelled samples are involved in model training. Furthermore, the proposed method is evaluated with different recurrent neural networks (RNNs) applied to the input reconstruction module. The results indicate that the proposed method is feasible for various RNN algorithms and can be flexibly applied to other conditions as required.
Keywords: deep learning, state of charge estimation, data-driven methods, battery management system, recurrent neural networks
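A compact PyTorch sketch of the semi-supervised idea described above: a shared recurrent encoder feeds both a supervised SOC head (trained on the few labelled sequences) and an input-reconstruction head (trained on all sequences), so the unlabelled data still shapes the learned features. The sequence length, layer sizes, loss weighting, and synthetic data are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class SemiSupervisedSOCNet(nn.Module):
    """Shared GRU encoder with a SOC regression head and an input-reconstruction head."""
    def __init__(self, n_feat=2, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(n_feat, hidden, batch_first=True)
        self.soc_head = nn.Linear(hidden, 1)          # supervised branch
        self.recon_head = nn.Linear(hidden, n_feat)   # unsupervised branch

    def forward(self, x):
        h, _ = self.encoder(x)                        # (batch, time, hidden)
        return self.soc_head(h[:, -1]), self.recon_head(h)

net, mse = SemiSupervisedSOCNet(), nn.MSELoss()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_lab = torch.randn(16, 50, 2)                        # few labelled current/voltage sequences (synthetic)
soc_lab = torch.rand(16, 1)
x_unlab = torch.randn(256, 50, 2)                     # plentiful unlabelled sequences (synthetic)

for _ in range(100):
    opt.zero_grad()
    soc_pred, recon_lab = net(x_lab)
    _, recon_unlab = net(x_unlab)
    loss = mse(soc_pred, soc_lab) + 0.5 * (mse(recon_lab, x_lab) + mse(recon_unlab, x_unlab))
    loss.backward()
    opt.step()
print("joint training loss:", round(loss.item(), 4))
```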
16. Boosting battery state of health estimation based on self-supervised learning
Authors: Yunhong Che, Yusheng Zheng, Xin Sui, Remus Teodorescu. Journal of Energy Chemistry (SCIE, EI, CAS, CSCD), 2023, Issue 9, pp. 335-346 (12 pages).
State of health (SoH) estimation plays a key role in smart battery health prognostics and management. However, poor generalization, lack of labeled data, and unused measurements during aging are still major challenges to accurate SoH estimation. Toward this end, this paper proposes a self-supervised learning framework to boost the performance of battery SoH estimation. Different from traditional data-driven methods, which rely on a considerable training dataset obtained from numerous battery cells, the proposed method achieves accurate and robust estimation using limited labeled data. A filter-based data preprocessing technique, which enables the extraction of partial capacity-voltage curves under dynamic charging profiles, is applied first. Unsupervised learning is then used to learn the aging characteristics from the unlabeled data through an autoencoder. The learned network parameters are transferred to the downstream SoH estimation task and fine-tuned with very few sparsely labeled data points, which boosts the performance of the estimation framework. The proposed method has been validated under different battery chemistries, formats, operating conditions, and ambient temperatures. Estimation accuracy can be guaranteed using only three labeled data points from the initial 20% of life cycles, with overall errors of less than 1.14%, error distributions across all testing scenarios maintained below 4%, and robustness that increases with aging. Comparisons with purely supervised machine learning methods demonstrate the superiority of the proposed method. This simple and data-efficient estimation framework is promising for real-world applications under a variety of scenarios.
Keywords: lithium-ion battery, state of health, battery aging, self-supervised learning, prognostics and health management, data-driven estimation
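A minimal PyTorch sketch of the pretrain-then-fine-tune pattern described above: an autoencoder first learns a representation of unlabelled partial charging curves, and its encoder is then reused for SoH regression with only a few labelled curves. The curve length, network sizes, and synthetic data are assumptions; the paper's filtering preprocessing and specific architecture are not reproduced here:

```python
import torch
import torch.nn as nn

L = 64                                              # samples per partial capacity-voltage curve
enc = nn.Sequential(nn.Linear(L, 32), nn.ReLU(), nn.Linear(32, 8))
dec = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, L))
head = nn.Linear(8, 1)                              # downstream SoH regression head
mse = nn.MSELoss()

x_unlab = torch.randn(500, L)                       # many unlabelled curves (synthetic stand-in)
x_lab = torch.randn(3, L)                           # only three labelled curves, as in the abstract
soh_lab = torch.tensor([[1.00], [0.98], [0.96]])

# Stage 1: unsupervised pretraining of the encoder-decoder on unlabelled data
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = mse(dec(enc(x_unlab)), x_unlab)          # reconstruction objective
    loss.backward()
    opt.step()

# Stage 2: transfer the pretrained encoder and fine-tune with the sparse labels
opt = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = mse(head(enc(x_lab)), soh_lab)           # supervised fine-tuning
    loss.backward()
    opt.step()

print("predicted SoH for a new curve:", head(enc(torch.randn(1, L))).item())
```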
17. A hybrid physics-informed data-driven neural network for CO₂ storage in depleted shale reservoirs
Authors: Yan-Wei Wang, Zhen-Xue Dai, Gui-Sheng Wang, Li Chen, Yu-Zhou Xia, Yu-Hao Zhou. Petroleum Science (SCIE, EI, CAS, CSCD), 2024, Issue 1, pp. 286-301 (16 pages).
To reduce CO₂ emissions in response to global climate change, shale reservoirs could be ideal candidates for long-term carbon geo-sequestration involving multi-scale transport processes. However, most current CO₂ sequestration models do not adequately consider multiple transport mechanisms. Moreover, the evaluation of CO₂ storage processes usually involves laborious and time-consuming numerical simulations unsuitable for practical prediction and decision-making. In this paper, an integrated model involving gas diffusion, adsorption, dissolution, slip flow, and Darcy flow is proposed to accurately characterize CO₂ storage in depleted shale reservoirs, supporting the establishment of a training database. On this basis, a hybrid physics-informed data-driven neural network (HPDNN) is developed as a deep learning surrogate for prediction and inversion. By incorporating multiple sources of scientific knowledge, the HPDNN can be configured with limited simulation resources, significantly accelerating the forward and inversion processes. Furthermore, the HPDNN can more intelligently predict injection performance, precisely perform reservoir parameter inversion, and reasonably evaluate the CO₂ storage capacity under complicated scenarios. The validation and test results demonstrate that the HPDNN ensures high accuracy and strong robustness across an extensive applicability range when dealing with field data containing multiple noise sources. This study has tremendous potential to replace traditional modeling tools for predicting and making decisions about CO₂ storage projects in depleted shale reservoirs.
Keywords: deep learning, physics-informed data-driven neural network, depleted shale reservoirs, CO₂ storage, transport mechanisms
18. Alternating minimization for data-driven computational elasticity from experimental data: kernel method for learning constitutive manifold
Authors: Yoshihiro Kanno. Theoretical & Applied Mechanics Letters (CSCD), 2021, Issue 5, pp. 260-265 (6 pages).
Data-driven computing in elasticity attempts to use experimental material data directly, without constructing an empirical model of the constitutive relation, to predict the equilibrium state of a structure subjected to a specified external load. Provided that a data set comprising stress-strain pairs of the material is available, a data-driven method using the kernel method and regularized least-squares was developed to extract a manifold on which the points in the data set approximately lie (Kanno 2021, Jpn. J. Ind. Appl. Math.). From the perspective of physical experiments, the stress field cannot be measured directly, while displacement and force fields are measurable. In this study, we extend the previous kernel method to the situation in which pairs of displacement and force, instead of pairs of stress and strain, are available as the input data set. A new regularized least-squares problem is formulated in this setting, and an alternating minimization algorithm is proposed to solve it.
Keywords: alternating minimization, regularized least-squares, kernel method, manifold learning, data-driven computing
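A small sketch of the kernel-plus-regularized-least-squares ingredient named in the abstract: kernel ridge regression fitted to noisy stress-strain pairs to recover a smooth constitutive curve. A one-dimensional synthetic material law is assumed here; the paper's setting uses displacement-force pairs and an alternating minimization loop, which this sketch does not reproduce:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
strain = rng.uniform(0.0, 0.05, size=200).reshape(-1, 1)        # sampled strain values
stress = 200e3 * strain[:, 0] * (1 - 5 * strain[:, 0])           # synthetic nonlinear material law
stress += rng.normal(scale=50.0, size=200)                       # measurement noise

# Regularized least-squares in a reproducing-kernel Hilbert space (Gaussian kernel)
model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=500.0).fit(strain, stress)

query = np.array([[0.02]])
print("predicted stress at strain 0.02:", round(float(model.predict(query)[0]), 1))
```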
19. A Self-Learning Data-Driven Development of Failure Criteria of Unknown Anisotropic Ductile Materials with Deep Learning Neural Network
Authors: Kyungsuk Jang, Gun Jin Yun. Computers, Materials & Continua (SCIE, EI), 2021, Issue 2, pp. 1091-1120 (30 pages).
This paper proposes a new self-learning data-driven methodology that can develop the failure criteria of unknown anisotropic ductile materials from a minimal number of experimental tests. Establishing failure criteria of anisotropic ductile materials normally requires time-consuming tests and manual data evaluation; the proposed method can overcome such practical challenges. The methodology combines four ideas: 1) a deep learning neural network (DLNN)-based material constitutive model, 2) self-learning inverse finite element (SELIFE) simulation, 3) algorithmic identification of failure points from the self-learned stress-strain curves, and 4) derivation of the failure criteria through symbolic regression with genetic programming. The stress update and the algorithmic tangent operator were formulated in terms of the DLNN parameters for nonlinear finite element analysis. The SELIFE simulation algorithm then gradually makes the DLNN model learn highly complex multi-axial stress-strain relationships, guided by the experimental boundary measurements. Following failure point identification, self-learning data-driven failure criteria are eventually developed with the help of a reliable symbolic regression algorithm. The methodology and the resulting failure criteria were verified by comparison with reference failure criteria and by simulations with different material orientations.
Keywords: data-driven modeling, deep learning neural networks, genetic programming, anisotropic failure criterion
20. Development of gradient boosting-assisted machine learning data-driven model for free chlorine residual prediction
Authors: Wiley Helm, Shifa Zhong, Elliot Reid, Thomas Igou, Yongsheng Chen. Frontiers of Environmental Science & Engineering (SCIE, EI, CSCD), 2024, Issue 2, pp. 35-46 (12 pages).
Chlorine-based disinfection is ubiquitous in conventional drinking water treatment (DWT) and serves to mitigate threats of acute microbial disease caused by pathogens that may be present in source water. An important index of disinfection efficiency is the free chlorine residual (FCR), a regulated disinfection parameter in the US that indirectly measures disinfectant power for preventing microbial recontamination during DWT and distribution. This work demonstrates how machine learning (ML) can be implemented to improve FCR forecasting when supplied with water quality data from a real, full-scale chlorine disinfection system in Georgia, USA. More precisely, a gradient-boosting ML method (CatBoost) was developed from a full year of DWT plant-generated chlorine disinfection data, including water quality parameters (e.g., temperature, turbidity, pH) and operational process data (e.g., flow rates), to predict FCR. Four gradient-boosting models were implemented, with the best achieving a coefficient of determination, R², of 0.937. Shapley additive explanation (SHAP) values were used to interpret the model's results, revealing that standard DWT operating parameters, although non-intuitive and theoretically non-causal, vastly improved prediction performance. These results provide a base case for data-driven DWT disinfection supervision and suggest process monitoring methods to provide better information to plant operators for implementing safe chlorine dosing to maintain optimal FCR.
Keywords: machine learning, data-driven modeling, drinking water treatment, disinfection, chlorination
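A brief sketch of the gradient-boosting regression pattern described above, using scikit-learn's GradientBoostingRegressor on synthetic water-quality features as a stand-in. The paper uses CatBoost on plant data with SHAP values for interpretation; the features, data, and impurity-based importance measure here are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.uniform(5, 30, n),       # temperature (deg C)
    rng.uniform(0.1, 5, n),      # turbidity (NTU)
    rng.uniform(6.5, 8.5, n),    # pH
    rng.uniform(1, 4, n),        # chlorine dose (mg/L)
])
fcr = 0.8 * X[:, 3] - 0.02 * X[:, 0] - 0.05 * X[:, 1] + 0.05 * rng.normal(size=n)  # synthetic FCR

X_tr, X_te, y_tr, y_te = train_test_split(X, fcr, test_size=0.2, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=400, learning_rate=0.05).fit(X_tr, y_tr)
print("test R2:", round(r2_score(y_te, gbm.predict(X_te)), 3))
print("impurity-based importances:", np.round(gbm.feature_importances_, 3))
```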