For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation of the reference trajectory becomes the objective. This study investigates solutions using the P-type learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation. Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, it is discovered that the desired performance may not be attainable when faced with incomplete information. To address this issue, an extended iterative learning control scheme is introduced, in which the tracking errors are modified through output data sampling; this incorporates a low memory footprint and offers flexibility in learning gain design. The input sequence is shown to converge to the desired input, resulting in an output that is closest to the given reference in the least-squares sense. Numerical simulations are provided to validate the theoretical findings.
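The P-type update law this abstract builds on can be sketched on a hypothetical lifted linear system y = G u; the matrix G, gain L, and reference below are illustrative choices, not values from the paper:

```python
# Hypothetical P-type iterative learning control on a lifted system y = G u.
# Each trial applies the whole input sequence, measures the trial-wise error
# e = r - y, and corrects the next trial's input with a proportional gain L.
def p_type_ilc(G, r, L, trials):
    n = len(r)
    u = [0.0] * n
    for _ in range(trials):
        y = [sum(G[i][j] * u[j] for j in range(n)) for i in range(n)]
        e = [r[i] - y[i] for i in range(n)]      # tracking error of this trial
        u = [u[j] + L * e[j] for j in range(n)]  # P-type update: u_{k+1} = u_k + L e_k
    return u, e

# Illustrative lower-triangular impulse-response matrix and reference.
G = [[1.0, 0.0],
     [0.5, 1.0]]
u, e = p_type_ilc(G, r=[1.0, 1.0], L=0.5, trials=60)
```

Here the reference is achievable, so the error contracts to zero; for an unachievable reference the same iteration settles at the least-squares approximation described in the abstract.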
Aiming at the tracking problem of a class of discrete nonaffine nonlinear multi-input multi-output (MIMO) repetitive systems subjected to separable and nonseparable disturbances, a novel data-driven iterative learning control (ILC) scheme based on zeroing neural networks (ZNNs) is proposed. First, the equivalent dynamic linearization data model, which exists theoretically in the iteration domain, is obtained by means of dynamic linearization technology. Then, an iterative extended state observer (IESO) is developed to estimate the disturbance and the coupling between systems, and the decoupled dynamic linearization model is obtained for the purpose of controller synthesis. To solve the zero-seeking tracking problem with inherent tolerance of noise, an ILC based on a noise-tolerant modified ZNN is proposed. The strict assumptions imposed on the initialization conditions of each iteration in existing ILC methods can be entirely removed with our method. In addition, theoretical analysis indicates that the modified ZNN converges to the exact solution of the zero-seeking tracking problem. Finally, a generalized example and an application-oriented example are presented to verify the effectiveness and superiority of the proposed scheme.
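As a rough illustration of the zeroing-neural-network idea underlying the scheme (not the paper's noise-tolerant variant), a ZNN imposes exponentially decaying error dynamics de/dt = -gamma * e on a time-varying zero-seeking problem. The scalar problem a(t) x(t) = b(t) below is a made-up example:

```python
import math

# Hypothetical continuous-time ZNN for the time-varying scalar problem
# a(t) * x(t) - b(t) = 0, discretized with a forward Euler step. Forcing the
# error e = a*x - b to obey de/dt = -gamma * e gives the design formula
#   dx/dt = (db/dt - da/dt * x - gamma * e) / a.
def znn_solve(a, da, b, db, gamma=10.0, h=1e-3, t_end=2.0):
    t, x = 0.0, 0.0
    while t < t_end:
        e = a(t) * x - b(t)
        x += h * (db(t) - da(t) * x - gamma * e) / a(t)
        t += h
    return t, x

a  = lambda t: 2.0 + math.sin(t)   # time-varying coefficient (illustrative)
da = lambda t: math.cos(t)
b  = lambda t: math.cos(t)         # time-varying right-hand side
db = lambda t: -math.sin(t)
t, x = znn_solve(a, da, b, db)
# x tracks the exact time-varying zero b(t)/a(t) once the error dynamics settle
```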
Reinforcement learning (RL) has roots in dynamic programming and is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are presented, where the main results for discrete-time systems and continuous-time systems are surveyed, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, where event-based design, robust stabilization, and game design are reviewed. Moreover, the extensions of ADP for addressing control problems under complex environments attract enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly advance the ADP formulation. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey of ADP and RL for advanced control applications demonstrates their remarkable potential in the artificial intelligence era, as well as their vital role in promoting environmental protection and industrial intelligence.
Recently, orthogonal time frequency space (OTFS) modulation was presented to alleviate severe Doppler effects in high-mobility scenarios. Most current OTFS detection schemes rely on perfect channel state information (CSI). However, in real-life systems, channel parameters change constantly and are often difficult to capture and describe. In this paper, we summarize the existing research on OTFS detection based on data-driven deep learning (DL) and propose three new network structures: a residual network (ResNet), a dense network (DenseNet), and a residual dense network (RDN) for OTFS detection. Detection schemes based on data-driven paradigms do not require a model that is mathematically tractable. Meanwhile, compared with the existing fully connected deep neural network (FC-DNN) and standard convolutional neural network (CNN), these three new networks alleviate the problems of gradient explosion and gradient vanishing. Simulations show that the RDN achieves the best performance among the three proposed schemes owing to its combination of shallow and deep features; it resolves the performance loss caused by traditional networks not fully utilizing all the hierarchical information.
Membrane technologies are becoming increasingly versatile and helpful today for sustainable development. Machine learning (ML), an essential branch of artificial intelligence (AI), has substantially impacted the research and development norm of new materials for energy and the environment. This review provides an overview of and perspectives on ML methodologies and their applications in membrane design and discovery. A brief overview of membrane technologies is first provided, with the current bottlenecks and potential solutions. Through an applications-based perspective on AI-aided membrane design and discovery, we further show how ML strategies are applied to the membrane discovery cycle (including membrane material design, membrane application, membrane process design, and knowledge extraction) in various membrane systems, ranging from gas and liquid separation membranes to fuel cell membranes. Furthermore, best practices for integrating ML methods and specific application targets in membrane design and discovery are presented, with an ideal paradigm proposed. The challenges to be addressed and the prospects of AI applications in membrane discovery are also highlighted at the end.
The production capacity of shale oil reservoirs after hydraulic fracturing is influenced by a complex interplay among geological characteristics, engineering quality, and well conditions. These relationships, nonlinear in nature, are difficult to describe accurately with physical models. While field data provide insights into real-world effects, their limited volume and quality restrict their utility; numerical simulation models offer effective complementary support. To harness the strengths of both data-driven and model-driven approaches, this study established a shale oil production capacity prediction model based on a machine learning combination model. Leveraging fracturing development data from 236 wells in the field, a data-driven method employing the random forest algorithm is implemented to identify the main controlling factors for different types of shale oil reservoirs. Through a combination model integrating the support vector machine (SVM) algorithm and a back-propagation neural network (BPNN), a model-driven shale oil production capacity prediction model is developed, capable of swiftly predicting shale oil development performance under varying geological, fluid, and well conditions. Numerical experiments show that the proposed method improves R2 by 22.5% and 5.8% over the single SVM and BPNN models, respectively, demonstrating its superior precision in predicting shale oil production capacity across diverse datasets.
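The factor-ranking step can be illustrated, independently of the paper's random-forest implementation, with a generic permutation-importance measure: shuffle one input feature and record how much a fitted model's error grows. The model, data, and feature roles below are synthetic stand-ins:

```python
import random

# Synthetic illustration of ranking "main controlling factors": permute one
# feature at a time and measure the increase in mean squared error of a fixed
# predictive model (here a known linear response, for simplicity).
def mse(model, X, y):
    return sum((model(row) - yi) ** 2 for row, yi in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, rng):
    shuffled = [row[:] for row in X]
    column = [row[feature] for row in shuffled]
    rng.shuffle(column)                      # destroy this feature's signal
    for row, v in zip(shuffled, column):
        row[feature] = v
    return mse(model, shuffled, y) - mse(model, X, y)

rng = random.Random(0)
# Feature 0 drives the response strongly, feature 1 weakly, feature 2 not at all.
X = [[rng.random(), rng.random(), rng.random()] for _ in range(500)]
y = [5.0 * a + 0.5 * b for a, b, _ in X]
model = lambda row: 5.0 * row[0] + 0.5 * row[1]
scores = [permutation_importance(model, X, y, f, rng) for f in range(3)]
# scores rank the features by how much shuffling them degrades the fit
```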
A comprehensive and precise analysis of shale gas production performance is crucial for evaluating resource potential, designing a field development plan, and making investment decisions. However, quantitative analysis can be challenging because production performance is dominated by complex interactions among a series of geological and engineering factors. In fact, each factor can be viewed as a player making cooperative contributions to the production payoff within the constraints of physical laws and models. Inspired by this idea, we propose a hybrid data-driven analysis framework in which the contributions of dominant factors are quantitatively evaluated, production is precisely forecasted, and development optimization suggestions are comprehensively generated. More specifically, game theory and machine learning models are coupled to determine the dominant geological and engineering factors. The Shapley value, with its definite physical meaning, is employed to quantitatively measure the effects of individual factors. A multi-model-fused stacked model is trained for production forecasting, which provides the basis for derivative-free optimization algorithms to optimize the development plan. The complete workflow is validated with actual production data collected from the Fuling shale gas field, Sichuan Basin, China. The validation results show that the proposed procedure can draw rigorous conclusions with quantified evidence and thereby provide specific and reliable suggestions for development plan optimization. Compared with traditional and experience-based approaches, the hybrid data-driven procedure is superior in both efficiency and accuracy.
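The Shapley value at the core of this framework averages a factor's marginal contribution over all orderings of the factors. A self-contained toy computation, with an invented three-factor payoff function standing in for the production model:

```python
from itertools import permutations

# Exact Shapley values for a small cooperative game: each "player" is a
# geological or engineering factor, and v(S) is the production payoff of
# coalition S. The payoff function here is made up for illustration.
def shapley_values(players, v):
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = []
        for p in order:
            before = v(frozenset(coalition))
            coalition.append(p)
            totals[p] += v(frozenset(coalition)) - before  # marginal contribution
    return {p: t / len(orderings) for p, t in totals.items()}

# Toy payoff: fracturing volume matters most, well spacing a little, depth not at all.
def v(S):
    return (10.0 if "frac_volume" in S else 0.0) + (2.0 if "well_spacing" in S else 0.0)

phi = shapley_values(["frac_volume", "well_spacing", "depth"], v)
# Efficiency property: the values sum to the payoff of the full coalition.
```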
The effectiveness of data-driven learning (DDL) has been tested on Chinese learners using sample corpora of English articles. The results show that independent manipulation of the corpora on the part of the learner cannot ensure the success of DDL.
A case study has been made to explore whether the teacher's role in data-driven learning (DDL) can be minimized. The outcome shows that the teacher's role in offering explicit instruction may be indispensable and even central to the acquisition of English articles.
With the rapid development of artificial intelligence technology and ever-increasing material data, machine learning- and artificial intelligence-assisted design of high-performance steel materials is becoming a mainstream paradigm in materials science. Machine learning methods, rooted in an interdisciplinary field spanning computer science, statistics, and materials science, are good at discovering correlations among numerous data points. Compared with traditional physical modeling methods in materials science, the main advantage of machine learning is that it sidesteps the complex physical mechanisms of the material itself and provides a new perspective for the research and development of novel materials. This review starts with data preprocessing and an introduction to different machine learning models, including algorithm selection and model evaluation. Then, successful cases of applying machine learning methods in the field of steel research are reviewed around the main theme of optimizing composition, structure, processing, and performance. The application of machine learning methods to the performance-oriented inverse design of material composition and the detection of steel defects is also reviewed. Finally, the applicability and limitations of machine learning in the materials field are summarized, and future directions and prospects are discussed.
In this paper, a reinforcement learning-based multi-battery energy storage system (MBESS) scheduling policy is proposed to minimize consumers' electricity cost. The MBESS scheduling problem is modeled as a Markov decision process (MDP) with unknown transition probability. However, the optimal value function is time-dependent and difficult to obtain because of the periodicity of the electricity price and residential load. Therefore, a series of time-independent action-value functions is proposed to describe each period of a day. To approximate each action-value function, a corresponding critic network is established, cascaded with the other critic networks according to the time sequence. Then, the continuous management strategy is obtained from the related action network. Moreover, a two-stage learning protocol comprising offline and online learning stages is provided for detailed implementation in real-time battery management. Numerical experimental examples are given to demonstrate the effectiveness of the developed algorithm.
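The per-period action-value idea can be sketched with tabular Q-learning, one Q-table per period of the day; the prices, battery model, and sizes below are invented, and the paper itself uses cascaded critic networks rather than tables:

```python
import random

# Toy battery arbitrage: 4 periods per "day", periodic price, battery holding
# 0..2 units of energy. One Q-table per period captures the time-dependent
# value, mirroring the idea of a separate action-value function per period.
PRICES = [1.0, 1.0, 5.0, 5.0]        # cheap early, expensive late (made up)
ACTIONS = [-1, 0, 1]                 # discharge / idle / charge
Q = [{(s, a): 0.0 for s in range(3) for a in ACTIONS} for _ in PRICES]

def step(period, soc, a):
    a = max(-soc, min(2 - soc, a))   # clip the action to the battery limits
    reward = -PRICES[period] * a     # pay to charge, earn to discharge
    return soc + a, reward

rng = random.Random(1)
alpha, gamma = 0.2, 0.95
soc = 0
for t in range(20000):
    period = t % len(PRICES)
    if rng.random() < 0.2:           # epsilon-greedy exploration
        a = rng.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[period][(soc, x)])
    nxt, r = step(period, soc, a)
    nxt_period = (period + 1) % len(PRICES)
    best_next = max(Q[nxt_period][(nxt, x)] for x in ACTIONS)
    Q[period][(soc, a)] += alpha * (r + gamma * best_next - Q[period][(soc, a)])
    soc = nxt

greedy = lambda p, s: max(ACTIONS, key=lambda x: Q[p][(s, x)])
# The learned policy charges when power is cheap and discharges when it is dear.
```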
During the past few decades, mobile wireless communications have experienced four generations of technological revolution, namely from 1G to 4G, and the deployment of the latest 5G networks is expected to take place in 2019. One fundamental question is how we can push forward the development of mobile wireless communications when they have become extremely complex and sophisticated systems. We believe that the answer lies in the huge volumes of data produced by the network itself, and machine learning may become the key to exploiting such information. In this paper, we elaborate on why the conventional model-based paradigm, which has been widely proven useful in pre-5G networks, can be less efficient or even impractical in future 5G and beyond mobile networks. Then, we explain how the data-driven paradigm, using state-of-the-art machine learning techniques, can become a promising solution. Finally, we provide a typical use case of the data-driven paradigm, i.e., proactive load balancing, in which online learning is utilized to adjust cell configurations in advance to avoid burst congestion caused by rapid traffic changes.
Due to growing concerns regarding climate change and environmental protection, smart power generation has become essential for the economical and safe operation of both conventional thermal power plants and sustainable energy. Traditional first-principle model-based methods are becoming insufficient when faced with the ever-growing system scale and its various uncertainties. The burgeoning era of machine learning (ML) and data-driven control (DDC) techniques promises an improved alternative to these outdated methods. This paper reviews typical applications of ML and DDC at the levels of monitoring, control, optimization, and fault detection of power generation systems, with a particular focus on uncovering how these methods can function in evaluating, counteracting, or withstanding the effects of the associated uncertainties. A holistic view is provided of the control techniques of smart power generation, from the regulation level to the planning level. The benefits of ML and DDC techniques are accordingly interpreted in terms of visibility, maneuverability, flexibility, profitability, and safety (abbreviated as the "5-TYs"). Finally, an outlook on future research and applications is presented.
In this paper, we present a novel data-driven design method for the human-robot interaction (HRI) system, where a given task is achieved by cooperation between the human and the robot. The presented HRI controller design is a two-level approach consisting of a task-oriented performance optimization design and a plant-oriented impedance controller design. The task-oriented design minimizes the human effort and guarantees perfect task tracking in the outer loop, while the plant-oriented design achieves the desired impedance from the human to the robot manipulator end-effector in the inner loop. Data-driven reinforcement learning techniques are used for performance optimization in the outer loop to assign the optimal impedance parameters. In the inner loop, a velocity-free filter is designed to avoid the requirement of end-effector velocity measurement. On this basis, an adaptive controller is designed to achieve the desired impedance of the robot manipulator in the task space. Simulations and experiments on a robot manipulator are conducted to verify the efficacy of the presented HRI design framework.
The estimation of state of charge (SOC) using deep neural networks (DNNs) generally requires a considerable number of labelled samples for training, i.e., current and voltage segments with known corresponding SOCs. However, the collection of labelled samples is costly and time-consuming. In contrast, unlabelled training samples, which consist of current and voltage data with unknown SOCs, are easy to obtain. In view of this, this paper proposes an improved DNN for SOC estimation that effectively uses both a pool of unlabelled samples and a limited number of labelled samples. Besides the traditional supervised network, the proposed method uses an input reconstruction network to reformulate the time-dependency features of the voltage and current, allowing the developed network to extract useful information from the unlabelled samples. The proposed method is validated under different drive cycles and temperature conditions. The results reveal that the SOC estimation accuracy of the DNN trained with both labelled and unlabelled samples outperforms that of a DNN using only a limited number of labelled samples. In addition, when a dataset with a reduced number of labelled samples is used to test the developed network, the proposed method performs well and robustly produces model outputs with the required accuracy when unlabelled samples are involved in model training. Furthermore, the proposed method is evaluated with different recurrent neural networks (RNNs) applied to the input reconstruction module. The results indicate that the proposed method is feasible for various RNN algorithms and can be flexibly applied to other conditions as required.
State of health (SoH) estimation plays a key role in smart battery health prognostics and management. However, poor generalization, lack of labeled data, and unused measurements during aging are still major challenges to accurate SoH estimation. To this end, this paper proposes a self-supervised learning framework to boost the performance of battery SoH estimation. Different from traditional data-driven methods, which rely on a considerable training dataset obtained from numerous battery cells, the proposed method achieves accurate and robust estimations using limited labeled data. A filter-based data preprocessing technique, which enables the extraction of partial capacity-voltage curves under dynamic charging profiles, is applied first. Unsupervised learning is then used to learn the aging characteristics from the unlabeled data through an auto-encoder-decoder. The learned network parameters are transferred to the downstream SoH estimation task and are fine-tuned with very few sparsely labeled data, which boosts the performance of the estimation framework. The proposed method has been validated under different battery chemistries, formats, operating conditions, and ambient conditions. The estimation accuracy can be guaranteed using only three labeled data points from the initial 20% of life cycles, with overall errors of less than 1.14%, error distributions of all testing scenarios maintained below 4%, and robustness that increases with aging. Comparisons with purely supervised machine learning methods demonstrate the superiority of the proposed method. This simple and data-efficient estimation framework is promising for real-world applications under a variety of scenarios.
To reduce CO2 emissions in response to global climate change, shale reservoirs could be ideal candidates for long-term carbon geo-sequestration involving multi-scale transport processes. However, most current CO2 sequestration models do not adequately consider multiple transport mechanisms. Moreover, the evaluation of CO2 storage processes usually involves laborious and time-consuming numerical simulations unsuitable for practical prediction and decision-making. In this paper, an integrated model involving gas diffusion, adsorption, dissolution, slip flow, and Darcy flow is proposed to accurately characterize CO2 storage in depleted shale reservoirs, supporting the establishment of a training database. On this basis, a hybrid physics-informed data-driven neural network (HPDNN) is developed as a deep learning surrogate for prediction and inversion. By incorporating multiple sources of scientific knowledge, the HPDNN can be configured with limited simulation resources, significantly accelerating the forward and inversion processes. Furthermore, the HPDNN can more intelligently predict injection performance, precisely perform reservoir parameter inversion, and reasonably evaluate the CO2 storage capacity under complicated scenarios. The validation and test results demonstrate that the HPDNN ensures high accuracy and strong robustness across an extensive applicability range when dealing with field data containing multiple noise sources. This study has tremendous potential to replace traditional modeling tools for prediction and decision-making in CO2 storage projects in depleted shale reservoirs.
Data-driven computing in elasticity attempts to directly use experimental material data, without constructing an empirical model of the constitutive relation, to predict the equilibrium state of a structure subjected to a specified external load. Provided that a data set comprising stress-strain pairs of the material is available, a data-driven method using the kernel method and regularized least-squares was developed to extract a manifold on which the points in the data set approximately lie (Kanno 2021, Jpn. J. Ind. Appl. Math.). From the perspective of physical experiments, the stress field cannot be directly measured, while displacement and force fields are measurable. In this study, we extend the previous kernel method to the situation in which pairs of displacement and force, instead of pairs of stress and strain, are available as an input data set. A new regularized least-squares problem is formulated in this setting, and an alternating minimization algorithm is proposed to solve it.
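The alternating-minimization idea (though not the paper's kernelized formulation) can be shown on a tiny regularized least-squares problem with two blocks of variables, each subproblem solved in closed form while the other block is held fixed; the objective below is invented for illustration:

```python
# Toy alternating minimization for f(u, v) = (u*v - 3)^2 + lam*(u^2 + v^2).
# Fixing one variable turns the problem into a scalar regularized
# least-squares in the other, with a closed-form minimizer -- the same
# structure as alternating between two blocks of unknowns in a larger solver.
def f(u, v, lam):
    return (u * v - 3.0) ** 2 + lam * (u * u + v * v)

def alternating_minimization(u, v, lam=0.1, iters=200):
    values = [f(u, v, lam)]
    for _ in range(iters):
        u = 3.0 * v / (v * v + lam)   # argmin over u with v fixed (df/du = 0)
        v = 3.0 * u / (u * u + lam)   # argmin over v with u fixed (df/dv = 0)
        values.append(f(u, v, lam))
    return u, v, values

u, v, values = alternating_minimization(1.0, 2.0)
# Each exact block update can only decrease f, so `values` is non-increasing.
```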
This paper proposes a new self-learning data-driven methodology that can develop the failure criteria of unknown anisotropic ductile materials from a minimal number of experimental tests. Establishing failure criteria of anisotropic ductile materials normally requires time-consuming tests and manual data evaluation; the proposed method overcomes these practical challenges. The methodology combines four ideas: 1) a deep learning neural network (DLNN)-based material constitutive model, 2) self-learning inverse finite element (SELIFE) simulation, 3) algorithmic identification of failure points from the self-learned stress-strain curves, and 4) derivation of the failure criteria through symbolic regression with genetic programming. The stress update and the algorithmic tangent operator were formulated in terms of DLNN parameters for nonlinear finite element analysis. Then, the SELIFE simulation algorithm gradually makes the DLNN model learn highly complex multi-axial stress-strain relationships, guided by the experimental boundary measurements. Following failure point identification, self-learning data-driven failure criteria are eventually developed with the help of a reliable symbolic regression algorithm. The methodology and the resulting failure criteria were verified by comparison with reference failure criteria and by simulations with different material orientations, respectively.
Chlorine-based disinfection is ubiquitous in conventional drinking water treatment (DWT) and serves to mitigate threats of acute microbial disease caused by pathogens that may be present in source water. An important index of disinfection efficiency is the free chlorine residual (FCR), a regulated disinfection parameter in the US that indirectly measures disinfectant power for prevention of microbial recontamination during DWT and distribution. This work demonstrates how machine learning (ML) can be implemented to improve FCR forecasting when supplied with water quality data from a real, full-scale chlorine disinfection system in Georgia, USA. More precisely, a gradient-boosting ML method (CatBoost) was developed from a full year of DWT plant-generated chlorine disinfection data, including water quality parameters (e.g., temperature, turbidity, pH) and operational process data (e.g., flow rates), to predict FCR. Four gradient-boosting models were implemented, with the highest-performing achieving a coefficient of determination, R2, of 0.937. Shapley additive explanation values were used to interpret the model's results, uncovering that standard DWT operating parameters, although non-intuitive and theoretically non-causal, vastly improved prediction performance. These results provide a base case for data-driven DWT disinfection supervision and suggest process monitoring methods to provide better information to plant operators for implementation of safe chlorine dosing to maintain optimum FCR.
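The gradient-boosting mechanism behind a model such as CatBoost can be sketched with plain regression stumps fitted to residuals; the one-dimensional data below are synthetic, and the real model adds many features, categorical handling, and oblivious trees:

```python
# Minimal gradient boosting for squared error: each round fits a depth-1
# regression tree (stump) to the current residuals and adds a shrunken copy
# of it to the ensemble -- the core mechanism of boosted-tree predictors.
def fit_stump(x, residuals):
    best = None
    for split in sorted(set(x))[:-1]:              # candidate thresholds
        left = [r for xi, r in zip(x, residuals) if xi <= split]
        right = [r for xi, r in zip(x, residuals) if xi > split]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lm if xi <= split else rm)) ** 2
                  for xi, r in zip(x, residuals))
        if best is None or err < best[0]:
            best = (err, split, lm, rm)
    return best[1:]

def boost(x, y, rounds=50, lr=0.3):
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        split, lm, rm = fit_stump(x, residuals)    # fit the residuals
        stumps.append((split, lm, rm))
        pred = [p + lr * (lm if xi <= split else rm)
                for xi, p in zip(x, pred)]         # shrunken update
    return stumps, pred

# Synthetic noiseless step response the ensemble can recover exactly.
x = [i / 10.0 for i in range(20)]
y = [1.0 if xi < 1.0 else 3.0 for xi in x]
stumps, pred = boost(x, y)
```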
Funding: supported by the National Natural Science Foundation of China (62173333, 12271522), the Beijing Natural Science Foundation (Z210002), and the Research Fund of Renmin University of China (2021030187).
Funding: supported by the National Natural Science Foundation of China (U21A20166), in part by the Science and Technology Development Foundation of Jilin Province (20230508095RC), in part by the Development and Reform Commission Foundation of Jilin Province (2023C034-3), and in part by the Exploration Foundation of the State Key Laboratory of Automotive Simulation and Control.
Funding: Supported in part by the National Natural Science Foundation of China (62222301, 62073085, 62073158, 61890930-5, 62021003); the National Key Research and Development Program of China (2021ZD0112302, 2021ZD0112301, 2018YFC1900800-5); and the Beijing Natural Science Foundation (JQ19013).
Abstract: Reinforcement learning (RL) has roots in dynamic programming and is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are presented, where the main results for discrete-time systems and continuous-time systems are surveyed, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, where event-based design, robust stabilization, and game design are reviewed. Moreover, extensions of ADP for addressing control problems in complex environments have attracted enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly advance ADP formulations. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey of ADP and RL for advanced control applications demonstrates their remarkable potential within the artificial intelligence era. In addition, they also play a vital role in promoting environmental protection and industrial intelligence.
Funding: Supported by the Beijing Natural Science Foundation (L223025); the National Natural Science Foundation of China (62201067); and the R&D Program of the Beijing Municipal Education Commission (KM202211232008).
Abstract: Recently, orthogonal time frequency space (OTFS) modulation was proposed to alleviate severe Doppler effects in high-mobility scenarios. Most current OTFS detection schemes rely on perfect channel state information (CSI). However, in real-life systems, channel parameters change constantly and are often difficult to capture and describe. In this paper, we summarize existing research on OTFS detection based on data-driven deep learning (DL) and propose three new network structures: a residual network (ResNet), a dense network (DenseNet), and a residual dense network (RDN) for OTFS detection. Detection schemes based on data-driven paradigms do not require a model that is mathematically tractable. Meanwhile, compared with the existing fully connected deep neural network (FC-DNN) and standard convolutional neural network (CNN), these three new networks can alleviate the problems of exploding and vanishing gradients. Simulations show that the RDN achieves the best performance among the three proposed schemes owing to its combination of shallow and deep features. The RDN can mitigate the performance loss caused by traditional networks failing to fully utilize all hierarchical information.
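The vanishing-gradient advantage of residual structures mentioned above can be illustrated with a small numpy sketch (depth, width, and random weights are illustrative assumptions): in a plain deep chain the end-to-end Jacobian is a product of layer Jacobians and collapses, while a residual chain multiplies factors of the form I + W and keeps the signal path alive.

```python
import numpy as np

# Compare the end-to-end Jacobian of a plain deep linear chain with a
# residual chain of the same depth.  Depth, width, and weight scale are
# illustrative assumptions.
rng = np.random.default_rng(3)
depth, dim = 30, 8
layers = [0.2 * rng.standard_normal((dim, dim)) / np.sqrt(dim)
          for _ in range(depth)]

plain = np.eye(dim)
resid = np.eye(dim)
for W in layers:
    plain = W @ plain                    # plain chain: J = W_L ... W_1
    resid = (np.eye(dim) + W) @ resid    # residual: J = (I + W_L) ... (I + W_1)

# The plain product collapses toward zero; the residual product does not.
print(np.linalg.norm(plain), np.linalg.norm(resid))
```

This is the same mechanism, at linear-algebra scale, that lets ResNet/DenseNet-style detectors train much deeper than an FC-DNN or plain CNN.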
Funding: This work is supported by the National Key R&D Program of China (No. 2022ZD0117501); the Singapore RIE2020 Advanced Manufacturing and Engineering Programmatic Grant by the Agency for Science, Technology and Research (A*STAR) under grant No. A1898b0043; the Tsinghua University Initiative Scientific Research Program; and the Low Carbon Energy Research Funding Initiative by A*STAR under grant No. A-8000182-00-00.
Abstract: Membrane technologies are becoming increasingly versatile and helpful today for sustainable development. Machine learning (ML), an essential branch of artificial intelligence (AI), has substantially impacted the research and development norms for new energy and environmental materials. This review provides an overview of and perspectives on ML methodologies and their applications in membrane design and discovery. A brief overview of membrane technologies is first provided, along with current bottlenecks and potential solutions. Through an applications-based perspective on AI-aided membrane design and discovery, we further show how ML strategies are applied to the membrane discovery cycle (including membrane material design, membrane application, membrane process design, and knowledge extraction) in various membrane systems, ranging from gas and liquid separation membranes to fuel cell membranes. Furthermore, best practices for integrating ML methods with specific application targets in membrane design and discovery are presented, and an ideal paradigm is proposed. The challenges to be addressed and the prospects of AI applications in membrane discovery are also highlighted at the end.
Funding: Supported by the China Postdoctoral Science Foundation (2021M702304) and the Natural Science Foundation of Shandong Province (ZR20210E260).
Abstract: The production capacity of shale oil reservoirs after hydraulic fracturing is influenced by a complex interplay of geological characteristics, engineering quality, and well conditions. These relationships, nonlinear in nature, pose challenges for accurate description through physical models. While field data provides insights into real-world effects, its limited volume and quality restrict its utility. Complementing this, numerical simulation models offer effective support. To harness the strengths of both data-driven and model-driven approaches, this study established a shale oil production capacity prediction model based on a machine learning combination model. Leveraging fracturing development data from 236 wells in the field, a data-driven method employing the random forest algorithm is implemented to identify the main controlling factors for different types of shale oil reservoirs. Through a combination model integrating the support vector machine (SVM) algorithm and a back-propagation neural network (BPNN), a model-driven shale oil production capacity prediction model is developed, capable of responding swiftly to shale oil development performance under varying geological, fluid, and well conditions. The results of numerical experiments show that the proposed method improves R2 by 22.5% and 5.8% compared to singular machine learning models such as SVM and BPNN, showcasing its superior precision in predicting shale oil production capacity across diverse datasets.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 42050104) and the Science Foundation of SINOPEC Group (Grant No. P20030).
Abstract: A comprehensive and precise analysis of shale gas production performance is crucial for evaluating resource potential, designing a field development plan, and making investment decisions. However, quantitative analysis can be challenging because production performance is dominated by complex interactions among a series of geological and engineering factors. In fact, each factor can be viewed as a player who makes cooperative contributions to the production payoff within the constraints of physical laws and models. Inspired by this idea, we propose a hybrid data-driven analysis framework in this study, in which the contributions of dominant factors are quantitatively evaluated, production is precisely forecasted, and development optimization suggestions are comprehensively generated. More specifically, game theory and machine learning models are coupled to determine the dominant geological and engineering factors. The Shapley value, which has a definite physical meaning, is employed to quantitatively measure the effects of individual factors. A multi-model-fused stacked model is trained for production forecasting, which provides the basis for derivative-free optimization algorithms to optimize the development plan. The complete workflow is validated with actual production data collected from the Fuling shale gas field, Sichuan Basin, China. The validation results show that the proposed procedure can draw rigorous conclusions with quantified evidence and thereby provide specific and reliable suggestions for development plan optimization. Compared with traditional and experience-based approaches, the hybrid data-driven procedure is superior in terms of both efficiency and accuracy.
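The Shapley-value attribution described above can be computed exactly for a tiny cooperative game. The three "factors" and the coalition payoffs in v below are illustrative assumptions, not values from the Fuling field study:

```python
from itertools import combinations
from math import factorial

# Exact Shapley values for a toy cooperative "production payoff" game.
# The characteristic function v maps each coalition of factors to a payoff;
# all numbers here are illustrative.
players = ("geology", "fracturing", "spacing")
v = {
    (): 0.0,
    ("geology",): 4.0, ("fracturing",): 3.0, ("spacing",): 1.0,
    ("geology", "fracturing"): 9.0, ("geology", "spacing"): 6.0,
    ("fracturing", "spacing"): 5.0,
    ("geology", "fracturing", "spacing"): 12.0,
}

def shapley(player):
    # Average marginal contribution of `player` over all join orders.
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for r in range(n):
        for coal in combinations(others, r):
            s = tuple(sorted(coal, key=players.index))
            s_with = tuple(sorted(coal + (player,), key=players.index))
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            total += weight * (v[s_with] - v[s])
    return total

phi = {p: shapley(p) for p in players}
print(phi)  # the contributions sum to v(grand coalition) by efficiency
```

In practice the exact enumeration is exponential in the number of factors, which is why SHAP-style approximations are paired with machine learning models as in the paper.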
Abstract: The effectiveness of data-driven learning (DDL) has been tested on Chinese learners using sample corpora of English articles. The result shows that independent manipulation of the corpora on the part of the learner cannot ensure the success of DDL.
Abstract: A case study was made to explore whether the teacher's role in data-driven learning (DDL) can be minimized. The outcome shows that the teacher's role in offering explicit instruction may be indispensable and even central to the acquisition of English articles.
Funding: Financially supported by the National Natural Science Foundation of China (Nos. 52122408, 52071023, 51901013, and 52101019) and the Fundamental Research Funds for the Central Universities (University of Science and Technology Beijing, Nos. FRF-TP-2021-04C1 and 06500135).
Abstract: With the rapid development of artificial intelligence technology and increasing amounts of material data, machine learning- and artificial intelligence-assisted design of high-performance steel materials is becoming a mainstream paradigm in materials science. Machine learning methods, grounded in an interdisciplinary field spanning computer science, statistics, and materials science, are good at discovering correlations among numerous data points. Compared with traditional physical modeling methods in materials science, the main advantage of machine learning is that it sidesteps the complex physical mechanisms of the material itself and provides a new perspective for the research and development of novel materials. This review starts with data preprocessing and an introduction to different machine learning models, including algorithm selection and model evaluation. Then, successful cases of applying machine learning methods in the field of steel research are reviewed, organized around the themes of optimizing composition, structure, processing, and performance. The application of machine learning methods to performance-oriented inverse design of material composition and to the detection of steel defects is also reviewed. Finally, the applicability and limitations of machine learning in the materials field are summarized, and future directions and prospects are discussed.
Funding: Supported by the National Key R&D Program of China (2018AAA0101400); the National Natural Science Foundation of China (61921004, 62173251, U1713209, 62236002); the Fundamental Research Funds for the Central Universities; and the Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control.
Abstract: In this paper, a reinforcement learning-based multi-battery energy storage system (MBESS) scheduling policy is proposed to minimize consumers' electricity cost. The MBESS scheduling problem is modeled as a Markov decision process (MDP) with unknown transition probability. However, the optimal value function is time-dependent and difficult to obtain because of the periodicity of the electricity price and residential load. Therefore, a series of time-independent action-value functions is proposed to describe each period of a day. To approximate each action-value function, a corresponding critic network is established and cascaded with the other critic networks according to the time sequence. Then, the continuous management strategy is obtained from the related actor network. Moreover, a two-stage learning protocol, including offline and online learning stages, is provided for detailed implementation in real-time battery management. Numerical experimental examples are given to demonstrate the effectiveness of the developed algorithm.
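The idea of one time-indexed action-value function per period can be sketched with a tiny tabular stand-in. The prices, battery size, and rewards below are illustrative assumptions; the paper itself uses cascaded critic networks and continuous actions rather than tables:

```python
import random

# Minimal per-period action-value tables for battery scheduling.
# One table per period of the (periodic) day, cascaded along time.
random.seed(0)
T = 4                                   # periods in a day
price = [0.1, 0.5, 0.4, 0.2]            # $/kWh per period (illustrative)
levels = 3                              # battery SoC levels: 0, 1, 2 kWh
actions = (-1, 0, 1)                    # discharge / idle / charge 1 kWh

Q = [[[0.0] * len(actions) for _ in range(levels)] for _ in range(T)]
alpha, eps = 0.1, 0.2

def step(t, soc, a):
    soc2 = min(max(soc + a, 0), levels - 1)
    # reward: discharging earns the current price, charging pays it
    return soc2, -price[t] * (soc2 - soc)

for episode in range(20000):
    soc = 0
    for t in range(T):
        if random.random() < eps:       # epsilon-greedy exploration
            ai = random.randrange(len(actions))
        else:
            ai = max(range(len(actions)), key=lambda i: Q[t][soc][i])
        soc2, r = step(t, soc, actions[ai])
        nxt = max(Q[t + 1][soc2]) if t < T - 1 else 0.0
        Q[t][soc][ai] += alpha * (r + nxt - Q[t][soc][ai])
        soc = soc2

# Learned greedy policy: charge in the cheap period 0, sell in period 1.
best0 = actions[max(range(len(actions)), key=lambda i: Q[0][0][i])]
print(best0)
```

Because the price pattern repeats daily, one small table per period suffices here; the paper's cascaded critics play the same role when the state and action spaces are continuous.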
Funding: Partially supported by the National Natural Science Foundation of China (61751306, 61801208, 61671233); the Jiangsu Science Foundation (BK20170650); the Postdoctoral Science Foundation of China (BX201700118, 2017M621712); the Jiangsu Postdoctoral Science Foundation (1701118B); and the Fundamental Research Funds for the Central Universities (021014380094).
Abstract: During the past few decades, mobile wireless communications have experienced four generations of technological revolution, namely from 1G to 4G, and the deployment of the latest 5G networks is expected to take place in 2019. One fundamental question is how we can push forward the development of mobile wireless communications when they have become extremely complex and sophisticated systems. We believe that the answer lies in the huge volumes of data produced by the network itself, and machine learning may become the key to exploiting such information. In this paper, we elaborate on why the conventional model-based paradigm, which has been widely proven useful in pre-5G networks, can be less efficient or even impractical in future 5G and beyond mobile networks. Then, we explain how the data-driven paradigm, using state-of-the-art machine learning techniques, can become a promising solution. Finally, we provide a typical use case of the data-driven paradigm, namely proactive load balancing, in which online learning is utilized to adjust cell configurations in advance to avoid burst congestion caused by rapid traffic changes.
Abstract: Due to growing concerns regarding climate change and environmental protection, smart power generation has become essential for the economical and safe operation of both conventional thermal power plants and sustainable energy sources. Traditional first-principles model-based methods are becoming insufficient in the face of ever-growing system scale and its various uncertainties. The burgeoning era of machine learning (ML) and data-driven control (DDC) techniques promises an improved alternative to these outdated methods. This paper reviews typical applications of ML and DDC at the levels of monitoring, control, optimization, and fault detection of power generation systems, with a particular focus on uncovering how these methods can function in evaluating, counteracting, or withstanding the effects of the associated uncertainties. A holistic view of the control techniques for smart power generation is provided, from the regulation level to the planning level. The benefits of ML and DDC techniques are accordingly interpreted in terms of visibility, maneuverability, flexibility, profitability, and safety (abbreviated as the "5-TYs"). Finally, an outlook on future research and applications is presented.
Funding: This work was supported in part by the National Natural Science Foundation of China (61903028); the Youth Innovation Promotion Association, Chinese Academy of Sciences (2020137); the Lifelong Learning Machines Program from the DARPA Microsystems Technology Office; and the Army Research Laboratory (W911NF-18-2-0260).
Abstract: In this paper, we present a novel data-driven design method for a human-robot interaction (HRI) system, where a given task is achieved through cooperation between the human and the robot. The presented HRI controller design is a two-level approach consisting of a task-oriented performance optimization design and a plant-oriented impedance controller design. The task-oriented design minimizes the human effort and guarantees perfect task tracking in the outer loop, while the plant-oriented design achieves the desired impedance from the human to the robot manipulator end-effector in the inner loop. Data-driven reinforcement learning techniques are used for performance optimization in the outer loop to assign the optimal impedance parameters. In the inner loop, a velocity-free filter is designed to avoid the need to measure the end-effector velocity. On this basis, an adaptive controller is designed to achieve the desired impedance of the robot manipulator in the task space. Simulations and experiments on a robot manipulator are conducted to verify the efficacy of the presented HRI design framework.
Funding: The authors acknowledge the financial support from the China Scholarship Council (CSC) (No. 202207550010).
Abstract: The estimation of state of charge (SOC) using deep neural networks (DNNs) generally requires a considerable number of labelled samples for training, i.e., current and voltage segments with known corresponding SOCs. However, the collection of labelled samples is costly and time-consuming. In contrast, unlabelled training samples, which consist of current and voltage data with unknown SOCs, are easy to obtain. In view of this, this paper proposes an improved DNN for SOC estimation that effectively uses both a pool of unlabelled samples and a limited number of labelled samples. Besides the traditional supervised network, the proposed method uses an input reconstruction network to reformulate the time-dependency features of the voltage and current. In this way, the developed network can extract useful information from the unlabelled samples. The proposed method is validated under different drive cycles and temperature conditions. The results reveal that the SOC estimation accuracy of the DNN trained with both labelled and unlabelled samples outperforms that of a DNN trained with only a limited number of labelled samples. In addition, when a dataset with a reduced number of labelled samples is used to test the developed network, the proposed method performs well and is robust in producing model outputs with the required accuracy when the unlabelled samples are involved in the model training. Furthermore, the proposed method is evaluated with different recurrent neural networks (RNNs) applied to the input reconstruction module. The results indicate that the proposed method is feasible for various RNN algorithms and can be flexibly applied to other conditions as required.
Funding: Funded by the "SMART BATTERY" project, granted by the Villum Foundation in 2021 (project number 222860).
Abstract: State of health (SoH) estimation plays a key role in smart battery health prognostics and management. However, poor generalization, lack of labeled data, and unused measurements during aging are still major challenges to accurate SoH estimation. Toward this end, this paper proposes a self-supervised learning framework to boost the performance of battery SoH estimation. Unlike traditional data-driven methods, which rely on a considerable training dataset obtained from numerous battery cells, the proposed method achieves accurate and robust estimation using limited labeled data. A filter-based data preprocessing technique, which enables the extraction of partial capacity-voltage curves under dynamic charging profiles, is applied first. Unsupervised learning is then used to learn the aging characteristics from the unlabeled data through an auto-encoder-decoder. The learned network parameters are transferred to the downstream SoH estimation task and are fine-tuned with very few sparsely labeled data, which boosts the performance of the estimation framework. The proposed method has been validated under different battery chemistries, formats, operating conditions, and ambient temperatures. The estimation accuracy can be guaranteed using only three labeled data points from the initial 20% of life cycles, with overall errors of less than 1.14%, error distributions across all testing scenarios remaining below 4%, and robustness that increases with aging. Comparisons with purely supervised machine learning methods demonstrate the superiority of the proposed method. This simple and data-efficient estimation framework is promising for real-world applications under a variety of scenarios.
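The "pretrain on unlabeled data, fine-tune on a few labels" idea can be sketched with PCA standing in for the auto-encoder-decoder and synthetic partial charging curves standing in for real battery data. Both substitutions are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

# Pretrain-then-fine-tune sketch for SoH estimation.  PCA plays the role of
# the unsupervised encoder; the curves, noise level, and head are illustrative.
rng = np.random.default_rng(0)

def make_cell(soh):
    # 20-point fake partial charging curve whose shape scales with SoH
    x = np.linspace(0, 1, 20)
    return soh * np.sin(np.pi * x) + 0.01 * rng.standard_normal(20)

soh_all = rng.uniform(0.8, 1.0, 200)
X_unlabeled = np.stack([make_cell(s) for s in soh_all])

# Step 1: unsupervised "pretraining" -- learn a 3-D representation.
mean = X_unlabeled.mean(axis=0)
_, _, Vt = np.linalg.svd(X_unlabeled - mean, full_matrices=False)
encode = lambda X: (np.atleast_2d(X) - mean) @ Vt[:3].T

# Step 2: fine-tune a lightly regularized linear head on only 3 labeled cells,
# chosen to span the SoH range.
order = np.argsort(soh_all)
idx = [order[0], order[100], order[-1]]
A = np.c_[encode(X_unlabeled[idx]), np.ones(3)]
w = np.linalg.solve(A.T @ A + 1e-4 * np.eye(4), A.T @ soh_all[idx])

# Evaluate on fresh cells.
soh_test = rng.uniform(0.8, 1.0, 50)
X_test = np.stack([make_cell(s) for s in soh_test])
pred = np.c_[encode(X_test), np.ones(50)] @ w
print(np.abs(pred - soh_test).mean())
```

The point of the sketch is the division of labor: the representation is learned from the plentiful unlabeled curves, so the labeled stage only has to fit a tiny head, which is why a handful of labels can suffice.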
Funding: This work is funded by the National Natural Science Foundation of China (Nos. 42202292, 42141011) and the Program for Jilin University (JLU) Science and Technology Innovative Research Team (No. 2019TD-35). The authors would also like to thank the reviewers and editors, whose critical comments were very helpful in preparing this article.
Abstract: To reduce CO₂ emissions in response to global climate change, shale reservoirs could be ideal candidates for long-term carbon geo-sequestration involving multi-scale transport processes. However, most current CO₂ sequestration models do not adequately consider multiple transport mechanisms. Moreover, the evaluation of CO₂ storage processes usually involves laborious and time-consuming numerical simulations unsuitable for practical prediction and decision-making. In this paper, an integrated model involving gas diffusion, adsorption, dissolution, slip flow, and Darcy flow is proposed to accurately characterize CO₂ storage in depleted shale reservoirs, supporting the establishment of a training database. On this basis, a hybrid physics-informed data-driven neural network (HPDNN) is developed as a deep learning surrogate for prediction and inversion. By incorporating multiple sources of scientific knowledge, the HPDNN can be configured with limited simulation resources, significantly accelerating the forward and inversion processes. Furthermore, the HPDNN can more intelligently predict injection performance, precisely perform reservoir parameter inversion, and reasonably evaluate the CO₂ storage capacity under complicated scenarios. The validation and test results demonstrate that the HPDNN can ensure high accuracy and strong robustness across an extensive applicability range when dealing with field data containing multiple noise sources. This study has tremendous potential to replace traditional modeling tools for prediction and decision-making in CO₂ storage projects in depleted shale reservoirs.
Funding: Supported by a Research Grant from the Kajima Foundation; JST CREST Grant No. JPMJCR1911, Japan; and JSPS KAKENHI (Nos. 17K06633, 21K04351).
Abstract: Data-driven computing in elasticity attempts to directly use experimental material data, without constructing an empirical model of the constitutive relation, to predict the equilibrium state of a structure subjected to a specified external load. Provided that a data set comprising stress-strain pairs of a material is available, a data-driven method using the kernel method and regularized least-squares was developed to extract a manifold on which the points in the data set approximately lie (Kanno 2021, Jpn. J. Ind. Appl. Math.). From the perspective of physical experiments, the stress field cannot be measured directly, while the displacement and force fields are measurable. In this study, we extend the previous kernel method to the situation in which pairs of displacement and force, instead of pairs of stress and strain, are available as an input data set. A new regularized least-squares problem is formulated in this setting, and an alternating minimization algorithm is proposed to solve it.
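The kernel-method-plus-regularized-least-squares ingredient can be sketched as standard kernel ridge regression on synthetic pairs. The Gaussian kernel, bandwidth, and fake stress-strain data below are illustrative assumptions; the paper's manifold-extraction and alternating-minimization formulation goes well beyond this:

```python
import numpy as np

# Kernel regularized least-squares (kernel ridge regression) sketch: fit a
# constitutive-like curve from sampled pairs without an explicit material model.
rng = np.random.default_rng(1)
strain = np.sort(rng.uniform(0, 1, 40))
stress = np.tanh(3 * strain) + 0.02 * rng.standard_normal(40)   # fake data

def k(a, b, width=0.1):
    # Gaussian kernel matrix between two 1-D sample sets
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * width ** 2))

lam = 1e-3                                 # regularization weight
K = k(strain, strain)
alpha = np.linalg.solve(K + lam * np.eye(len(strain)), stress)

def predict(x):
    return k(np.atleast_1d(x), strain) @ alpha

x_new = np.array([0.25, 0.5, 0.75])
print(predict(x_new))          # should track tanh(3 x) closely
```

The regularization term lam keeps the solve well-posed and controls how tightly the fitted manifold follows noisy samples, the same trade-off the paper's displacement-force formulation has to manage.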
Funding: Supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (2020R1A2B5B01001899) (Grantee: GJY, http://www.nrf.re.kr) and the Institute of Engineering Research at Seoul National University (Grantee: GJY, http://www.snu.ac.kr). The authors are grateful for their support.
Abstract: This paper first proposes a new self-learning data-driven methodology that can develop the failure criteria of unknown anisotropic ductile materials from a minimal number of experimental tests. Establishing failure criteria for anisotropic ductile materials requires time-consuming tests and manual data evaluation. The proposed method can overcome such practical challenges. The methodology is formalized by combining four ideas: 1) a deep learning neural network (DLNN)-based material constitutive model; 2) self-learning inverse finite element (SELIFE) simulation; 3) algorithmic identification of failure points from the self-learned stress-strain curves; and 4) derivation of the failure criteria through symbolic regression via genetic programming. The stress update and the algorithmic tangent operator were formulated in terms of DLNN parameters for nonlinear finite element analysis. The SELIFE simulation algorithm then gradually makes the DLNN model learn highly complex multi-axial stress-strain relationships, guided by the experimental boundary measurements. Following failure point identification, self-learning data-driven failure criteria are eventually developed with the help of a reliable symbolic regression algorithm. The methodology and the self-learning data-driven failure criteria were verified by comparison with reference failure criteria and by simulations with different material orientations, respectively.
Funding: Supported by the US Department of Agriculture's National Institute of Food and Agriculture, Agriculture and Food Research Initiative, Water for Food Production Systems (No. 2018-68011-28371); the National Science Foundation (USA) (Nos. 1936928, 2112533); the US Department of Agriculture's National Institute of Food and Agriculture (No. 2020-67021-31526); and the US Environmental Protection Agency (No. 840080010).
Abstract: Chlorine-based disinfection is ubiquitous in conventional drinking water treatment (DWT) and serves to mitigate threats of acute microbial disease caused by pathogens that may be present in source water. An important index of disinfection efficiency is the free chlorine residual (FCR), a regulated disinfection parameter in the US that indirectly measures disinfectant power for the prevention of microbial recontamination during DWT and distribution. This work demonstrates how machine learning (ML) can be implemented to improve FCR forecasting when supplied with water quality data from a real, full-scale chlorine disinfection system in Georgia, USA. More precisely, a gradient-boosting ML method (CatBoost) was developed from a full year of DWT plant-generated chlorine disinfection data, including water quality parameters (e.g., temperature, turbidity, pH) and operational process data (e.g., flowrates), to predict FCR. Four gradient-boosting models were implemented, with the highest-performing one achieving a coefficient of determination, R2, of 0.937. Shapley additive explanation (SHAP) values were used to interpret the model's results, uncovering that standard DWT operating parameters, although non-intuitive and theoretically non-causal, vastly improved prediction performance. These results provide a base case for data-driven DWT disinfection supervision and suggest process monitoring methods that provide better information to plant operators for implementing safe chlorine dosing to maintain optimum FCR.
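The gradient-boosting principle behind CatBoost-style FCR prediction can be sketched with regression stumps fitted stage-wise to residuals. The synthetic "water quality" features and target below are illustrative assumptions, not plant data:

```python
import numpy as np

# Minimal gradient-boosting sketch: depth-1 trees (stumps) fitted to the
# residuals of the running prediction, squared-error loss, shrinkage lr.
rng = np.random.default_rng(2)
n = 400
X = rng.uniform(0, 1, (n, 3))            # stand-ins for temperature, pH, flow
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.05 * rng.standard_normal(n)

def fit_stump(X, r):
    # Best single split (feature, threshold) minimizing squared error on r.
    best = None
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= thr
            if left.all() or (~left).all():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            sse = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, thr, lv, rv)
    return best[1:]

pred = np.full(n, y.mean())
lr = 0.1                                  # shrinkage (learning rate)
for _ in range(200):
    j, thr, lv, rv = fit_stump(X, y - pred)
    pred += lr * np.where(X[:, j] <= thr, lv, rv)

print(np.mean((y - pred) ** 2))          # training MSE well below var(y)
```

Real CatBoost adds ordered boosting, categorical-feature handling, and deeper trees, but the stage-wise residual fitting shown here is the core mechanism.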