For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation of the reference trajectory becomes the objective. This study investigates solutions using the P-type learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation. Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, it is discovered that the desired performance may not be attainable when faced with incomplete information. To address this issue, an extended iterative learning control scheme is introduced, in which the tracking errors are modified through output data sampling; this incurs a low memory footprint and offers flexibility in learning gain design. The input sequence is shown to converge towards the desired input, resulting in an output that is closest to the given reference in the least-squares sense. Numerical simulations are provided to validate the theoretical findings.
Funding: supported by the National Natural Science Foundation of China (62173333, 12271522), the Beijing Natural Science Foundation (Z210002), and the Research Fund of Renmin University of China (2021030187).
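To make the role of gradient information concrete, the following minimal sketch runs a gradient-informed P-type-style update on a toy lifted linear plant y = Gu; the plant matrix, step size, and reference are illustrative assumptions rather than the uncertain systems treated in the paper.

```python
import numpy as np

# Minimal sketch of a gradient-informed P-type-style update on a toy lifted
# linear plant y = G u. The plant G, step size, and reference are
# illustrative assumptions; the paper treats uncertain systems whose
# gradients are known only imprecisely.
T = 50
rng = np.random.default_rng(0)
G = np.tril(rng.uniform(0.5, 1.0, (T, T)))   # causal (lower-triangular) plant
yd = np.sin(np.linspace(0, 2 * np.pi, T))    # reference, possibly unachievable
u = np.zeros(T)
gamma = 1.0 / np.linalg.norm(G, 2) ** 2      # small step ensures contraction

for k in range(500):                         # iteration (trial) index
    e = yd - G @ u                           # tracking error of trial k
    u = u + gamma * (G.T @ e)                # gradient of 0.5*||yd - G u||^2

print("least-squares tracking error:", np.linalg.norm(yd - G @ u))
```

A plain P-type update would use e in place of G.T @ e; the abstract's point is that, for unachievable references, such gradient information cannot be dispensed with if the least-squares-best output is the goal.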
Aiming at the tracking problem of a class of discrete nonaffine nonlinear multi-input multi-output (MIMO) repetitive systems subjected to separable and nonseparable disturbances, a novel data-driven iterative learning control (ILC) scheme based on zeroing neural networks (ZNNs) is proposed. First, the equivalent dynamic linearization data model is obtained by means of dynamic linearization technology, which exists theoretically in the iteration domain. Then, an iterative extended state observer (IESO) is developed to estimate the disturbance and the coupling between systems, and the decoupled dynamic linearization model is obtained for the purpose of controller synthesis. To solve the zero-seeking tracking problem with inherent tolerance of noise, an ILC based on a noise-tolerant modified ZNN is proposed. The strict assumptions imposed on the initialization conditions of each iteration in existing ILC methods can be entirely removed with our method. In addition, theoretical analysis indicates that the modified ZNN can converge to the exact solution of the zero-seeking tracking problem. Finally, a generalized example and an application-oriented example are presented to verify the effectiveness and superiority of the proposed scheme.
Funding: supported by the National Natural Science Foundation of China (U21A20166), in part by the Science and Technology Development Foundation of Jilin Province (20230508095RC), in part by the Development and Reform Commission Foundation of Jilin Province (2023C034-3), and in part by the Exploration Foundation of the State Key Laboratory of Automotive Simulation and Control.
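As a flavor of noise-tolerant zero-seeking dynamics, here is a scalar sketch under assumed gains and signals: a ZNN with an added error-integral term drives the residual of a time-varying equation to zero despite a constant disturbance on the update. This is a generic illustration of the mechanism, not the paper's modified ZNN or its ILC embedding.

```python
import numpy as np

# Scalar sketch of noise-tolerant zeroing-neural-network (ZNN) dynamics:
# drive the residual e(t) = a(t)*x(t) - b(t) of a time-varying equation to
# zero via e_dot = -g1*e - g2*integral(e); the integral term rejects a
# constant actuation disturbance. Gains, signals, and the noise level are
# illustrative assumptions.
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
a, a_dot = 2.0 + np.sin(t), np.cos(t)   # time-varying coefficient and derivative
b, b_dot = np.cos(t), -np.sin(t)        # time-varying target and derivative
g1, g2, noise = 10.0, 50.0, 0.5

x, e_int = 1.0, 0.0
for k in range(len(t) - 1):
    e = a[k] * x - b[k]
    e_int += e * dt
    # design x_dot so that e_dot = -g1*e - g2*e_int in the noise-free case
    x_dot = (-g1 * e - g2 * e_int - a_dot[k] * x + b_dot[k]) / a[k]
    x += (x_dot + noise) * dt           # constant disturbance on the update

print("final residual:", a[-1] * x - b[-1])   # stays small despite the noise
```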
The effectiveness of data-driven learning (DDL) has been tested on Chinese learners by using sample corpora of English articles. The result shows that an independent manipulation of the corpora on the part of the learner cannot by itself ensure the success of DDL.
A case study has been made to explore whether the teacher's role in data-driven learning (DDL) can be minimized. The outcome shows that the teacher's role in offering explicit instruction may be indispensable and even central to the acquisition of English articles.
Reinforcement learning (RL) has roots in dynamic programming, and it is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are displayed, where the main results for discrete-time systems and continuous-time systems are surveyed, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, where event-based design, robust stabilization, and game design are reviewed. Moreover, extensions of ADP for addressing control problems in complex environments are attracting enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly advance ADP formulations. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey of ADP and RL for advanced control applications demonstrates their remarkable potential in the artificial intelligence era, as well as their vital role in promoting environmental protection and industrial intelligence.
Funding: supported in part by the National Natural Science Foundation of China (62222301, 62073085, 62073158, 61890930-5, 62021003), the National Key Research and Development Program of China (2021ZD0112302, 2021ZD0112301, 2018YFC1900800-5), and the Beijing Natural Science Foundation (JQ19013).
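As a minimal anchor for the regulation problem that ADP methods generalize, the following sketch runs value iteration on a discrete-time LQR instance; the system matrices and cost weights are illustrative assumptions.

```python
import numpy as np

# Value-iteration sketch for discrete-time LQR, the canonical regulation
# problem that ADP/adaptive-critic methods generalize to unknown or
# nonlinear dynamics. System matrices and cost weights are assumptions.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

P = np.zeros((2, 2))                       # critic: V(x) = x' P x
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)        # greedy (actor) gain
    P = Q + K.T @ R @ K + (A - B @ K).T @ P @ (A - B @ K)    # value update

print("converged feedback gain for u = -K x:\n", K)
```

In ADP proper, the critic P and actor K would be learned from measured data rather than computed from known (A, B); this sketch only fixes the underlying optimality recursion.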
Membrane technologies are becoming increasingly versatile and helpful today for sustainable development. Machine learning (ML), an essential branch of artificial intelligence (AI), has substantially impacted the research and development paradigm for new energy and environmental materials. This review provides an overview of, and perspectives on, ML methodologies and their applications in membrane design and discovery. A brief overview of membrane technologies is first provided, along with the current bottlenecks and potential solutions. Through an applications-based perspective on AI-aided membrane design and discovery, we further show how ML strategies are applied across the membrane discovery cycle (including membrane material design, membrane application, membrane process design, and knowledge extraction) in various membrane systems, ranging from gas and liquid separation membranes to fuel cell membranes. Furthermore, best practices for integrating ML methods with specific application targets in membrane design and discovery are presented, and an ideal paradigm is proposed. The challenges to be addressed and the prospects of AI applications in membrane discovery are highlighted at the end.
Funding: This work is supported by the National Key R&D Program of China (No. 2022ZD0117501), the Singapore RIE2020 Advanced Manufacturing and Engineering Programmatic Grant by the Agency for Science, Technology and Research (A*STAR) under grant no. A1898b0043, the Tsinghua University Initiative Scientific Research Program, and the Low Carbon Energy Research Funding Initiative by A*STAR under grant number A-8000182-00-00.
During the past few decades, mobile wireless communications have experienced four generations of technological revolution, namely from 1G to 4G, and the deployment of the latest 5G networks is expected to take place in 2019. One fundamental question is how we can push forward the development of mobile wireless communications when the network has become an extremely complex and sophisticated system. We believe that the answer lies in the huge volumes of data produced by the network itself, and machine learning may become the key to exploiting such information. In this paper, we elaborate on why the conventional model-based paradigm, which has been widely proven useful in pre-5G networks, can be less efficient or even impractical in future 5G and beyond mobile networks. Then, we explain how the data-driven paradigm, using state-of-the-art machine learning techniques, can become a promising solution. Finally, we provide a typical use case of the data-driven paradigm, i.e., proactive load balancing, in which online learning is utilized to adjust cell configurations in advance to avoid burst congestion caused by rapid traffic changes.
Funding: partially supported by the National Natural Science Foundation of China (61751306, 61801208, 61671233), the Jiangsu Science Foundation (BK20170650), the Postdoctoral Science Foundation of China (BX201700118, 2017M621712), the Jiangsu Postdoctoral Science Foundation (1701118B), and the Fundamental Research Funds for the Central Universities (021014380094).
Due to growing concerns regarding climate change and environmental protection, smart power generation has become essential for the economical and safe operation of both conventional thermal power plants and sustainable energy. Traditional first-principle model-based methods are becoming insufficient when faced with the ever-growing system scale and its various uncertainties. The burgeoning era of machine learning (ML) and data-driven control (DDC) techniques promises an improved alternative to these outdated methods. This paper reviews typical applications of ML and DDC at the levels of monitoring, control, optimization, and fault detection of power generation systems, with a particular focus on uncovering how these methods can function in evaluating, counteracting, or withstanding the effects of the associated uncertainties. A holistic view is provided of the control techniques of smart power generation, from the regulation level to the planning level. The benefits of ML and DDC techniques are accordingly interpreted in terms of visibility, maneuverability, flexibility, profitability, and safety (abbreviated as the "5-TYs"). Finally, an outlook on future research and applications is presented.
Recently, orthogonal time frequency space (OTFS) modulation was presented to alleviate severe Doppler effects in high-mobility scenarios. Most current OTFS detection schemes rely on perfect channel state information (CSI). However, in real-life systems, channel parameters change constantly and are often difficult to capture and describe. In this paper, we summarize the existing research on OTFS detection based on data-driven deep learning (DL) and propose three new network structures: a residual network (ResNet), a dense network (DenseNet), and a residual dense network (RDN) for OTFS detection. Detection schemes based on data-driven paradigms do not require a model that is mathematically easy to handle. Meanwhile, compared with the existing fully connected deep neural network (FC-DNN) and standard convolutional neural network (CNN), these three new networks can alleviate the problems of exploding and vanishing gradients. Simulations show that the RDN performs best among the three proposed schemes, owing to its combination of shallow and deep features; it avoids the performance loss incurred when a traditional network fails to exploit all the hierarchical information.
Funding: supported by the Beijing Natural Science Foundation (L223025), the National Natural Science Foundation of China (62201067), and the R&D Program of the Beijing Municipal Education Commission (KM202211232008).
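As a sketch of the RDN's building unit under assumed channel counts and depths, a residual dense block concatenates the outputs of densely connected convolutions (shallow and deep features), fuses them with a 1x1 convolution, and adds the result back to the input. The PyTorch code below is illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Sketch of a residual dense block (RDB), the building unit of a residual
# dense network (RDN): densely connected convolutions whose concatenated
# shallow and deep features are fused by a 1x1 convolution and added back
# to the input. Channel counts, depth, and input size are assumptions.
class ResidualDenseBlock(nn.Module):
    def __init__(self, channels=32, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(layers)
        )
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)  # 1x1 fusion

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))  # dense links
        return x + self.fuse(torch.cat(feats, dim=1))                # local residual

y = ResidualDenseBlock()(torch.randn(1, 32, 16, 16))  # e.g. a delay-Doppler grid
print(y.shape)  # torch.Size([1, 32, 16, 16])
```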
In this paper, we present a novel data-driven design method for the human-robot interaction (HRI) system, where a given task is achieved through cooperation between the human and the robot. The presented HRI controller design is a two-level approach consisting of a task-oriented performance optimization design and a plant-oriented impedance controller design. The task-oriented design minimizes the human effort and guarantees perfect task tracking in the outer loop, while the plant-oriented design achieves the desired impedance from the human to the robot manipulator end-effector in the inner loop. Data-driven reinforcement learning techniques are used for performance optimization in the outer loop to assign the optimal impedance parameters. In the inner loop, a velocity-free filter is designed to avoid the requirement of end-effector velocity measurement. On this basis, an adaptive controller is designed to achieve the desired impedance of the robot manipulator in the task space. Simulations and experiments on a robot manipulator are conducted to verify the efficacy of the presented HRI design framework.
Funding: This work was supported in part by the National Natural Science Foundation of China (61903028), the Youth Innovation Promotion Association, Chinese Academy of Sciences (2020137), the Lifelong Learning Machines Program from the DARPA/Microsystems Technology Office, and the Army Research Laboratory (W911NF-18-2-0260).
A comprehensive and precise analysis of shale gas production performance is crucial for evaluating resource potential, designing a field development plan, and making investment decisions. However, quantitative analysis can be challenging because production performance is dominated by complex interactions among a series of geological and engineering factors. In fact, each factor can be viewed as a player making cooperative contributions to the production payoff within the constraints of physical laws and models. Inspired by this idea, we propose a hybrid data-driven analysis framework in which the contributions of dominant factors are quantitatively evaluated, production is precisely forecasted, and development optimization suggestions are comprehensively generated. More specifically, game theory and machine learning models are coupled to determine the dominant geological and engineering factors. The Shapley value, with its definite physical meaning, is employed to quantitatively measure the effects of individual factors. A multi-model-fused stacked model is trained for production forecasting, which provides the basis for derivative-free optimization algorithms to optimize the development plan. The complete workflow is validated with actual production data collected from the Fuling shale gas field, Sichuan Basin, China. The validation results show that the proposed procedure can draw rigorous conclusions with quantified evidence and thereby provide specific and reliable suggestions for development plan optimization. Compared with traditional and experience-based approaches, the hybrid data-driven procedure is advanced in terms of both efficiency and accuracy.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 42050104) and the Science Foundation of SINOPEC Group (Grant No. P20030).
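To illustrate how a factor's contribution can be quantified, the sketch below estimates a Shapley value by Monte Carlo sampling of player orderings; the toy payoff function stands in for the trained production-forecast model, and all names are assumptions.

```python
import numpy as np

# Monte Carlo estimate of a Shapley value for one factor: average the
# factor's marginal contribution to the payoff over random orderings of
# the players. The payoff v is a toy stand-in for the trained stacked
# production-forecast model evaluated on factor coalitions.
def shapley_mc(v, n_players, player, n_samples=5000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        perm = rng.permutation(n_players)
        pos = int(np.where(perm == player)[0][0])
        before = frozenset(perm[:pos])             # coalition preceding the player
        total += v(before | {player}) - v(before)  # marginal contribution
    return total / n_samples

# Toy payoff: diminishing returns in coalition size, factor 0 worth double.
v = lambda s: np.sqrt(sum(2.0 if i == 0 else 1.0 for i in s))
print([round(shapley_mc(v, 4, p), 3) for p in range(4)])
```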
The production capacity of shale oil reservoirs after hydraulic fracturing is influenced by a complex interplay of geological characteristics, engineering quality, and well conditions. These relationships, nonlinear in nature, pose challenges for accurate description through physical models. While field data provide insights into real-world effects, their limited volume and quality restrict their utility; numerical simulation models offer complementary support. To harness the strengths of both data-driven and model-driven approaches, this study established a shale oil production capacity prediction model based on a machine learning combination model. Leveraging fracturing development data from 236 wells in the field, a data-driven method employing the random forest algorithm is implemented to identify the main controlling factors for different types of shale oil reservoirs. Through a combination model integrating the support vector machine (SVM) algorithm and a back-propagation neural network (BPNN), a model-driven shale oil production capacity prediction model is developed, capable of responding swiftly to shale oil development performance under varying geological, fluid, and well conditions. Numerical experiments show that the proposed method improves R² by 22.5% and 5.8% over the singular machine learning models SVM and BPNN, respectively, showcasing its superior precision in predicting shale oil production capacity across diverse datasets.
Funding: supported by the China Postdoctoral Science Foundation (2021M702304) and the Natural Science Foundation of Shandong Province (ZR20210E260).
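The two-stage idea can be sketched with scikit-learn on synthetic data: a random forest ranks the controlling factors, and an SVM regressor plus BP-style neural network are combined to predict capacity from the top-ranked factors. Simple prediction averaging stands in for the paper's combination scheme here, and the data, sizes, and weights are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Stage 1: random forest ranks the main controlling factors.
rng = np.random.default_rng(0)
X = rng.normal(size=(236, 8))                        # 236 wells, 8 factors
y = 3 * X[:, 0] - 2 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=236)

rf = RandomForestRegressor(random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:3]  # main controlling factors
print("dominant factor indices:", top)

# Stage 2: SVM + BP-style neural network combination (averaged predictions).
combo = VotingRegressor([
    ("svm", make_pipeline(StandardScaler(), SVR(C=10.0))),
    ("bpnn", make_pipeline(StandardScaler(),
                           MLPRegressor(hidden_layer_sizes=(32, 16),
                                        max_iter=2000, random_state=0))),
])
combo.fit(X[:, top], y)
print("combined-model R^2:", round(combo.score(X[:, top], y), 3))
```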
Data-driven computing in elasticity attempts to directly use experimental material data, without constructing an empirical model of the constitutive relation, to predict the equilibrium state of a structure subjected to a specified external load. Provided that a data set comprising stress-strain pairs of a material is available, a data-driven method using the kernel method and regularized least-squares was developed to extract a manifold on which the points in the data set approximately lie (Kanno 2021, Jpn. J. Ind. Appl. Math.). From the perspective of physical experiments, the stress field cannot be directly measured, while the displacement and force fields are measurable. In this study, we extend the previous kernel method to the situation in which pairs of displacement and force, instead of pairs of stress and strain, are available as the input data set. A new regularized least-squares problem is formulated in this setting, and an alternating minimization algorithm is proposed to solve it.
Funding: supported by a Research Grant from the Kajima Foundation, JST CREST Grant No. JPMJCR1911, Japan, and JSPS KAKENHI (Nos. 17K06633, 21K04351).
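The following generic sketch, under an assumed Gaussian kernel and 1-D synthetic data, shows the two ingredients in combination: a kernel regularized least-squares fit representing the data manifold, alternated with an exact minimization that pulls adjusted outputs toward that manifold. It illustrates the algorithmic pattern, not the paper's displacement-force formulation.

```python
import numpy as np

# Alternating minimization of  ||y - y_obs||^2 + ||y - K a||^2 + lam * a' K a
# over (a, y): the a-step is a kernel regularized least-squares fit of the
# manifold, the y-step exactly pulls the adjusted outputs toward it. The
# Gaussian kernel, lam, and 1-D synthetic data are illustrative assumptions.
rng = np.random.default_rng(0)
x_obs = np.linspace(-1.0, 1.0, 40)                       # measurable input
y_obs = np.tanh(2 * x_obs) + 0.1 * rng.normal(size=40)   # noisy measurable output

kern = lambda p, q: np.exp(-(p[:, None] - q[None, :]) ** 2 / 0.1)
K, lam = kern(x_obs, x_obs), 1e-2
y = y_obs.copy()                                         # adjusted outputs

for _ in range(20):                                      # alternating minimization
    a = np.linalg.solve(K + lam * np.eye(len(y)), y)     # fit manifold to y
    y = 0.5 * (y_obs + K @ a)                            # exact minimizer over y

print("distance of data to extracted manifold:", np.linalg.norm(y_obs - K @ a))
```

Each step solves its subproblem exactly, so the objective decreases monotonically, which is the usual convergence argument for alternating minimization schemes of this kind.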
This paper proposes a new self-learning data-driven methodology that can develop the failure criteria of unknown anisotropic ductile materials from a minimal number of experimental tests. Establishing failure criteria for anisotropic ductile materials ordinarily requires time-consuming tests and manual data evaluation; the proposed method overcomes these practical challenges. The methodology combines four ideas: 1) a deep learning neural network (DLNN)-based material constitutive model, 2) self-learning inverse finite element (SELIFE) simulation, 3) algorithmic identification of failure points from the self-learned stress-strain curves, and 4) derivation of the failure criteria through symbolic regression with genetic programming. The stress update and the algorithmic tangent operator were formulated in terms of DLNN parameters for nonlinear finite element analysis. The SELIFE simulation algorithm then gradually makes the DLNN model learn highly complex multi-axial stress-strain relationships, guided by the experimental boundary measurements. Following failure point identification, self-learning data-driven failure criteria are eventually developed with the help of a reliable symbolic regression algorithm. The methodology and the self-learning data-driven failure criteria were verified by comparison with reference failure criteria and by simulation with different material orientations, respectively.
Funding: supported by the National Research Foundation of Korea (NRF) grant of the Korean government (MSIP) (2020R1A2B5B01001899) (Grantee: GJY, http://www.nrf.re.kr) and the Institute of Engineering Research at Seoul National University (Grantee: GJY, http://www.snu.ac.kr). The authors are grateful for their support.
Materials development has historically been driven by human needs and desires, and this is likely to continue in the foreseeable future. The global population is expected to reach ten billion by 2050, which will promote increasingly large demands for clean and high-efficiency energy, personalized consumer products, secure food supplies, and professional healthcare. New functional materials that are made and tailored for targeted properties or behaviors will be the key to tackling this challenge. Traditionally, advanced materials have been found empirically or through experimental trial-and-error approaches. As big data generated by modern experimental and computational techniques becomes more readily available, data-driven or machine learning (ML) methods have opened new paradigms for the discovery and rational design of materials. In this review article, we provide a brief introduction to various ML methods and related software and tools. Main ideas and basic procedures for employing ML approaches in materials research are highlighted. We then summarize recent important applications of ML to the large-scale screening and optimal design of polymer and porous materials, catalytic materials, and energetic materials. Finally, concluding remarks and an outlook are provided.
With the rapid development of artificial intelligence technology and the growth of materials data, machine learning- and artificial intelligence-assisted design of high-performance steels is becoming a mainstream paradigm in materials science. Machine learning methods, rooted in an interdisciplinary field spanning computer science, statistics, and materials science, are good at discovering correlations among numerous data points. Compared with traditional physical modeling in materials science, the main advantage of machine learning is that it sidesteps the complex physical mechanisms of the material itself and provides a new perspective for the research and development of novel materials. This review starts with data preprocessing and an introduction to different machine learning models, including algorithm selection and model evaluation. Then, successful cases of applying machine learning methods in the field of steel research are reviewed under the main theme of optimizing composition, structure, processing, and performance. The application of machine learning methods to the performance-oriented inverse design of material composition and the detection of steel defects is also reviewed. Finally, the applicability and limitations of machine learning in the materials field are summarized, and future directions and prospects are discussed.
Funding: financially supported by the National Natural Science Foundation of China (Nos. 52122408, 52071023, 51901013, and 52101019) and the Fundamental Research Funds for the Central Universities (University of Science and Technology Beijing, Nos. FRF-TP-2021-04C1 and 06500135).
Sheet metal forming technologies have been intensively studied for decades to meet the increasing demand for lightweight metal components. To overcome the springback occurring in sheet metal forming processes, numerous compensation methods have been developed. However, for most existing methods, the development cycle is still considerably time-consuming and demands high computational or capital cost. In this paper, a novel theory-guided regularization method for training deep neural networks (DNNs), embedded in a learning system, is introduced to learn the intrinsic relationship between the workpiece shape after springback and the required process parameter, e.g., loading stroke, in sheet metal bending processes. By directly bridging the workpiece shape to the process parameter, issues concerning springback in process design are circumvented. The regularization method draws on a well-recognized theory in material mechanics, Swift's law, by penalizing divergence from this law throughout network training. The regularization is implemented in a multi-task learning network architecture, with the learning of extra tasks regularized during training. The stress-strain curve describing the material properties and the prior knowledge used to guide learning are stored in the database and the knowledge base, respectively. One can obtain the predicted loading stroke for a new workpiece shape by importing the target geometry through the user interface. In this research, the neural models were found to outperform a traditional machine learning model, the support vector regression model, in experiments with different amounts of training data. Through a series of studies with varying training data structure and amount, workpiece material, and applied bending processes, the theory-guided DNN has been shown to achieve superior generalization and learning consistency compared to purely data-driven DNNs, especially when only scarce and scattered experimental data are available for training, which is often the case in practice. The theory-guided DNN could also be applicable to other sheet metal forming processes, providing an alternative method for compensating springback with a significantly shorter development cycle and less capital cost and computational requirement than traditional compensation methods in the sheet metal forming industry.
Funding: supported by the Aviation Industry Corporation of China (AVIC) Manufacturing Technology Institute (MTI) and in part by the China Scholarship Council (CSC) (201908060236).
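To make the regularization mechanism concrete, here is a hedged PyTorch sketch: a multi-task network predicts the loading stroke and, as an extra task, a stress value whose divergence from Swift's law sigma = K*(eps0 + eps)^n is penalized during training. The network sizes, material constants, penalty weight, and synthetic data are all assumptions, not the paper's implementation.

```python
import torch

# Sketch of theory-guided regularization: a multi-task network predicts the
# loading stroke from shape features and, as an extra task, a stress at a
# probed strain; divergence of that output from Swift's law
# sigma = K*(eps0 + eps)**n is penalized. All constants are assumptions.
K_, EPS0, N_ = 500.0, 0.01, 0.2                  # assumed Swift's-law parameters
net = torch.nn.Sequential(torch.nn.Linear(9, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 2))        # -> (stroke, stress)

shape = torch.randn(64, 8)                       # workpiece shape descriptors
stroke_target = torch.randn(64, 1)               # measured loading strokes
eps = torch.rand(64, 1) * 0.3                    # strains probing the extra task
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(300):
    out = net(torch.cat([shape, eps], dim=1))
    stroke, sigma = out[:, :1], out[:, 1:]
    data_loss = torch.mean((stroke - stroke_target) ** 2)
    swift = K_ * (EPS0 + eps) ** N_              # theory target for the stress
    theory_loss = torch.mean(((sigma - swift) / K_) ** 2)  # normalized penalty
    loss = data_loss + 0.1 * theory_loss         # theory term regularizes training
    opt.zero_grad(); loss.backward(); opt.step()

print("final loss:", float(loss))
```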
Liquid loading is one of the most frequently encountered phenomena in gas pipeline transportation, reducing transmission efficiency and threatening flow assurance. However, most traditional mechanism models are semi-empirical and must be re-solved under different working conditions through a complex calculation process. The development of big data technology and artificial intelligence makes it possible to establish data-driven models. This paper aims to establish a liquid loading prediction model for natural gas pipelines with high generalization ability based on machine learning. First, according to the characteristics of actual gas pipelines, a variety of reasonable combinations of working conditions, such as different gas velocities, pipe diameters, water contents, and outlet pressures, were set, and multiple undulating pipeline topographies with different elevation differences were established. A large number of simulations were then performed with the simulator OLGA to obtain the data required for machine learning. After data preprocessing, six supervised learning algorithms, including support vector machine (SVM), decision tree (DT), random forest (RF), artificial neural network (ANN), naive Bayes classification (NBC), and the K-nearest-neighbor algorithm (KNN), were compared to evaluate liquid loading prediction performance. Finally, the better-performing RF and KNN were selected for parameter tuning and then applied to an actual pipeline for liquid loading location prediction. Compared with OLGA simulation, the established data-driven model not only improves calculation efficiency and reduces workload, but can also provide technical support for gas pipeline flow assurance.
Funding: supported by the National Science and Technology Major Project of China (2016ZX05066005-001), the Zhejiang Province Key Research and Development Plan (2021C03152), and the Zhoushan Science and Technology Project (2021C21011).
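A minimal scikit-learn sketch of the six-algorithm comparison follows; the synthetic features stand in for the OLGA-generated working-condition data (gas velocity, pipe diameter, water content, outlet pressure, elevation difference), and the default hyperparameters are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the simulated working-condition features; the label
# indicates whether liquid loading occurs.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.normal(size=1000) < 0).astype(int)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "ANN": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
    "NBC": GaussianNB(),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()   # 5-fold accuracy
    print(f"{name}: {score:.3f}")
```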
Advanced technologies are required in future mobile wireless networks to support services with highly diverse requirements in terms of high data rate and reliability, low latency, and massive access. Deep learning (DL), one of the most exciting developments in machine learning and big data, has recently shown great potential in the study of wireless communications. In this article, we provide a literature review of the applications of DL in the physical layer. First, we analyze the limitations of existing signal processing techniques in terms of model accuracy, global optimality, and computational scalability. Next, we provide a brief review of classical DL frameworks. Subsequently, we discuss recent DL-based physical layer technologies, including both DL-based signal processing modules and end-to-end systems: in the former, deep neural networks replace one or several conventional functional modules, whereas the latter aim to replace the entire transceiver structure. Lastly, we discuss the open issues and research directions of the DL-based physical layer in terms of model complexity, data quality, data representation, and algorithm reliability.
Funding: supported by the National Natural Science Foundation of China under Grants 61801208, 61931023, and U1936202.
The estimation of state of charge (SOC) using deep neural networks (DNNs) generally requires a considerable number of labelled samples for training, i.e., current and voltage pieces whose corresponding SOCs are known. However, the collection of labelled samples is costly and time-consuming. In contrast, unlabelled training samples, which consist of current and voltage data with unknown SOCs, are easy to obtain. In view of this, this paper proposes an improved DNN for SOC estimation that effectively uses both a pool of unlabelled samples and a limited number of labelled samples. Besides the traditional supervised network, the proposed method uses an input reconstruction network to reformulate the time-dependency features of the voltage and current; in this way, the developed network can extract useful information from the unlabelled samples. The proposed method is validated under different drive cycles and temperature conditions. The results reveal that the SOC estimation accuracy of the DNN trained with both labelled and unlabelled samples outperforms that of a DNN trained with only a limited number of labelled samples. In addition, when datasets with reduced numbers of labelled samples are used to test the developed network, the proposed method performs well and robustly produces model outputs with the required accuracy, provided the unlabelled samples are involved in model training. Furthermore, the proposed method is evaluated with different recurrent neural networks (RNNs) applied to the input reconstruction module. The results indicate that the proposed method is feasible for various RNN algorithms and can be flexibly applied to other conditions as required.
Funding: financially supported by the China Scholarship Council (CSC) (No. 202207550010).
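A minimal sketch of the semi-supervised training idea, assuming a GRU encoder and synthetic data: a shared encoder feeds both an SOC regression head trained on the few labelled current/voltage windows and an input-reconstruction head trained on the many unlabelled windows. The architecture, window length, and loss weighting are assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

# Semi-supervised SOC estimation sketch: a shared GRU encoder feeds an SOC
# regression head (labelled windows) and an input-reconstruction head
# (unlabelled windows), so unlabelled data shape the learned features.
class SOCNet(nn.Module):
    def __init__(self, feat=16, window=20):
        super().__init__()
        self.encoder = nn.GRU(2, feat, batch_first=True)  # (current, voltage)
        self.soc_head = nn.Linear(feat, 1)                # supervised task
        self.recon_head = nn.Linear(feat, 2 * window)     # reconstruct the input

    def forward(self, x):
        _, h = self.encoder(x)                            # final hidden state
        h = h.squeeze(0)
        return self.soc_head(h), self.recon_head(h).view(x.shape[0], -1, 2)

net = SOCNet()
x_lab, y_lab = torch.randn(32, 20, 2), torch.rand(32, 1)  # few labelled windows
x_unl = torch.randn(256, 20, 2)                           # many unlabelled windows
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(50):
    soc_pred, _ = net(x_lab)
    _, recon = net(x_unl)
    loss = nn.functional.mse_loss(soc_pred, y_lab) \
         + 0.1 * nn.functional.mse_loss(recon, x_unl)     # unlabelled signal
    opt.zero_grad(); loss.backward(); opt.step()

print("final combined loss:", float(loss))
```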