Abstract: In this paper, platoons of autonomous vehicles operating in urban road networks are considered. From a methodological point of view, the problem of interest consists of formally characterizing vehicle state trajectory tubes by means of routing decisions complying with traffic congestion criteria. To this end, a novel distributed control architecture is conceived by taking advantage of two methodologies: deep reinforcement learning and model predictive control. On one hand, the routing decisions are obtained by using a distributed reinforcement learning algorithm that exploits available traffic data at each road junction. On the other hand, a bank of model predictive controllers is in charge of computing the most adequate control action for each involved vehicle. Such tasks are here combined into a single framework: the deep reinforcement learning output (action) is translated into a set-point to be tracked by the model predictive controller; conversely, the current vehicle position, resulting from the application of the control move, is exploited by the deep reinforcement learning unit for improving its reliability. The main novelty of the proposed solution lies in its hybrid nature: on one hand, it fully exploits deep reinforcement learning capabilities for decision-making purposes; on the other hand, time-varying hard constraints are always satisfied during the dynamical platoon evolution imposed by the computed routing decisions. To efficiently evaluate the performance of the proposed control architecture, a co-design procedure, involving the SUMO and MATLAB platforms, is implemented so that complex operating environments can be used, and the information coming from road maps (links, junctions, obstacles, traffic lights, etc.) and vehicle state trajectories can be shared and exchanged. Finally, by considering as operating scenario a real entire city block and a platoon of eleven vehicles described by double-integrator models, several simulations have been performed with the aim of highlighting the main features of the proposed approach. Moreover, it is important to underline that in different operating scenarios the proposed reinforcement learning scheme is capable of significantly reducing traffic congestion phenomena when compared with well-reputed competitors.
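To make the interface between the two layers concrete, the following minimal sketch (our illustration, not the authors' code) shows how a routing action could be mapped to a set-point that a constrained MPC then tracks for a double-integrator vehicle. The map data, horizon, weights, and all function names are assumptions.

```python
# Hedged sketch: DRL routing action -> waypoint set-point -> MPC with
# hard speed/input constraints on a double-integrator vehicle model.
import numpy as np
import cvxpy as cp

dt, N = 0.1, 20                      # sample time and horizon (assumed)
A = np.array([[1, dt], [0, 1]])      # state = [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])

def mpc_step(x0, setpoint, v_max=15.0, u_max=3.0):
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.square(x[0, k + 1] - setpoint) + 0.1 * cp.square(u[0, k])
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(x[1, k + 1]) <= v_max,   # hard speed constraint
                 cp.abs(u[0, k]) <= u_max]       # hard input constraint
    cp.Problem(cp.Minimize(cost), cons).solve()
    return float(u.value[0, 0])                  # first move only (receding horizon)

# Hypothetical glue: the RL action indexes a junction whose position
# becomes the set-point; the applied move updates the vehicle state,
# which is fed back to the DRL router.
junction_pos = {0: 50.0, 1: 120.0}   # assumed map data
x = np.array([0.0, 0.0])
rl_action = 0                        # would come from the DRL unit
u0 = mpc_step(x, junction_pos[rl_action])
x = A @ x + (B * u0).ravel()
```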
Funding: Supported in part by the National Natural Science Foundation of China (62222301, 62073085, 62073158, 61890930-5, 62021003), the National Key Research and Development Program of China (2021ZD0112302, 2021ZD0112301, 2018YFC1900800-5), and the Beijing Natural Science Foundation (JQ19013).
Abstract: Reinforcement learning (RL) has roots in dynamic programming and is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are presented, where the main results for discrete-time systems and continuous-time systems are surveyed, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, where event-based design, robust stabilization, and game design are reviewed. Moreover, the extensions of ADP for addressing control problems under complex environments attract enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly promote the ADP formulation. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey on ADP and RL for advanced control applications demonstrates their remarkable potential within the artificial intelligence era. In addition, they also play a vital role in promoting environmental protection and industrial intelligence.
Funding: Supported by the National Key Research & Development Program - Intergovernmental International Science and Technology Innovation Cooperation Project (2021YFE0112800), the National Natural Science Foundation of China (Key Program: 62136003; 62073142), the Fundamental Research Funds for the Central Universities (222202417006), and Shanghai AI Lab.
Abstract: The gasoline inline blending process has widely used real-time optimization techniques to achieve optimization objectives, such as minimizing the cost of production. However, the effectiveness of real-time optimization in gasoline blending relies on accurate blending models and is challenged by stochastic disturbances. Thus, we propose a real-time optimization algorithm based on the soft actor-critic (SAC) deep reinforcement learning strategy to optimize gasoline blending without relying on a single blending model and to be robust against disturbances. Our approach constructs the environment using nonlinear blending models and feedstocks with disturbances. The algorithm incorporates the Lagrange multiplier and path constraints in the reward design to manage sparse product constraints. Carefully abstracted states facilitate algorithm convergence, and the normalized action vector in each optimization period allows the agent to generalize to some extent across different target production scenarios. Through these well-designed components, the SAC-based algorithm outperforms real-time optimization methods based on either nonlinear or linear programming. It even demonstrates performance comparable to the time-horizon-based real-time optimization method, which requires knowledge of uncertainty models, confirming its capability to handle uncertainty without accurate models. Our simulation illustrates a promising approach to freeing real-time optimization of the gasoline blending process from uncertainty models that are difficult to acquire in practice.
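The Lagrange-multiplier reward shaping mentioned above can be sketched as a generic dual-ascent construction; the variable names, step size, and constraint encoding below are assumptions, not the paper's exact design.

```python
# Hedged sketch: product-quality constraints enter the reward through
# multipliers that grow whenever a constraint is violated (dual ascent).
import numpy as np

lmbda = np.zeros(3)      # one multiplier per quality constraint (assumed count)
eta = 0.05               # multiplier step size (assumed)

def shaped_reward(profit, g):
    """g[i] <= 0 means constraint i is satisfied (e.g., an octane-spec slack)."""
    return profit - float(lmbda @ np.maximum(g, 0.0))

def update_multipliers(g):
    global lmbda
    lmbda = np.maximum(lmbda + eta * g, 0.0)   # dual ascent, kept nonnegative
```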
Funding: Supported by the National Natural Science Foundation of China (Grant No. 12072090).
Abstract: This work proposes a recorded recurrent twin delayed deep deterministic (RRTD3) policy gradient algorithm to solve the challenge of constructing guidance laws for intercepting endoatmospheric maneuvering missiles with uncertainties and observation noise. The attack-defense engagement scenario is modeled as a partially observable Markov decision process (POMDP). Given the benefits of recurrent neural networks (RNNs) in processing sequence information, an RNN layer is incorporated into the agent's policy network to alleviate the bottleneck of traditional deep reinforcement learning methods when dealing with POMDPs. The measurements from the interceptor's seeker during each guidance cycle are combined into one sequence as the input to the policy network, since the detection frequency of an interceptor is usually higher than its guidance frequency. During training, the hidden states of the RNN layer in the policy network are recorded to overcome the partial observability that this RNN layer causes inside the agent. The training curves show that the proposed RRTD3 successfully enhances data efficiency, training speed, and training stability. The test results confirm the advantages of the RRTD3-based guidance laws over some conventional guidance laws.
Funding: Supported by the China National Petroleum Corporation Limited - China University of Petroleum (Beijing) Strategic Cooperation Science and Technology Project (ZLZX2020-03).
Abstract: In traditional well log depth matching tasks, manual adjustments are required, which is significantly labor-intensive for multiple wells and leads to low work efficiency. This paper introduces a multi-agent deep reinforcement learning (MARL) method to automate the depth matching of multi-well logs. This method defines multiple top-down dual sliding windows based on a convolutional neural network (CNN) to extract and capture similar feature sequences on well logs, and it establishes an interaction mechanism between agents and the environment to control the depth matching process. Specifically, the agent selects an action to translate or scale the feature sequence based on the double deep Q-network (DDQN). Through the feedback of the reward signal, it evaluates the effectiveness of each action, aiming to obtain the optimal strategy and improve the accuracy of the matching task. Our experiments show that MARL can automatically perform depth matching for well logs in multiple wells and reduce manual intervention. In the application to the oil field, a comparative analysis of dynamic time warping (DTW), deep Q-learning network (DQN), and DDQN methods revealed that the DDQN algorithm, with its dual-network evaluation mechanism, significantly improves performance by identifying and aligning more details in the well log feature sequences, thus achieving higher depth matching accuracy.
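For reference, the dual-network evaluation at the heart of DDQN computes its learning targets as below. This is the standard double-DQN mechanism the abstract refers to; network shapes and tensor layouts are assumed.

```python
# Standard double-DQN target: the online net *selects* the next action,
# the target net *evaluates* it, which reduces overestimation bias.
import torch

def ddqn_targets(online, target, s_next, r, done, gamma=0.99):
    """r, done: float tensors of shape (batch,); s_next: (batch, state_dim)."""
    with torch.no_grad():
        a_star = online(s_next).argmax(dim=1, keepdim=True)    # selection
        q_eval = target(s_next).gather(1, a_star).squeeze(1)   # evaluation
        return r + gamma * (1.0 - done) * q_eval
```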
Funding: Funded by the Researchers Supporting Program at King Saud University (RSPD2023R809).
Abstract: Geopolymer concrete emerges as a promising avenue for sustainable development and offers an effective solution to environmental problems. Its attributes as a non-toxic, low-carbon, and economical substitute for conventional cement concrete, coupled with its elevated compressive strength and reduced shrinkage, position it as a pivotal material for diverse applications spanning from architectural structures to transportation infrastructure. In this context, this study uses machine learning (ML) algorithms to increase the accuracy and interpretability of predicting the compressive strength of geopolymer concrete in the civil engineering field. To achieve this goal, a new approach using convolutional neural networks (CNNs) has been adopted. The study builds a comprehensive dataset consisting of the compositional and strength parameters of 162 geopolymer concrete mixes, all containing Class F fly ash. The selection of optimal input parameters is guided by two distinct criteria: the first leverages insights garnered from previous research on the influence of individual features on compressive strength; the second scrutinizes the impact of these features within the model's predictive framework. Key to enhancing the CNN model's performance is the careful determination of the optimal hyperparameters. Through a systematic trial-and-error process, the study ascertains the ideal number of epochs and the optimal value of k for k-fold cross-validation, a technique vital to the model's robustness. The model's predictive power is rigorously assessed via a suite of performance metrics and comprehensive score analyses. Furthermore, the model's adaptability is gauged by integrating a secondary dataset into its predictive framework, facilitating a comparative evaluation against conventional prediction methods. To unravel the CNN model's learning trajectory, a loss plot is used to elucidate its learning rate, and bivariate plots unveil nuanced trends and interactions among variables, consistent with earlier research. The study culminates in findings that confirm the CNN model's accurate prediction of geopolymer concrete compressive strength. The promising prediction accuracy can guide the development of innovative geopolymer concrete formulations, reinforcing the material's role as an eco-conscious and robust construction choice and paving the way for innovation and efficiency in civil engineering.
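The k-fold cross-validation step can be illustrated with a small 1D-CNN regressor as below; the layer sizes, epoch count, and fold count are placeholders rather than the study's tuned hyperparameters.

```python
# Illustrative k-fold evaluation of a small 1D-CNN strength regressor.
import numpy as np
from sklearn.model_selection import KFold
from tensorflow import keras

def build_cnn(n_features):
    return keras.Sequential([
        keras.layers.Input(shape=(n_features, 1)),
        keras.layers.Conv1D(32, 3, activation="relu"),
        keras.layers.Flatten(),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(1),                      # compressive strength
    ])

def kfold_rmse(X, y, k=5, epochs=100):
    """X: (n_mixes, n_features) array of mix parameters; y: strengths."""
    scores = []
    for tr, te in KFold(k, shuffle=True, random_state=0).split(X):
        model = build_cnn(X.shape[1])
        model.compile(optimizer="adam", loss="mse")
        model.fit(X[tr, :, None], y[tr], epochs=epochs, verbose=0)
        pred = model.predict(X[te, :, None], verbose=0).ravel()
        scores.append(float(np.sqrt(np.mean((pred - y[te]) ** 2))))
    return scores                                   # one RMSE per fold
```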
Abstract: The quantization algorithm compresses the original network by reducing the numerical bit width of the model, which improves the computation speed. Because different layers have different redundancy and sensitivity to data bit width, reducing the bit width results in a loss of accuracy, and it is therefore difficult to determine the optimal bit width for different parts of the network with guaranteed accuracy. Mixed precision quantization can effectively reduce the amount of computation while keeping the model accuracy basically unchanged. In this paper, a hardware-aware mixed precision quantization strategy optimal assignment algorithm adapted to low bit width is proposed, and reinforcement learning is used to automatically predict the mixed precision that meets the constraints of hardware resources. In the state-space design, the standard deviation of weights is used to measure the distribution difference of data, the execution-speed feedback of simulated neural network accelerator inference is used as the environment to limit the action space of the agent, and the accuracy of the quantized model after retraining is used as the reward function to guide the agent through deep reinforcement learning training. The experimental results show that the proposed method obtains a suitable layer-by-layer quantization strategy under the condition that the computational resource constraints are satisfied, and the model accuracy is effectively improved. The proposed method is intelligent, fairly universal, and has strong application potential in the fields of mixed precision quantization and embedded neural network model deployment.
Funding: Science and Technology Funds from the Liaoning Education Department (Serial Number: LJKZ0104).
Abstract: The motivation for this study is that the quality of deepfakes is constantly improving, which leads to the need to develop new methods for their detection. The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection, which is then used as input to the CNN. The Customized Convolutional Neural Network method is a data-augmentation-based CNN model used to generate 'fake data' or 'fake images'. This study was carried out using Python and its libraries. We used 242 films from the dataset gathered by the Deep Fake Detection Challenge, of which 199 were fake and the remaining 53 were real. Ten seconds were allotted for each video. There were 318 videos used in all, 199 of which were fake and 119 of which were real. Our proposed method achieved a testing accuracy of 91.47%, a loss of 0.342, and an AUC score of 0.92, outperforming two alternative approaches, CNN and MLP-CNN. Furthermore, our method achieved greater accuracy than contemporary models such as XceptionNet, Meso-4, EfficientNet-BO, MesoInception-4, VGG-16, and DST-Net. The novelty of this investigation is the development of a new Convolutional Neural Network (CNN) learning model that can accurately detect deepfake face photos.
Funding: Taif University Researchers Supporting Project Number (TURSP-2020/73), Taif University, Taif, Saudi Arabia.
Abstract: Heart failure is now widespread throughout the world; heart disease affects approximately 48% of the population. The disease is expensive and difficult to cure. This research paper presents machine learning models to predict heart failure. The fundamental concept is to compare the correctness of various machine learning (ML) algorithms and to use boosting algorithms to improve the models' prediction accuracy. Supervised algorithms such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Trees (DT), Random Forest (RF), and Logistic Regression (LR) are considered to achieve the best results. Boosting algorithms such as Extreme Gradient Boosting (XGBoost) and CatBoost are also used, along with Artificial Neural Networks (ANN), to improve the prediction. This research also focuses on data visualization to identify patterns, trends, and outliers in a massive data set. Python and scikit-learn are used for ML; TensorFlow and Keras, along with Python, are used for ANN model training. The DT and RF algorithms achieved the highest accuracy of 95% among the classifiers, while KNN obtained the second-highest accuracy of 93.33%. XGBoost reached an accuracy of 91.67%; SVM, CatBoost, and ANN each had an accuracy of 90%; and LR had 88.33% accuracy.
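A compact version of such a classifier comparison, using the scikit-learn estimators named in the abstract, might look like the sketch below; data loading, preprocessing, and the train/test split ratio are assumptions.

```python
# Hedged sketch: train the named classifiers on one split and report
# their test accuracies side by side (untuned defaults, unlike the study).
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def compare_classifiers(X, y):
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=42)
    models = {
        "KNN": KNeighborsClassifier(),
        "SVM": SVC(),
        "DT": DecisionTreeClassifier(),
        "RF": RandomForestClassifier(),
        "LR": LogisticRegression(max_iter=1000),
    }
    return {name: m.fit(Xtr, ytr).score(Xte, yte) for name, m in models.items()}
```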
Funding: Supported by the National 863 CIMS Project Foundation (863-511-010), the Tianjin Natural Science Foundation (983602011), and the Backbone Young Teacher Project Foundation of the Ministry of Education.
Abstract: This paper describes the self-adjustment of some tuning knobs of the generalized predictive controller (GPC). A three-layer feedforward neural network was utilized to learn two key tuning knobs of GPC online, and the BP algorithm was used to train the connection weights of the neural network. This removes the difficulty of choosing these tuning knobs manually and eases the wide application of GPC to industrial plants. Simulation results illustrate the effectiveness of the method.
Abstract: Reinforcement learning is an excellent approach widely used in artificial intelligence, automatic control, and other fields. However, an ordinary reinforcement learning algorithm, such as Q-learning with a lookup table, cannot cope with extremely complex and dynamic environments due to the huge state space. To reduce the state space, a modular neural network Q-learning algorithm is proposed, which combines the Q-learning algorithm with neural networks and a modular method. Feed-forward neural networks, Elman neural networks, and radial-basis-function neural networks are separately employed to construct such an algorithm. It is revealed that the Elman neural network Q-learning algorithm has the best performance under the condition that the same neural network training method, i.e., the gradient-descent error back-propagation algorithm, is applied.
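Replacing the lookup table with a network trained by gradient-descent back-propagation amounts to a temporal-difference update of the following form; the state dimension, action count, and learning rate are placeholders for illustration.

```python
# Hedged sketch: Q-learning with a feed-forward network instead of a
# lookup table, updated by gradient descent on the squared TD error.
import torch
import torch.nn as nn

q = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))  # assumed dims
opt = torch.optim.SGD(q.parameters(), lr=1e-2)

def td_update(s, a, r, s_next, done, gamma=0.95):
    """s, s_next: float tensors of shape (4,); a: int action index."""
    target = r + gamma * (0.0 if done else q(s_next).max().item())
    loss = (q(s)[a] - target) ** 2          # squared TD error
    opt.zero_grad()
    loss.backward()
    opt.step()
```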
Abstract: Creating new molecules with desired properties is a fundamental and challenging problem in chemistry. Reinforcement learning (RL) has shown its utility in this area, where the target chemical property values can serve as a reward signal. At each step of making a new molecule, the RL agent learns to select an action from a list of many chemically valid actions for a given molecule, implying great uncertainty associated with its learning. In a traditional implementation of deep RL algorithms, deterministic neural networks are typically employed, so the agent chooses one action from a single sampled action at each step. In this paper, we propose a new strategy of applying Bayesian neural networks to RL to reduce uncertainty, so that the agent can choose one action from a pool of sampled actions at each step, and we investigate its benefits in molecule design. Our experiments suggest the Bayesian approach can create molecules of desirable chemical quality while maintaining their diversity, a very difficult goal to achieve in machine learning of molecules. We further exploited this diversity by using the generated molecules to train a generative model that yields more novel drug-like molecules absent from the training set, since novelty is essential for drug candidate molecules. In conclusion, the Bayesian approach offers a balance between exploitation and exploration in RL, and a balance between optimization and diversity in molecule design.
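One common way to realize a "pool of sampled actions" from an approximately Bayesian network is Monte-Carlo dropout, sketched below. The paper's exact Bayesian construction may well differ, so treat this purely as an illustration of the mechanism; dimensions are assumed.

```python
# Hedged sketch: keep dropout active at inference so each forward pass
# samples a network; collect one candidate action per sample, then pick
# the action that maximizes the mean Q-value across samples.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                    nn.Dropout(0.1), nn.Linear(64, 5))   # assumed dims

def sample_action_pool(state, n_samples=10):
    net.train()                        # dropout stays active
    with torch.no_grad():
        qs = torch.stack([net(state) for _ in range(n_samples)])
    pool = qs.argmax(dim=1)            # one candidate action per sample
    best = qs.mean(dim=0).argmax().item()   # mean-Q choice from the pool
    return pool, best
```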
Funding: Supported by the Scientific and Technological Innovation Project of Chongqing (No. cstc2021jxjl20010) and the Graduate Student Innovation Program of Chongqing University of Technology (No. clgycx-20203166, No. gzlcx20222061, No. gzlcx20223229).
Abstract: Deploying service nodes hierarchically at the edge of the network can effectively improve the service quality of offloaded task requests and increase the utilization of resources. In this paper, we study the task scheduling problem in the hierarchically deployed edge cloud. We first formulate the minimization of the service time of scheduled tasks in the edge cloud as a combinatorial optimization problem, and then prove the NP-hardness of the problem. Different from existing work that mostly designs heuristic approximation-based algorithms or policies to make scheduling decisions, we propose a newly designed scheduling policy, named Joint Neural Network and Heuristic Scheduling (JNNHSP), which combines a neural-network-based method with a heuristic-based solution. JNNHSP takes the Sequence-to-Sequence (Seq2Seq) model trained by Reinforcement Learning (RL) as the primary policy and adopts the heuristic algorithm as the auxiliary policy to obtain the scheduling solution, thereby achieving a good balance between the quality and the efficiency of the scheduling solution. In-depth experiments show that, compared with a variety of related policies and optimization solvers, JNNHSP achieves better performance in terms of scheduling error ratio, the degree to which the policy is affected by resource limitations, average service latency, and execution efficiency in a typical hierarchical edge cloud.
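The primary/auxiliary combination can be pictured schematically as follows; both policy callables and the service-time evaluator are assumed to exist, and the real selection logic may be richer than this.

```python
# Hedged sketch: prefer the learned (Seq2Seq) schedule when it evaluates
# better under the service-time objective, otherwise fall back to the
# heuristic schedule.
def jnnhsp_schedule(tasks, seq2seq_policy, heuristic_policy, service_time):
    rl_plan = seq2seq_policy(tasks)      # primary: RL-trained Seq2Seq model
    h_plan = heuristic_policy(tasks)     # auxiliary: heuristic algorithm
    return min((rl_plan, h_plan), key=service_time)
```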
Funding: This work was supported in part by the National Science Foundation (NSF-CSSI-2004766, NSF-PFI-2141084).
Abstract: This paper presents a machine-learning-based speedup strategy for real-time implementation of model predictive control (MPC) in emergency voltage stabilization of power systems. Despite success in various applications, real-time implementation of MPC in power systems has not been successful due to the online control computation time required for large-sized complex systems; in power systems, the computation time exceeds the available decision time used in practice by a large margin. This long-standing problem is addressed here by developing a novel MPC-based framework that i) computes an optimal strategy for nominal loads in an offline setting and adapts it for real-time scenarios by successive online control corrections at each control instant utilizing the latest measurements, and ii) employs a machine-learning-based approach for the prediction of the voltage trajectory and its sensitivity to control inputs, thereby accelerating the overall control computation multiple times over. Additionally, a realistic control coordination scheme among static var compensators (SVC), load shedding (LS), and load tap-changers (LTC) is presented that incorporates the practical delayed actions of the LTCs. The performance of the proposed scheme is validated for the IEEE 9-bus and 39-bus systems, with ±20% variations in nominal loading conditions together with contingencies. We show that our proposed methodology speeds up the online computation 20-fold, bringing it down to a practically feasible value (a fraction of a second), making MPC real-time and feasible for power system control for the first time.
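The offline-nominal-plus-online-correction idea can be sketched as a least-squares correction driven by the ML-predicted sensitivity. This is our hedged reading of the mechanism, with every name and shape assumed; the paper's actual correction law may differ.

```python
# Hedged sketch: steer the predicted voltage trajectory toward its
# reference using the learned sensitivity dV/du, then correct the
# precomputed nominal control.
import numpy as np

def corrected_control(u_nominal, v_ref, v_pred, sens):
    """sens: ML-predicted sensitivity dV/du, assumed shape (n_bus, n_ctrl)."""
    dv_needed = v_ref - v_pred                         # trajectory shortfall
    du, *_ = np.linalg.lstsq(sens, dv_needed, rcond=None)
    return u_nominal + du                              # corrected control move
```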
Funding: Supported by Imperial College London, UK, King's College London, UK, and the Engineering and Physical Sciences Research Council (EPSRC), UK.
Abstract: Reinforcement learning (RL) based control algorithms can learn control strategies for nonlinear and uncertain environments while interacting with them. Guided by the rewards generated by the environment, an RL agent can learn the control strategy directly in a model-free way instead of investigating the dynamic model of the environment. In this paper, we propose a sampled-data RL control strategy to reduce the computational demand. In the sampled-data control strategy, the whole control system has a hybrid structure, in which the plant is continuous while the controller (RL agent) adopts a discrete structure. Given that the continuous states of the plant are the input of the agent, the state-action value function is approximated by fully connected feed-forward neural networks (FCFFNN). Instead of learning the controller at every step during the interaction with the environment, the learning and acting stages are decoupled to learn the control strategy more effectively through experience replay. In the acting stage, the most effective experience obtained during the interaction with the environment is stored, and during the learning stage, the stored experience is replayed a customized number of times, which helps enhance the experience replay process. The effectiveness of the proposed approach is verified by simulation examples.
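A minimal replay buffer with decoupled acting and learning stages, where the "customized number of times" appears as an explicit knob, could look like this sketch; the agent's update rule is assumed given, and capacity and batch size are placeholders.

```python
# Hedged sketch: store transitions during the acting stage, then replay
# sampled mini-batches a configurable number of times in the learning stage.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buf = deque(maxlen=capacity)

    def store(self, transition):                 # acting stage
        self.buf.append(transition)

    def learn(self, update_fn, batch_size=32, n_replays=4):  # learning stage
        for _ in range(n_replays):               # "customized" replay count
            batch = random.sample(list(self.buf), min(batch_size, len(self.buf)))
            update_fn(batch)                     # agent's update rule (assumed)
```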
Funding: Supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI (JP22H03643), the Japan Science and Technology Agency (JST) Support for Pioneering Research Initiated by the Next Generation (SPRING) (JPMJSP2145), JST through the Establishment of University Fellowships Towards the Creation of Science Technology Innovation (JPMJFS2115), the National Natural Science Foundation of China (52078382), and the State Key Laboratory of Disaster Reduction in Civil Engineering (CE19-A-01).
Abstract: Accurately predicting fluid forces acting on the surface of a structure is crucial in engineering design. However, this task becomes particularly challenging in turbulent flow, due to the complex and irregular changes in the flow field. In this study, we propose a novel deep learning method, named mapping network-coordinated stacked gated recurrent units (MSU), for predicting pressure on a circular cylinder from velocity data. Specifically, our coordinated learning strategy is designed to extract the most critical velocity point for prediction, a process that has not been explored before. In our experiments, MSU extracts one point from a velocity field containing 121 points and utilizes this point to accurately predict 100 pressure points on the cylinder. This method significantly reduces the workload of data measurement in practical engineering applications. Our experimental results demonstrate that MSU predictions are highly similar to the real turbulent data in both spatio-temporal and individual aspects. Furthermore, the comparison results show that MSU produces more precise results, even outperforming models that use all velocity field points. Compared with state-of-the-art methods, MSU achieves an average improvement of more than 45% in various indicators such as root mean square error (RMSE). Through comprehensive physical verification, we established that MSU's predictions closely align with pressure field data obtained in real turbulence fields. This confirmation underscores the considerable potential of MSU for practical applications in real engineering scenarios. The code is available at https://github.com/zhangzm0128/MSU.
Funding: Supported in part by the Fundamental Research Funds for the Central Universities (WK2350000002).
Abstract: In this paper we present a CNN-based approach for real-time 3D hand pose estimation from depth sequences. Prior discriminative approaches have achieved remarkable success but face two main challenges. Firstly, the methods are fully supervised and hence require large amounts of annotated training data to extract the dynamic information from a hand representation. Secondly, they rely either on unreliable hand detectors based on strong assumptions or on a weak detector that often fails in situations such as complex environments and multiple hands. In contrast to these methods, this paper presents an approach that can be considered semi-supervised: it performs predictive coding of image sequences of hand poses in order to capture latent features underlying a given image without supervision. The hand is modeled using a novel latent tree dependency model (LDTM), which transforms internal joint locations into an explicit representation. Then the modeled hand topology is integrated with the pose estimator using a data-dependent method to jointly learn the latent variables of the posterior pose appearance and the pose configuration, respectively. Finally, an unsupervised error term, which is part of the recurrent architecture, ensures smooth estimation of the final pose. Experiments on three challenging public datasets, ICVL, MSRA, and NYU, demonstrate the significant performance of the proposed method, which is comparable to or better than state-of-the-art approaches.
Abstract: Diabetes is one of the fastest-growing human diseases worldwide and poses a significant threat to the population's longevity. Early prediction of diabetes is crucial to taking precautionary steps to avoid or delay its onset. In this study, we propose a Deep Dense Layer Neural Network (DDLNN) for diabetes prediction using a dataset with 768 instances and nine variables. We also apply a combination of classical machine learning (ML) algorithms and ensemble learning algorithms for effective prediction of the disease. The classical ML algorithms used are Support Vector Machine (SVM), Logistic Regression (LR), Decision Tree (DT), K-Nearest Neighbor (KNN), and Naïve Bayes (NB). We also construct ensemble models such as bagging (Random Forest) and boosting, namely AdaBoost and Extreme Gradient Boosting (XGBoost), to evaluate the performance of the prediction models. The proposed DDLNN model and the ensemble learning models were trained and tested using hyperparameter tuning and K-fold cross-validation to determine the best parameters for predicting the disease. The combined ML models used majority voting to select the best outcomes among the models. The investigation concluded that the proposed model, after hyperparameter tuning, outperformed the other learning models with an accuracy of 84.42%, a precision of 85.12%, a recall rate of 65.40%, and a specificity of 94.11%.
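The majority-voting step maps naturally onto scikit-learn's VotingClassifier; below is a sketch over a subset of the named estimators, with untuned defaults (unlike the study's tuned models) and the data split assumed.

```python
# Hedged sketch: hard (majority) voting over several of the classifiers
# named above; each base model votes with its predicted label.
from sklearn.ensemble import (VotingClassifier, RandomForestClassifier,
                              AdaBoostClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC()),
                ("rf", RandomForestClassifier()),
                ("ada", AdaBoostClassifier()),
                ("nb", GaussianNB())],
    voting="hard",               # majority vote on predicted labels
)
# Usage (data assumed): vote.fit(X_train, y_train); vote.predict(X_test)
```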
Abstract: The adaptive learning and prediction of a highly nonlinear and time-varying bioreactor benchmark process is studied using NeurOn-Line, a graphical toolkit for developing and deploying neural networks in the G2 real-time intelligent environment, and a new modified Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton algorithm. The modified BFGS algorithm for the adaptive learning of back-propagation (BP) neural networks is developed and embedded into NeurOn-Line by introducing a new learning-rate search method into the full-memory BFGS algorithm. Simulation results show that the adaptive learning and prediction neural network system can quickly track the time-varying and nonlinear behavior of the bioreactor.
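For context, the textbook full-memory BFGS inverse-Hessian update that such a method builds on is reproduced below over a flattened weight vector; the paper's modified learning-rate search is not reproduced, and the step size here is a plain placeholder.

```python
# Hedged sketch: one full-memory BFGS step on network weights w, with the
# standard inverse-Hessian update applied when the curvature condition holds.
import numpy as np

def bfgs_step(w, grad_fn, H, lr=1.0):
    """w: flattened weights; grad_fn: loss gradient; H: inverse-Hessian approx."""
    g = grad_fn(w)
    w_new = w - lr * H @ g                    # quasi-Newton search direction
    s, y = w_new - w, grad_fn(w_new) - g
    rho = 1.0 / float(y @ s)
    if np.isfinite(rho) and rho > 0:          # curvature condition y's > 0
        I = np.eye(len(w))
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
    return w_new, H
```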