Journal Articles
1,845 articles found
1. Trading in Fast-Changing Markets with Meta-Reinforcement Learning
Authors: Yutong Tian, Minghan Gao, Qiang Gao, Xiao-Hong Peng. Intelligent Automation & Soft Computing, 2024, Issue 2, pp. 175-188 (14 pages).
How to find an effective trading policy is still an open question, mainly due to the nonlinear and non-stationary dynamics of financial markets. Deep reinforcement learning, which has recently been used to develop trading strategies by automatically extracting complex features from large amounts of data, struggles to deal with fast-changing markets due to sample inefficiency. This paper applies meta-reinforcement learning, for the first time, to tackle the trading challenges faced by conventional reinforcement learning (RL) approaches in non-stationary markets. In our work, the historical trading data is divided into multiple task datasets, within each of which the market condition is relatively stationary. A model-agnostic meta-learning (MAML)-based trading method involving a meta-learner and a normal learner is then proposed. A trading policy is learned by the meta-learner across the multiple task datasets and is then fine-tuned by the normal learner on a small amount of data from a new market task before trading in it. To improve the adaptability of the MAML-based method, an ordered multiple-step updating mechanism is also proposed to explore the changing dynamics within a task market. Simulation results demonstrate that, compared to the traditional RL approach in three stock index futures markets, the proposed MAML-based trading methods increase the annualized return rate by approximately 180%, 200%, and 160%, increase the Sharpe ratio by 180%, 90%, and 170%, and decrease the maximum drawdown by 30%, 20%, and 40%, respectively.
Keywords: algorithmic trading, reinforcement learning, fast-changing market, meta-reinforcement learning
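The MAML procedure summarized in this abstract (meta-train an initialization across several task datasets, then adapt it with a few gradient steps on a new task) can be illustrated with a first-order sketch on toy linear-regression "regimes". The task family, learning rates, and loss below are illustrative assumptions, not the paper's trading setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, X, y):
    # Gradient of mean squared error for a linear model y_hat = X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def fomaml_step(w, tasks, inner_lr=0.05, meta_lr=0.01, inner_steps=1):
    # First-order MAML: adapt on each task's support set, then move the
    # meta-parameters along the gradient evaluated at the adapted weights.
    meta_grad = np.zeros_like(w)
    for Xs, ys, Xq, yq in tasks:
        w_task = w.copy()
        for _ in range(inner_steps):
            w_task -= inner_lr * loss_grad(w_task, Xs, ys)
        meta_grad += loss_grad(w_task, Xq, yq)
    return w - meta_lr * meta_grad / len(tasks)

def make_task(slope):
    # Each "market regime" is a linear map with its own slope;
    # split the samples into a support set and a query set.
    X = rng.normal(size=(16, 1))
    y = slope * X[:, 0]
    return X[:8], y[:8], X[8:], y[8:]

w = np.zeros(1)
tasks = [make_task(s) for s in (1.0, 1.2, 0.8)]
for _ in range(200):
    w = fomaml_step(w, tasks)
print(w)  # meta-initialization close to the average task slope
```

Fine-tuning on a new regime would then be a few `inner_lr` gradient steps from this shared initialization.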
2. Recent Progress in Reinforcement Learning and Adaptive Dynamic Programming for Advanced Control Applications (Cited: 2)
Authors: Ding Wang, Ning Gao, Derong Liu, Jinna Li, Frank L. Lewis. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 1, pp. 18-36 (19 pages).
Reinforcement learning (RL) has roots in dynamic programming and is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are presented, surveying the main results for discrete-time and continuous-time systems, respectively. Then, the research progress on adaptive critic control under the event-triggered framework and in uncertain environments is discussed, covering event-based design, robust stabilization, and game design. Moreover, extensions of ADP for addressing control problems in complex environments have attracted enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly advance ADP formulations. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey of ADP and RL for advanced control applications demonstrates their remarkable potential in the artificial intelligence era, as well as their role in promoting environmental protection and industrial intelligence.
Keywords: adaptive dynamic programming (ADP), advanced control, complex environment, data-driven control, event-triggered design, intelligent control, neural networks, nonlinear systems, optimal control, reinforcement learning (RL)
3. UAV-Assisted Dynamic Avatar Task Migration for Vehicular Metaverse Services: A Multi-Agent Deep Reinforcement Learning Approach (Cited: 1)
Authors: Jiawen Kang, Junlong Chen, Minrui Xu, Zehui Xiong, Yutao Jiao, Luchao Han, Dusit Niyato, Yongju Tong, Shengli Xie. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 2, pp. 430-445 (16 pages).
Avatars, as promising digital representations and service assistants of users in Metaverses, can enable drivers and passengers to immerse themselves in 3D virtual services and spaces of UAV-assisted vehicular Metaverses. However, avatar tasks include a multitude of human-to-avatar and avatar-to-avatar interactive applications, e.g., augmented reality navigation, which consume intensive computing resources, making it inefficient and impractical for vehicles to process avatar tasks locally. Fortunately, migrating avatar tasks to the nearest roadside units (RSUs) or unmanned aerial vehicles (UAVs) for execution is a promising solution to decrease computation overhead and reduce task processing latency, although the high mobility of vehicles makes it challenging for them to independently make avatar migration decisions based on current and future vehicle status. To address these challenges, in this paper we propose a novel avatar task migration system based on multi-agent deep reinforcement learning (MADRL) to execute immersive vehicular avatar tasks dynamically. Specifically, we first formulate the problem of avatar task migration from vehicles to RSUs/UAVs as a partially observable Markov decision process that can be solved by MADRL algorithms. We then design the multi-agent proximal policy optimization (MAPPO) approach as the MADRL algorithm for the avatar task migration problem. To overcome the slow convergence resulting from the curse of dimensionality and the non-stationarity caused by shared parameters in MAPPO, we further propose a transformer-based MAPPO approach using sequential decision-making models for the efficient representation of relationships among agents. Finally, to motivate terrestrial or non-terrestrial edge servers (e.g., RSUs or UAVs) to share computation resources and to ensure traceability of the sharing records, we apply smart contracts and blockchain technologies to achieve secure sharing management. Numerical results demonstrate that the proposed approach outperforms the MAPPO approach by around 2% and reduces the latency of avatar task execution by approximately 20% in UAV-assisted vehicular Metaverses.
Keywords: avatar, blockchain, Metaverses, multi-agent deep reinforcement learning, transformer, UAVs
4. A deep reinforcement learning approach to gasoline blending real-time optimization under uncertainty
Authors: Zhiwei Zhu, Minglei Yang, Wangli He, Renchu He, Yunmeng Zhao, Feng Qian. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 7, pp. 183-192 (10 pages).
The gasoline inline blending process has widely used real-time optimization techniques to achieve objectives such as minimizing production cost. However, the effectiveness of real-time optimization in gasoline blending relies on accurate blending models and is challenged by stochastic disturbances. Thus, we propose a real-time optimization algorithm based on the soft actor-critic (SAC) deep reinforcement learning strategy to optimize gasoline blending without relying on a single blending model and to be robust against disturbances. Our approach constructs the environment using nonlinear blending models and feedstocks with disturbances. The algorithm incorporates a Lagrange multiplier and path constraints in the reward design to manage sparse product constraints. Carefully abstracted states facilitate algorithm convergence, and normalizing the action vector in each optimization period allows the agent to generalize, to some extent, across different target production scenarios. Through these well-designed components, the SAC-based algorithm outperforms real-time optimization methods based on either nonlinear or linear programming. It even demonstrates performance comparable to the time-horizon-based real-time optimization method, which requires knowledge of uncertainty models, confirming its capability to handle uncertainty without accurate models. Our simulation illustrates a promising approach to freeing real-time optimization of the gasoline blending process from uncertainty models that are difficult to acquire in practice.
Keywords: deep reinforcement learning, gasoline blending, real-time optimization, petroleum, computer simulation, neural networks
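The Lagrange-multiplier reward design mentioned in the abstract can be reduced to a minimal sketch: the reward is profit minus multiplier-weighted constraint violations, and each multiplier rises by dual ascent while its constraint stays violated. The profit value, violation terms, and learning rate here are hypothetical stand-ins, not the paper's actual blending constraints:

```python
def shaped_reward(profit, violations, lam):
    # Lagrangian-penalized reward: profit minus weighted constraint violations.
    return profit - sum(l * v for l, v in zip(lam, violations))

def update_multipliers(lam, violations, lr=0.1):
    # Dual ascent: grow a multiplier while its constraint is violated,
    # keep it at zero (never negative) once the constraint is satisfied.
    return [max(0.0, l + lr * v) for l, v in zip(lam, violations)]

lam = [0.0, 0.0]
for _ in range(50):
    # Hypothetical slack: first constraint violated by 0.4, second satisfied.
    violations = [0.4, 0.0]
    r = shaped_reward(10.0, violations, lam)
    lam = update_multipliers(lam, violations)
print(lam)  # first multiplier keeps growing, second stays at zero
```

As the first multiplier grows, the penalized reward for violating that constraint falls, pushing the policy toward feasible actions.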
5. Stability behavior of the Lanxi ancient flood control levee after reinforcement with upside-down hanging wells and grouting curtain
Authors: QIN Zipeng, TIAN Yan, GAO Siyuan, ZHOU Jianfen, HE Xiaohui, HE Weizhong, GAO Jingquan. Journal of Mountain Science (SCIE, CSCD), 2024, Issue 1, pp. 84-99 (16 pages).
The stability of ancient flood control levees is mainly influenced by water level fluctuations, groundwater concentration, and rainfall. This paper takes the Lanxi ancient levee as a research object to study the evolution of its seepage, displacement, and stability before and after reinforcement with upside-down hanging wells and a grouting curtain, using numerical simulation combined with experiments and observations. The results indicate that the filled soil is less affected by water level fluctuations and groundwater concentration after reinforcement. A high groundwater level is detrimental to the levee's long-term stability, so drainage must be fully considered. The deformation of the reinforced levee is effectively controlled, since the fill deformation is mainly borne by the upside-down hanging wells. The safety factors of the levee before reinforcement vary significantly with the water level, reaching a minimum of 0.886 during the water level drawdown period, indicating a very high risk of instability. After reinforcement the safety factor reaches 1.478, improving the stability of the ancient levee by a large margin.
Keywords: stability analysis, multiple factors, anti-seepage reinforcement, upside-down hanging well, grouting curtain, ancient levee
6. Toward Trustworthy Decision-Making for Autonomous Vehicles: A Robust Reinforcement Learning Approach with Safety Guarantees
Authors: Xiangkun He, Wenhui Huang, Chen Lv. Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 77-89 (13 pages).
While autonomous vehicles are vital components of intelligent transportation systems, ensuring the trustworthiness of decision-making remains a substantial challenge in realizing autonomous driving. Therefore, we present a novel robust reinforcement learning approach with safety guarantees to attain trustworthy decision-making for autonomous vehicles. The proposed technique ensures decision trustworthiness in terms of policy robustness and collision safety. Specifically, an adversary model is learned online to simulate worst-case uncertainty by approximating the optimal adversarial perturbations on the observed states and environmental dynamics. In addition, an adversarial robust actor-critic algorithm is developed to enable the agent to learn policies robust to perturbations in observations and dynamics. Moreover, we devise a safety mask to guarantee the collision safety of the autonomous driving agent during both training and testing, using an interpretable knowledge model known as the Responsibility-Sensitive Safety model. Finally, the proposed approach is evaluated through both simulations and experiments. The results indicate that the autonomous driving agent can make trustworthy decisions and drastically reduce the number of collisions through robust safety policies.
Keywords: autonomous vehicle, decision-making, reinforcement learning, adversarial attack, safety guarantee
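The safety-mask idea, overriding the learned action whenever the Responsibility-Sensitive Safety condition fails, can be sketched with the standard RSS minimum longitudinal gap. The response time, acceleration bounds, and braking override below are placeholder values for illustration, not the paper's parameters:

```python
def rss_min_gap(v_rear, v_front, rho=1.0, a_accel=3.0, b_rear=4.0, b_front=8.0):
    # RSS minimum safe longitudinal gap: the rear car may accelerate for the
    # response time rho, then brakes gently (b_rear) while the front car
    # brakes as hard as possible (b_front). All units SI (m, m/s, m/s^2).
    v_peak = v_rear + rho * a_accel
    d = (v_rear * rho + 0.5 * a_accel * rho**2
         + v_peak**2 / (2 * b_rear) - v_front**2 / (2 * b_front))
    return max(0.0, d)

def safety_mask(action_accel, gap, v_rear, v_front):
    # Pass the learned action through unchanged when the gap is safe;
    # otherwise override it with hard braking (assumed -b_rear here).
    if gap < rss_min_gap(v_rear, v_front):
        return -4.0
    return action_accel

print(safety_mask(1.0, gap=10.0, v_rear=20.0, v_front=20.0))  # unsafe gap: brakes
print(safety_mask(1.0, gap=80.0, v_rear=20.0, v_front=20.0))  # safe gap: action kept
```

Applying the same mask during both training and testing is what yields the collision-safety guarantee the abstract describes.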
7. Reinforcement Learning-Based Energy Management for Hybrid Power Systems: State-of-the-Art Survey, Review, and Perspectives
Authors: Xiaolin Tang, Jiaxin Chen, Yechen Qin, Teng Liu, Kai Yang, Amir Khajepour, Shen Li. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 1-25 (25 pages).
New energy vehicles play a crucial role in green transportation, and the energy management strategy of hybrid power systems is essential for energy-efficient driving. This paper presents a state-of-the-art survey and review of reinforcement learning-based energy management strategies for hybrid power systems, and envisions the outlook for autonomous intelligent hybrid electric vehicles with reinforcement learning as the foundational technology. First, to provide a macro view of historical development, a brief history of deep learning, reinforcement learning, and deep reinforcement learning is presented as a timeline. Then, a comprehensive survey and review is conducted by collecting papers from mainstream academic databases. Enumerating most of the contributions along three main directions (algorithm innovation, powertrain innovation, and environment innovation) provides an objective review of the research status. Finally, to advance the application of reinforcement learning in autonomous intelligent hybrid electric vehicles, future research plans positioned as "Alpha HEV" are envisioned, integrating Autopilot and energy-saving control.
Keywords: new energy vehicle, hybrid power system, reinforcement learning, energy management strategy
8. Cognitive interference decision method for air defense missile fuze based on reinforcement learning
Authors: Dingkun Huang, Xiaopeng Yan, Jian Dai, Xinwei Wang, Yangtian Liu. Defence Technology (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 393-404 (12 pages).
To solve the problem of the low interference success rate of air defense missile radio fuzes caused by the unified interference form of traditional fuze interference systems, an interference decision method based on the Q-learning algorithm is proposed. First, the distance between the missile and the target is divided into multiple states to enlarge the state space. Second, a multidimensional action space, whose search range changes with the missile-target distance, is used to select parameters and minimize the number of ineffective interference parameters. The interference effect is determined by detecting whether the fuze signal disappears. Finally, a weighted reward function determines the reward value based on the range state, output power, and parameter quantity of the interference form. The effectiveness of the proposed method in selecting the range of action space parameters and in designing the discrimination degree of the reward function is verified through offline experiments involving full-range missile rendezvous, and the optimal interference form for each distance state is obtained. Compared with the single-interference decision method, the proposed method effectively improves the interference success rate.
Keywords: cognitive radio, interference decision, radio fuze, reinforcement learning, interference strategy optimization
9. Recorded recurrent deep reinforcement learning guidance laws for intercepting endoatmospheric maneuvering missiles
Authors: Xiaoqi Qiu, Peng Lai, Changsheng Gao, Wuxing Jing. Defence Technology (SCIE, EI, CAS, CSCD), 2024, Issue 1, pp. 457-470 (14 pages).
This work proposes a recorded recurrent twin delayed deep deterministic (RRTD3) policy gradient algorithm to address the challenge of constructing guidance laws for intercepting endoatmospheric maneuvering missiles under uncertainties and observation noise. The attack-defense engagement scenario is modeled as a partially observable Markov decision process (POMDP). Given the benefits of recurrent neural networks (RNNs) in processing sequence information, an RNN layer is incorporated into the agent's policy network to alleviate the bottleneck traditional deep reinforcement learning methods face when dealing with POMDPs. The measurements from the interceptor's seeker during each guidance cycle are combined into one sequence as the input to the policy network, since the detection frequency of an interceptor is usually higher than its guidance frequency. During training, the hidden states of the RNN layer in the policy network are recorded to overcome the partial observability that this RNN layer introduces inside the agent. The training curves show that the proposed RRTD3 improves data efficiency, training speed, and training stability. The test results confirm the advantages of the RRTD3-based guidance laws over several conventional guidance laws.
Keywords: endoatmospheric interception, missile guidance, reinforcement learning, Markov decision process, recurrent neural networks
10. Unleashing the Power of Multi-Agent Reinforcement Learning for Algorithmic Trading in the Digital Financial Frontier and Enterprise Information Systems
Authors: Saket Sarin, Sunil K. Singh, Sudhakar Kumar, Shivam Goyal, Brij Bhooshan Gupta, Wadee Alhalabi, Varsha Arya. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 3123-3138 (16 pages).
In the rapidly evolving landscape of today's digital economy, Financial Technology (Fintech) emerges as a transformative force, propelled by the dynamic synergy between Artificial Intelligence (AI) and Algorithmic Trading. Our in-depth investigation delves into the intricacies of merging Multi-Agent Reinforcement Learning (MARL) and Explainable AI (XAI) within Fintech, aiming to refine Algorithmic Trading strategies. Through meticulous examination, we uncover the nuanced interactions of AI-driven agents as they collaborate and compete within the financial realm, employing sophisticated deep learning techniques to enhance the clarity and adaptability of trading decisions. These AI-infused Fintech platforms harness collective intelligence to unearth trends, mitigate risks, and provide tailored financial guidance, benefiting individuals and enterprises navigating the digital landscape. Our research holds the potential to revolutionize finance, opening doors to fresh avenues for investment and asset management in the digital age. Additionally, our statistical evaluation yields encouraging results, with metrics such as Accuracy = 0.85, Precision = 0.88, and F1 Score = 0.86, reaffirming the efficacy of our approach within Fintech and emphasizing its reliability and innovative prowess.
Keywords: neurodynamic Fintech, multi-agent reinforcement learning, algorithmic trading, digital financial frontier
11. Combining reinforcement learning with mathematical programming: An approach for optimal design of heat exchanger networks
Authors: Hui Tan, Xiaodong Hong, Zuwei Liao, Jingyuan Sun, Yao Yang, Jingdai Wang, Yongrong Yang. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 5, pp. 63-71 (9 pages).
Heat integration is important for energy saving in the process industry. It is linked to the persistently challenging task of optimal design of heat exchanger networks (HEN). Due to the inherently nonconvex, nonlinear, and combinatorial nature of the HEN problem, it is not easy to find high-quality solutions for large-scale problems. The reinforcement learning (RL) method, which learns strategies through ongoing exploration and exploitation, shows advantages in this area. However, due to the complexity of the HEN design problem, the RL method for HEN should be dedicated and carefully designed. A hybrid strategy combining RL with mathematical programming is proposed to take better advantage of both methods. An insightful state representation of the HEN structure as well as a customized reward function is introduced, and a Q-learning algorithm is applied to update the HEN structure using the ε-greedy strategy. The method obtains better results on three literature cases of different scales.
Keywords: heat exchanger network, reinforcement learning, mathematical programming, process design
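A minimal sketch of the tabular Q-learning with ε-greedy exploration that the abstract applies to HEN structure updates, shown here on a toy episodic chain rather than an actual heat exchanger network (the environment, learning rate, and exploration rate are illustrative assumptions):

```python
import random

random.seed(0)

def epsilon_greedy(Q, s, n_actions, eps):
    # Explore with probability eps, otherwise act greedily w.r.t. Q.
    if random.random() < eps:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[(s, a)])

def q_learning(step_fn, n_states, n_actions, episodes=2000,
               alpha=0.5, gamma=0.9, eps=0.5):
    # Tabular Q-learning with an epsilon-greedy behavior policy.
    Q = {(s, a): 0.0 for s in range(n_states) for a in range(n_actions)}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = epsilon_greedy(Q, s, n_actions, eps)
            s2, r, done = step_fn(s, a)
            best_next = 0.0 if done else max(Q[(s2, b)] for b in range(n_actions))
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

# Toy episodic task: action 1 advances toward a goal that pays reward 1;
# action 0 abandons the episode with no reward.
def step(s, a):
    if a == 0:
        return s, 0.0, True
    return (s + 1, 1.0, True) if s == 2 else (s + 1, 0.0, False)

Q = q_learning(step, n_states=4, n_actions=2)
print(Q[(0, 1)] > Q[(0, 0)])  # the learned policy prefers advancing
```

In the paper's setting, a "state" would encode the current HEN structure and an "action" a structural modification; the same update rule applies unchanged.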
12. Resource Allocation for Cognitive Network Slicing in PD-SCMA System Based on Two-Way Deep Reinforcement Learning
Authors: Zhang Zhenyu, Zhang Yong, Yuan Siyu, Cheng Zhenjie. China Communications (SCIE, CSCD), 2024, Issue 6, pp. 53-68 (16 pages).
In this paper, we propose a two-way deep reinforcement learning (DRL)-based resource allocation algorithm, which solves the problem of resource allocation in a cognitive downlink network operating in underlay mode. Secondary users (SUs) in the cognitive network are multiplexed by a new Power Domain Sparse Code Multiple Access (PD-SCMA) scheme, and the physical resources of the cognitive base station are virtualized into two types of slices: an enhanced mobile broadband (eMBB) slice and an ultra-reliable low latency communication (URLLC) slice. We design a Double Deep Q Network (DDQN) to output the optimal codebook assignment scheme and simultaneously use a Deep Deterministic Policy Gradient (DDPG) network to output the optimal power allocation scheme. The objective is to jointly optimize the spectral efficiency of the system and the Quality of Service (QoS) of the SUs. Simulation results show that the proposed algorithm outperforms the CNDDQN algorithm and the modified JEERA algorithm in terms of spectral efficiency and QoS satisfaction. Additionally, compared with Power Domain Non-orthogonal Multiple Access (PD-NOMA) slices and Sparse Code Multiple Access (SCMA) slices, the PD-SCMA slices can dramatically enhance spectral efficiency and increase the number of accessible users.
Keywords: cognitive radio, deep reinforcement learning, network slicing, power-domain non-orthogonal multiple access, resource allocation
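The DDQN component can be sketched by its defining target computation: the online network selects the next action and the target network evaluates it, which reduces the overestimation bias of vanilla DQN. The batch values below are made up purely for illustration:

```python
import numpy as np

def double_dqn_targets(rewards, dones, q_online_next, q_target_next, gamma=0.99):
    # Double DQN target: the online network picks argmax_a Q_online(s', a),
    # the target network supplies the value of that action.
    best_actions = np.argmax(q_online_next, axis=1)
    next_values = q_target_next[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1.0 - dones) * next_values

# Hypothetical batch of two transitions (second one terminal).
rewards = np.array([1.0, 0.5])
dones = np.array([0.0, 1.0])
q_online_next = np.array([[0.2, 0.9], [0.4, 0.1]])
q_target_next = np.array([[0.3, 0.6], [0.8, 0.2]])
print(double_dqn_targets(rewards, dones, q_online_next, q_target_next))
# row 0: 1.0 + 0.99 * 0.6 = 1.594; row 1 is terminal, so just 0.5
```

In the paper these targets would train the codebook-assignment head, while the continuous power levels go through the DDPG actor instead.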
13. Numerical investigation of geostress influence on the grouting reinforcement effectiveness of tunnel surrounding rock mass in fault fracture zones
Authors: Xiangyu Xu, Zhijun Wu, Lei Weng, Zhaofei Chu, Quansheng Liu, Yuan Zhou. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, Issue 1, pp. 81-101 (21 pages).
Grouting is a widely used approach to reinforce broken surrounding rock mass during the construction of underground tunnels in fault fracture zones, and its reinforcement effectiveness is highly affected by geostress. In this study, a numerical manifold method (NMM)-based simulator has been developed to examine the impact of geostress conditions on grouting reinforcement during tunnel excavation. To develop this simulator, a detection technique for identifying slurry migration channels and an improved fluid-solid coupling framework, which considers the influence of fracture properties and geostress states, are developed and incorporated into a zero-thickness cohesive element (ZE)-based NMM (Co-NMM) for simulating tunnel excavation. Additionally, to simulate the coagulation of injected slurry, a bonding repair algorithm is proposed based on the ZE model. To verify the accuracy of the proposed simulator, a series of simulations of slurry migration in single fractures and fracture networks are numerically reproduced, and the results align well with analytical and laboratory test results. These numerical results also show that neglecting geostress conditions can lead to serious overestimation of the slurry migration range and reinforcement effectiveness. After validation, a series of simulations of tunnel grouting reinforcement and tunnel excavation in fault fracture zones with varying fracture densities under different geostress conditions are conducted, and based on these simulations, the influence of geostress conditions and the optimization of grouting schemes are discussed.
Keywords: numerical manifold method (NMM), grouting reinforcement, geostress condition, fault fracture zone, tunnel excavation
14. Preparation and Reinforcement Adaptability of Jute Fiber Reinforced Magnesium Phosphate Cement Based Composite Materials
Authors: 刘芯州, 郭远臣, WANG Rui, XIANG Kai, WANG Xue, YE Qing. Journal of Wuhan University of Technology (Materials Science) (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 999-1009 (11 pages).
To improve the brittleness of magnesium phosphate cement-based materials (MPC) and to promote their application in structural reinforcement and repair, this study increases the toughness of MPC by adding jute fiber (JF), explores the effects of different fiber contents on the working and mechanical properties of MPC, and prepares jute fiber reinforced magnesium phosphate cement-based materials (JFRMPC) to reinforce damaged beams. The performance of the beams before and after reinforcement is compared, and the strengthening and toughening mechanisms of jute fiber on MPC are explored through microscopic analysis. The experimental results show that as the jute fiber content increases, the fluidity and setting time of MPC decrease continuously. When the jute fiber content is 0.8%, the 28-day compressive strength, flexural strength, and bonding strength of MPC reach their maximum values, which are 18.0%, 20.5%, and 22.6% higher than those of M0, respectively. The beam strengthened with JFRMPC can withstand greater deformation, with a deflection at failure 2.3 times that of the unreinforced beam. The strain of the steel bar is greatly reduced, and the initial crack and failure loads of the reinforced beam are increased by 192.1% and 16.1%, respectively, compared to those of the unreinforced beam. The JF added to the MPC matrix dissipates energy through tensile fracture and debonding pull-out, relieving stress concentration and inhibiting the free development of cracks in the matrix, which enables JFRMPC to exhibit higher strength and better toughness. The JF does not cause the hydration of MPC to generate new compounds but reduces the amount of hydration products generated.
Keywords: magnesium phosphate cement, jute fiber, reinforcement of damaged beam, flexural behavior
15. Reinforcement learning based edge computing in B5G
Authors: Jiachen Yang, Yiwen Sun, Yutian Lei, Zhuo Zhang, Yang Li, Yongjun Bao, Zhihan Lv. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 1, pp. 1-6 (6 pages).
The development of communication technology will promote the application of the Internet of Things, and Beyond 5G will become a new technology promoter as well as one of the important supports for the development of edge computing. This paper proposes a communication task allocation algorithm based on deep reinforcement learning for vehicle-to-pedestrian communication scenarios in edge computing. Through the agent's trial-and-error learning, the optimal spectrum and power can be determined for transmission without global information, so as to balance vehicle-to-pedestrian and vehicle-to-infrastructure communication. The results show that the agent can effectively improve the vehicle-to-infrastructure communication rate while meeting the delay constraints on the vehicle-to-pedestrian link.
Keywords: reinforcement learning, edge computing, Beyond 5G, vehicle-to-pedestrian
16. Constrained Multi-Objective Optimization With Deep Reinforcement Learning Assisted Operator Selection
Authors: Fei Ming, Wenyin Gong, Ling Wang, Yaochu Jin. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 4, pp. 919-931 (13 pages).
Solving constrained multi-objective optimization problems with evolutionary algorithms has attracted considerable attention, and various constrained multi-objective optimization evolutionary algorithms (CMOEAs) have been developed using different algorithmic strategies, evolutionary operators, and constraint-handling techniques. The performance of CMOEAs may depend heavily on the operators used; however, it is usually difficult to select suitable operators for the problem at hand, so improving operator selection is promising and necessary for CMOEAs. This work proposes an online operator selection framework assisted by deep reinforcement learning. The dynamics of the population, including convergence, diversity, and feasibility, are regarded as the state; the candidate operators are considered as actions; and the improvement of the population state is treated as the reward. By using a Q-network to learn a policy that estimates the Q-values of all actions, the proposed approach can adaptively select the operator that maximizes the improvement of the population according to the current state, thereby improving algorithmic performance. The framework is embedded into four popular CMOEAs and assessed on 42 benchmark problems. The experimental results reveal that the proposed deep reinforcement learning-assisted operator selection significantly improves the performance of these CMOEAs and that the resulting algorithms obtain better versatility than nine state-of-the-art CMOEAs.
Keywords: constrained multi-objective optimization, deep Q-learning, deep reinforcement learning (DRL), evolutionary algorithms, evolutionary operator selection
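The state/action/reward mapping described above (population metrics as state, candidate operators as actions, population improvement as reward) can be sketched with a linear Q-function standing in for the paper's Q-network; the feature values, rewards, and hyperparameters are made-up stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

class OperatorSelector:
    # Linear Q(s, a) over population-state features (e.g. convergence,
    # diversity, feasibility); each candidate operator is one action.
    def __init__(self, n_features, n_operators, lr=0.05, eps=0.2):
        self.W = np.zeros((n_operators, n_features))
        self.lr, self.eps = lr, eps

    def select(self, state):
        # Epsilon-greedy over the estimated Q-values.
        if rng.random() < self.eps:
            return int(rng.integers(len(self.W)))
        return int(np.argmax(self.W @ state))

    def update(self, state, op, reward):
        # One-step (bandit-style) TD update toward the observed improvement.
        td = reward - self.W[op] @ state
        self.W[op] += self.lr * td * state

sel = OperatorSelector(n_features=3, n_operators=2)
for _ in range(300):
    state = np.array([1.0, 0.5, 0.8])   # stand-in population metrics
    op = sel.select(state)
    reward = 1.0 if op == 0 else 0.2    # pretend operator 0 helps more here
    sel.update(state, op, reward)
print(np.argmax(sel.W @ np.array([1.0, 0.5, 0.8])))  # greedy choice: operator 0
```

In the full framework the reward would be the measured improvement in convergence/diversity/feasibility after applying the chosen operator to the population.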
17. Distributed Graph Database Load Balancing Method Based on Deep Reinforcement Learning
Authors: Shuming Sha, Naiwang Guo, Wang Luo, Yong Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 5105-5124 (20 pages)
This paper focuses on the scheduling problem of workflow tasks that exhibit interdependencies. Unlike independent batch tasks, workflows typically consist of multiple subtasks with intrinsic correlations and dependencies. This necessitates distributing the various computational tasks to appropriate computing node resources in accordance with task dependencies to ensure the smooth completion of the entire workflow. Workflow scheduling must consider an array of factors, including task dependencies, availability of computational resources, and the schedulability of tasks. Therefore, this paper delves into the distributed graph database workflow task scheduling problem and proposes a workflow scheduling methodology based on deep reinforcement learning (DRL). The method optimizes the maximum completion time (makespan) and response time of workflow tasks, aiming to enhance the responsiveness of workflow tasks while ensuring the minimization of the makespan. The experimental results indicate that the Q-learning Deep Reinforcement Learning (Q-DRL) algorithm markedly diminishes the makespan and refines the average response time within distributed graph database environments. In quantifying makespan, Q-DRL achieves mean reductions of 12.4% and 11.9% over the established First-fit and Random scheduling strategies, respectively. Additionally, Q-DRL surpasses the performance of both the DRL-Cloud and Improved Deep Q-learning Network (IDQN) algorithms, with improvements of 4.4% and 2.6%, respectively. With reference to average response time, Q-DRL exhibits significantly enhanced performance in the scheduling of workflow tasks, decreasing the average by 2.27% and 4.71% compared to IDQN and DRL-Cloud, respectively. The Q-DRL algorithm also demonstrates a notable increase in the efficiency of system resource utilization, reducing the average idle rate by 5.02% and 9.30% in comparison to IDQN and DRL-Cloud, respectively. These findings support the assertion that Q-DRL not only upholds a lower average idle rate but also effectively curtails the average response time, thereby substantially improving processing efficiency and optimizing resource utilization within distributed graph database systems.
Keywords: reinforcement learning; workflow; task scheduling; load balancing
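The dependency-aware makespan objective that such a scheduler optimizes can be sketched as follows. The five-task workflow, the two-node placement, and all durations are hypothetical; the paper's actual environment and Q-DRL agent are not shown.

```python
from collections import defaultdict

# Hypothetical five-task workflow: task -> (duration, prerequisites).
tasks = {
    "A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]),
    "D": (1, ["B", "C"]), "E": (2, ["D"]),
}
# A candidate placement of tasks onto two compute nodes; each node runs
# its assigned tasks serially.
schedule = {"A": 0, "B": 0, "C": 1, "D": 0, "E": 1}

def makespan(tasks, schedule, order):
    """Earliest-finish-time makespan; 'order' must be a topological order."""
    node_free = defaultdict(float)  # time at which each node becomes idle
    finish = {}                     # finish time of each completed task
    for t in order:
        dur, deps = tasks[t]
        # a task starts once all prerequisites are done AND its node is idle
        start = max([finish[d] for d in deps] + [node_free[schedule[t]]])
        finish[t] = start + dur
        node_free[schedule[t]] = finish[t]
    return max(finish.values())

print(makespan(tasks, schedule, ["A", "B", "C", "D", "E"]))  # prints 10.0
```

A DRL scheduler in this setting would choose the placement (and ordering) action by action, with the negative makespan and response times feeding the reward.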
18. Numerical Simulation of Surrounding Rock Deformation and Grouting Reinforcement of Cross-Fault Tunnel under Different Excavation Methods
Authors: Duan Zhu, Zhende Zhu, Cong Zhang, Lun Dai, Baotian Wang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 3, pp. 2445-2470 (26 pages)
Tunnel construction is susceptible to accidents such as loosening, deformation, collapse, and water inrush, especially under complex geological conditions such as dense fault areas. These accidents can cause instability and damage to the tunnel. As a result, it is essential to conduct research on tunnel construction and grouting reinforcement technology in fault fracture zones to address these issues and ensure the safety of tunnel excavation projects. This study used the Xianglushan cross-fault tunnel to conduct a comprehensive analysis of the construction, support, and reinforcement of a tunnel crossing a fault fracture zone using the three-dimensional finite element numerical method. The study yielded the following conclusions: The excavation conditions of the cross-fault tunnel array were analyzed to determine the optimal construction method for excavation while controlling deformation and stress in the surrounding rock; the middle partition method (CD method) was found to be the most suitable. Additionally, the effects of advanced reinforcement grouting on the cross-fault fracture zone tunnel were studied, and the optimal combination of grouting reinforcement range (140°) and grouting thickness (1 m) was determined. The stress and deformation data obtained from on-site monitoring of the surrounding rock were slightly lower than the numerical simulation results; however, the trends of both data sets were consistent. These research findings provide technical analysis and data support for the construction and design of cross-fault tunnels.
Keywords: cross-fault tunnel; finite element analysis; excavation methods; surrounding rock deformation; grouting reinforcement
19. Reinforcement learning based adaptive control for uncertain mechanical systems with asymptotic tracking
Authors: Xiang-long Liang, Zhi-kai Yao, Yao-wen Ge, Jian-yong Yao. Defence Technology (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 19-28 (10 pages)
This paper mainly focuses on the development of a learning-based controller for a class of uncertain mechanical systems modeled by the Euler-Lagrange formulation. The considered system can depict the behavior of a large class of engineering systems, such as vehicular systems, robot manipulators, and satellites. All these systems are often characterized by highly nonlinear dynamics, heavy modeling uncertainties, and unknown perturbations; accurate-model-based nonlinear control approaches therefore become unavailable. Motivated by this challenge, a reinforcement learning (RL) adaptive control methodology based on the actor-critic framework is investigated to compensate for the uncertain mechanical dynamics. The approximation inaccuracies caused by RL and the exogenous unknown disturbances are circumvented via a continuous robust integral of the sign of the error (RISE) control approach. Different from a classical RISE control law, a tanh(·) function is utilized instead of a sign(·) function to acquire a smoother control signal. The developed controller requires very little prior knowledge of the dynamic model, is robust to unknown dynamics and exogenous disturbances, and can achieve asymptotic output tracking. Eventually, co-simulations through ADAMS and MATLAB/Simulink on a three-degrees-of-freedom (3-DOF) manipulator and experiments on a real-time electromechanical servo system are performed to verify the performance of the proposed approach.
Keywords: adaptive control; reinforcement learning; uncertain mechanical systems; asymptotic tracking
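The tanh(·)-for-sign(·) smoothing mentioned in the abstract can be illustrated on a deliberately simplified plant. This is not the paper's actor-critic plus RISE law for Euler-Lagrange systems: the first-order model, the gains, and the disturbance are all assumptions chosen only to show how the smooth robust term rejects a bounded disturbance without the chattering a hard sign(·) term would cause.

```python
import math

# Toy plant: xdot = u + d, with an unknown but bounded disturbance d.
# Control: proportional term plus a robust term; tanh(e / eps) replaces
# sign(e), giving a continuous control signal (no chattering).
k, beta, eps, dt = 4.0, 1.5, 0.05, 1e-3   # illustrative gains, |d| <= 1 < beta
x, x_ref = 2.0, 0.0                        # start far from the reference
for i in range(20000):                     # 20 s of simulated time (Euler steps)
    t = i * dt
    d = math.sin(5 * t)                    # unknown bounded disturbance
    e = x - x_ref
    u = -k * e - beta * math.tanh(e / eps) # smooth robust control law
    x += (u + d) * dt

assert abs(x - x_ref) < 0.1                # error driven into a small band
```

Because beta exceeds the disturbance bound, the error is pushed into a thin boundary layer around zero and stays there; shrinking eps makes the law approach the discontinuous sign(·) version.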
20. Regional Multi-Agent Cooperative Reinforcement Learning for City-Level Traffic Grid Signal
Authors: Yisha Li, Ya Zhang, Xinde Li, Changyin Sun. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 9, pp. 1987-1998 (12 pages)
This article studies the effective traffic signal control problem of multiple intersections in a city-level traffic system. A novel regional multi-agent cooperative reinforcement learning algorithm called RegionSTLight is proposed to improve traffic efficiency. First, a regional multi-agent Q-learning framework is proposed, which can equivalently decompose the global Q-value of the traffic system into the local values of several regions. Based on this framework and the idea of human-machine cooperation, a dynamic zoning method is designed to divide the traffic network into several strongly coupled regions according to real-time traffic flow densities. To achieve better cooperation inside each region, a lightweight spatio-temporal fusion feature extraction network is designed. Experiments in synthetic, real-world, and city-level scenarios show that the proposed RegionSTLight converges more quickly, is more stable, and obtains better asymptotic performance than state-of-the-art models.
Keywords: human-machine cooperation; mixed domain attention mechanism; multi-agent reinforcement learning; spatio-temporal feature; traffic signal control
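The zoning and value-decomposition ideas in this abstract can be sketched in miniature. The 1-D row of intersections, the density values, the threshold rule, and the stand-in local values are all hypothetical; the paper's grid topology, attention-based feature network, and training procedure are omitted.

```python
# Toy 1-D road of 8 intersections with real-time flow densities.
densities = [0.9, 0.8, 0.2, 0.1, 0.7, 0.75, 0.3, 0.85]

def dynamic_zones(densities, thresh=0.5):
    """Fuse adjacent intersections into one region when both are congested
    (an assumed stand-in for the paper's dynamic zoning method)."""
    zones, current = [], [0]
    for i in range(1, len(densities)):
        if densities[i] >= thresh and densities[i - 1] >= thresh:
            current.append(i)          # strongly coupled: extend the region
        else:
            zones.append(current)      # coupling broken: close the region
            current = [i]
    zones.append(current)
    return zones

zones = dynamic_zones(densities)
print(zones)  # [[0, 1], [2], [3], [4, 5], [6], [7]]

# Value decomposition: the global Q equals the sum of per-region local Qs,
# so each region can be optimized by its own cooperative agent. The local
# values here are placeholders, not learned quantities.
local_q = {tuple(z): sum(densities[i] for i in z) for z in zones}
global_q = sum(local_q.values())
assert abs(global_q - sum(densities)) < 1e-9  # decomposition is exact here
```

Re-running the zoning as densities change is what makes the regions "dynamic": congested corridors merge into one cooperative region, while lightly loaded intersections act alone.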