Journal Articles
87,411 articles found
1. Unraveling the significance of cobalt on transformation kinetics, crystallography and impact toughness in high-strength steels
Authors: Yishuang Yu, Jingxiao Zhao, Xuelin Wang, Hui Guo, Zhenjia Xie, Chengjia Shang. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CAS), 2025, No. 2, pp. 380-390 (11 pages).
This work reveals the significant effects of cobalt (Co) on the microstructure and impact toughness of as-quenched high-strength steels through experimental characterization and thermo-kinetic analysis. The results show that the Co-bearing steel exhibits finer blocks and a lower ductile-brittle transition temperature than the Co-free steel. Moreover, the Co-bearing steel shows higher transformation rates at the intermediate stage, with the bainite volume fraction ranging from around 0.1 to 0.6. The improved impact toughness of the Co-bearing steel results from the higher density of block boundaries dominated by the V1/V2 variant pair. Furthermore, the addition of Co induces a larger transformation driving force and a lower bainite start temperature (BS), thereby contributing to the refinement of blocks and the increase of the V1/V2 variant pair. These findings are instructive for the composition and microstructure design and the property optimization of high-strength steels.
Keywords: high-strength steel; cobalt; transformation kinetics; crystallography; impact toughness
2. Enhancement of bending toughness for Fe-based amorphous nanocrystalline alloy with deep cryogenic-cycling treatment
Authors: Yi-ran Zhang, Dong Yang, Qing-chun Xiang, Hong-yu Liu, Jing Pang, Ying-lei Ren, Xiao-yu Li, Ke-qiang Qiu. China Foundry, 2025, No. 1, pp. 99-107 (9 pages).
The effects of deep cryogenic-cycling treatment (DCT) on the mechanical properties, soft magnetic properties, and atomic-scale structure of the Fe73.5Si13.5B9Nb3Cu1 amorphous nanocrystalline alloy were investigated. The DCT samples were obtained by subjecting the as-annealed samples to a thermal cycling process between the temperature of the supercooled liquid zone and the temperature of liquid nitrogen. Through flat-plate bending tests, hardness measurements, and nanoindentation experiments, it was found that the bending toughness of the DCT samples is improved and the soft magnetic properties are also slightly enhanced. These improvements are attributed to the rejuvenation behavior of the DCT samples, which exhibit a higher relaxation enthalpy. Therefore, DCT is an effective method to enhance the bending toughness of Fe-based amorphous nanocrystalline alloys without degrading their soft magnetic properties.
Keywords: deep cryogenic-cycling treatment; Fe-based amorphous nanocrystalline alloy; bending toughness; rejuvenation
3. Performance Evaluation of Damaged T-Beam Bridges with External Prestressing Reinforcement Based on Natural Frequencies
Authors: Menghui Hao, Shanshan Zhou, Yongchao Han, Zhanwei Zhu, Qiang Yang, Panxu Sun, Jiajun Fan. Structural Durability & Health Monitoring, 2025, No. 2, pp. 399-415 (17 pages).
As an evaluation index, the natural frequency has the advantages of easy acquisition and quantitative evaluation. In this paper, the natural frequency is used to evaluate the performance of bridges reinforced with external cables. Numerical examples show that, compared with the natural frequencies of first-order modes, the natural frequencies of higher-order modes are more sensitive and can reflect both the damage state and the external cable reinforcement effect of T-beam bridges. For damaged bridges, as the damage to the T-beam increases, the natural frequency of the bridge gradually decreases. When the degree of local damage to the beam reaches 60%, the amplitude of the natural frequency change exceeds 10% for the first time. The natural frequencies of the first-order and higher-order vibration modes can be selected as indexes for different damage degrees of T-beam bridges. For damaged bridges reinforced with external cables, the traditional natural frequency of the first-order vibration mode cannot be used as an index, because it is insensitive to changes in the prestress of the external cable. Some natural frequencies of higher-order vibration modes can be selected as indexes; they reflect the reinforcement effect of externally prestressed damaged T-beam bridges, and their values increase with the external prestressed cable force.
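The screening step described above (keep only the modes whose frequency shift is large enough to signal damage or reinforcement) reduces to a simple relative-change computation. The sketch below is illustrative only; the mode frequencies, the 10% threshold default, and the function names are invented for this example, not taken from the paper.

```python
def frequency_change_percent(f_ref, f_test):
    """Relative change of each modal frequency, in percent of the reference."""
    return [100.0 * (r - t) / r for r, t in zip(f_ref, f_test)]

def sensitive_modes(f_ref, f_test, threshold=10.0):
    """1-based mode numbers whose frequency drop exceeds the threshold."""
    changes = frequency_change_percent(f_ref, f_test)
    return [i + 1 for i, c in enumerate(changes) if c > threshold]
```

For example, with reference frequencies [4.0, 16.0, 36.0] Hz and measured frequencies [3.9, 14.0, 30.0] Hz, only the second and third modes drop by more than 10%, so the higher-order modes would be the usable indexes.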
Keywords: performance evaluation; natural frequency; T-beam bridge; damage; external cable reinforcement
4. Optimized reinforcement of granite residual soil using a cement and alkaline solution: A coupling effect
Authors: Bingxiang Yuan, Jingkang Liang, Baifa Zhang, Weijie Chen, Xianlun Huang, Qingyu Huang, Yun Li, Peng Yuan. Journal of Rock Mechanics and Geotechnical Engineering, 2025, No. 1, pp. 509-523 (15 pages).
Granite residual soil (GRS) is a type of weathered soil that can decompose upon contact with water, potentially causing geological hazards. In this study, cement, an alkaline solution, and glass fiber were used to reinforce GRS. The effects of the cement content and the SiO2/Na2O ratio of the alkaline solution on the static and dynamic strengths of GRS were discussed. Microscopically, the reinforcement mechanism and coupling effect were examined using X-ray diffraction (XRD), micro-computed tomography (micro-CT), and scanning electron microscopy (SEM). The results indicated that the addition of 2% cement and an alkaline solution with an SiO2/Na2O ratio of 0.5 led to the densest matrix, the lowest porosity, and the highest static compressive strength, which was 4994 kPa, with a dynamic impact resistance of 75.4 kN after adding glass fiber. The compressive strength and dynamic impact resistance resulted from the coupling effect of cement hydration, the pozzolanic reaction of clay minerals in the GRS, and the alkali activation of clay minerals. Excessive cement addition or an excessively high SiO2/Na2O ratio in the alkaline solution can have negative effects, such as the destruction of C-(A)-S-H gels by the alkaline solution and the hindering of N-A-S-H gel production. This can damage the matrix of reinforced GRS, leading to a decrease in both static and dynamic strengths. This study suggests that further research is required to gain a more precise understanding of the effects of this mixture in terms of reducing the carbon footprint and optimizing its properties. The findings indicate that cement and alkaline solution are appropriate for GRS and that the reinforced GRS can be used for high-strength foundation and embankment construction. The study also provides an analysis of strategies for mitigating and managing GRS slope failures, as well as enhancing roadbed performance.
Keywords: granite residual soil (GRS); reinforcement; coupling effect; alkali activation; mechanical properties
5. Sharp Isolated Toughness Bound for Fractional (k,m)-Deleted Graphs
Authors: Wei Gao, Wei-fan Wang, Yao-jun Chen. Acta Mathematicae Applicatae Sinica, 2025, No. 1, pp. 252-269 (18 pages).
A graph G is a fractional (k,m)-deleted graph if, after removing any m edges from G, the resulting subgraph still admits a fractional k-factor. Let k ≥ 2 and m ≥ 1 be integers, and denote [2m/k]^* = [2m/k] if 2m/k is not an integer, and [2m/k]^* = [2m/k] - 1 if 2m/k is an integer. In this paper, we prove that G is a fractional (k,m)-deleted graph if δ(G) ≥ k + m and the isolated toughness satisfies I(G) > 3 - 1/m when k = 2 and m ≥ 3, and I(G) > k + [2m/k]^* / (m + 1 - [2m/k]^*) otherwise. Furthermore, we show that the isolated toughness bound is tight.
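Reading the flattened fraction in the abstract as the piecewise bound below (a best-effort reconstruction of the garbled display, not a verified transcription of the paper), the condition can be typeset as:

```latex
I(G) >
\begin{cases}
3 - \dfrac{1}{m}, & \text{if } k = 2 \text{ and } m \ge 3,\\[4pt]
k + \dfrac{[2m/k]^{*}}{\,m + 1 - [2m/k]^{*}\,}, & \text{otherwise.}
\end{cases}
```

Under this reading, for k = 2 and m = 2, 2m/k = 2 is an integer, so [2m/k]^* = 2 - 1 = 1 and the bound is I(G) > 2 + 1/(2 + 1 - 1) = 2.5. For k = 2 and m ≥ 3 the general branch would give I(G) > 3 or more, while the dedicated branch 3 - 1/m < 3 is strictly sharper, which would explain why that case is stated separately.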
Keywords: graph; isolated toughness; fractional k-factor; fractional (k,m)-deleted graph
6. Combining deep reinforcement learning with heuristics to solve the traveling salesman problem
Authors: Li Hong, Yu Liu, Mengqiao Xu, Wenhui Deng. Chinese Physics B, 2025, No. 1, pp. 96-106 (11 pages).
Recent studies employing deep learning to solve the traveling salesman problem (TSP) have mainly focused on learning construction heuristics. Such methods can improve TSP solutions but still depend on additional programs. Methods that instead learn improvement heuristics, which iteratively refine solutions, remain insufficient. Traditional improvement heuristics are guided by a manually designed search strategy and may achieve only limited improvements. This paper proposes a novel framework for learning improvement heuristics, which automatically discovers better improvement policies to iteratively solve the TSP. The framework first designs a new architecture based on a transformer model to parameterize the policy network, introducing an action-dropout layer to prevent action selection from overfitting. It then proposes a deep reinforcement learning approach integrating a simulated annealing mechanism (named RL-SA) to learn the pairwise selection policy, aiming to improve the performance of the 2-opt algorithm. RL-SA leverages the whale optimization algorithm to generate initial solutions for better sampling efficiency and uses a Gaussian perturbation strategy to tackle the sparse reward problem of reinforcement learning. The experimental results show that the proposed approach is significantly superior to state-of-the-art learning-based methods and further reduces the gap between learning-based methods and highly optimized solvers on the benchmark datasets. Moreover, the pre-trained model M can be applied to guide the SA algorithm (named M-SA), which performs better than existing deep models on small-, medium-, and large-scale TSPLIB datasets. Additionally, M-SA achieves excellent generalization performance on a real-world dataset of global liner shipping routes, with optimization percentages in distance reduction ranging from 3.52% to 17.99%.
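The baseline the paper builds on, 2-opt moves accepted under a simulated-annealing rule, can be sketched in a few lines. This is a generic, hand-written illustration of that baseline, not the authors' RL-SA code; the cooling schedule, iteration budget, and acceptance rule below are invented defaults.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour under a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_sa(dist, n_iters=20000, t0=1.0, cooling=0.999, seed=0):
    """2-opt improvement search with a simulated-annealing acceptance rule."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    t = t0
    for _ in range(n_iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt: reverse one segment
        delta = tour_length(cand, dist) - cur_len
        # Always accept improvements; accept worsening moves with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            tour, cur_len = cand, cur_len + delta
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= cooling
    return best, best_len
```

On a small random Euclidean instance this reliably shortens the initial identity tour; in the paper's framework, the learned policy replaces the random choice of which segment to reverse.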
Keywords: traveling salesman problem; deep reinforcement learning; simulated annealing algorithm; transformer model; whale optimization algorithm
7. An extended discontinuous deformation analysis for simulation of grouting reinforcement in a water-rich fractured rock tunnel
Authors: Jingyao Gao, Siyu Peng, Guangqi Chen, Hongyun Fan. Journal of Rock Mechanics and Geotechnical Engineering, 2025, No. 1, pp. 168-186 (19 pages).
Grouting has been the most effective approach to mitigating water inrush disasters in underground engineering due to its ability to plug groundwater and enhance rock strength. Nevertheless, there is a lack of potent numerical tools for assessing grouting effectiveness in water-rich fractured strata. In this study, the hydro-mechanical coupled discontinuous deformation analysis (HM-DDA) is extended for the first time to simulate the grouting process in a water-rich discrete fracture network (DFN), including slurry migration, fracture dilation, water plugging in a seepage field, and joint reinforcement after coagulation. To validate the capabilities of the developed method, several numerical examples are conducted incorporating a Newtonian fluid and a Bingham slurry. The simulation results closely align with the analytical solutions. Additionally, a set of compression tests is conducted on fresh and grouted rock specimens to verify the reinforcement method and calibrate rational properties of the reinforced joints. An engineering-scale model based on a real water inrush case of the Yonglian tunnel in a water-rich fractured zone has been established. The model demonstrates the effectiveness of grouting reinforcement in mitigating water inrush disasters. The results indicate that increased grouting pressure greatly improves the regulation of water outflow from the tunnel face and the prevention of rock detachment at the face after excavation.
Keywords: discontinuous deformation analysis (DDA); water-rich fractured rock tunnel; grouting reinforcement; water inrush disaster
8. Deep reinforcement learning based integrated evasion and impact hierarchical intelligent policy of exo-atmospheric vehicles
Authors: Leliang Ren, Weilin Guo, Yong Xian, Zhenyu Liu, Daqiao Zhang, Shaopeng Li. Chinese Journal of Aeronautics, 2025, No. 1, pp. 409-426 (18 pages).
Exo-atmospheric vehicles are constrained by limited maneuverability, which leads to a contradiction between evasive maneuvering and precision strike. To address the problem of Integrated Evasion and Impact (IEI) decision-making under multi-constraint conditions, a hierarchical intelligent decision-making method based on Deep Reinforcement Learning (DRL) is proposed. First, an intelligent decision-making framework of "DRL evasion decision" + "impact prediction guidance decision" is established: it takes the impact-point deviation correction ability as the constraint and the maximum miss distance as the objective, and effectively solves the poor decision-making performance caused by the large IEI decision space. Second, to solve the sparse reward problem faced by evasion decision-making, a hierarchical decision-making method consisting of a maneuver timing decision and a maneuver duration decision is proposed, and the corresponding Markov Decision Process (MDP) is designed. A detailed simulation experiment is designed to analyze the advantages and computational complexity of the proposed method. Simulation results show that the proposed model has good performance and low computational resource requirements. The minimum miss distance is 21.3 m under the condition of guaranteeing impact-point accuracy, and the single decision-making time is 4.086 ms on an STM32F407 single-chip microcomputer, which demonstrates engineering application value.
Keywords: exo-atmospheric vehicle; integrated evasion and impact; deep reinforcement learning; hierarchical intelligent policy; single-chip microcomputer; miss distance
9. Trading in Fast-Changing Markets with Meta-Reinforcement Learning
Authors: Yutong Tian, Minghan Gao, Qiang Gao, Xiao-Hong Peng. Intelligent Automation & Soft Computing, 2024, No. 2, pp. 175-188 (14 pages).
How to find an effective trading policy is still an open question, mainly due to the nonlinear and non-stationary dynamics of financial markets. Deep reinforcement learning, which has recently been used to develop trading strategies by automatically extracting complex features from large amounts of data, struggles to deal with fast-changing markets due to sample inefficiency. This paper applies meta-reinforcement learning, for the first time, to tackle the trading challenges faced by conventional reinforcement learning (RL) approaches in non-stationary markets. In our work, the historical trading data is divided into multiple task datasets, within each of which the market condition is relatively stationary. Then a model-agnostic meta-learning (MAML)-based trading method involving a meta-learner and a normal learner is proposed. A trading policy is learned by the meta-learner across multiple task datasets, and it is then fine-tuned by the normal learner on a small amount of data from a new market task before trading in it. To improve the adaptability of the MAML-based method, an ordered multiple-step updating mechanism is also proposed to explore the changing dynamics within a task market. The simulation results demonstrate that the proposed MAML-based trading methods can increase the annualized return rate by approximately 180%, 200%, and 160%, increase the Sharpe ratio by 180%, 90%, and 170%, and decrease the maximum drawdown by 30%, 20%, and 40%, compared to the traditional RL approach in three stock index futures markets, respectively.
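The meta-learner/normal-learner split can be illustrated with a tiny first-order (Reptile-style) meta-learning loop on synthetic linear "tasks". Everything here is invented for illustration: the linear model, the synthetic task data, and the step sizes. It is not the paper's trading agent, which uses full MAML with an ordered multiple-step update.

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def meta_train(tasks, inner_lr=0.05, outer_lr=0.1, steps=200):
    """Meta-learner: adapt on each task, then nudge the shared weights
    toward the adapted ones (first-order, Reptile-style outer update)."""
    w = np.zeros(tasks[0][0].shape[1])
    for _ in range(steps):
        for X, y in tasks:
            w_task = w - inner_lr * mse_grad(w, X, y)  # inner (task) adaptation
            w = w + outer_lr * (w_task - w)            # outer (meta) update
    return w

def fine_tune(w, X, y, lr=0.05, steps=50):
    """Normal learner: adapt the meta-weights on a small new-task sample."""
    for _ in range(steps):
        w = w - lr * mse_grad(w, X, y)
    return w
```

Meta-training over tasks with slopes 1.0 and 1.2 leaves the weights near 1.1, so fine-tuning on a few samples from a new slope-1.5 task converges quickly; this mirrors the paper's idea of a shared policy that adapts fast to a new market regime.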
Keywords: algorithmic trading; reinforcement learning; fast-changing market; meta-reinforcement learning
10. UAV-Assisted Dynamic Avatar Task Migration for Vehicular Metaverse Services: A Multi-Agent Deep Reinforcement Learning Approach (cited 1 time)
Authors: Jiawen Kang, Junlong Chen, Minrui Xu, Zehui Xiong, Yutao Jiao, Luchao Han, Dusit Niyato, Yongju Tong, Shengli Xie. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 2, pp. 430-445 (16 pages).
Avatars, as promising digital representations and service assistants of users in Metaverses, can enable drivers and passengers to immerse themselves in the 3D virtual services and spaces of UAV-assisted vehicular Metaverses. However, avatar tasks include a multitude of human-to-avatar and avatar-to-avatar interactive applications, e.g., augmented reality navigation, which consume intensive computing resources. It is inefficient and impractical for vehicles to process avatar tasks locally. Fortunately, migrating avatar tasks to the nearest roadside units (RSUs) or unmanned aerial vehicles (UAVs) for execution is a promising solution to decrease computation overhead and reduce task processing latency, while the high mobility of vehicles makes it challenging for them to independently make avatar migration decisions based on current and future vehicle status. To address these challenges, this paper proposes a novel avatar task migration system based on multi-agent deep reinforcement learning (MADRL) to execute immersive vehicular avatar tasks dynamically. Specifically, we first formulate the problem of avatar task migration from vehicles to RSUs/UAVs as a partially observable Markov decision process that can be solved by MADRL algorithms. We then adopt multi-agent proximal policy optimization (MAPPO) as the MADRL algorithm for the avatar task migration problem. To overcome the slow convergence resulting from the curse of dimensionality and the non-stationarity caused by shared parameters in MAPPO, we further propose a transformer-based MAPPO approach that uses sequential decision-making models for the efficient representation of relationships among agents. Finally, to motivate terrestrial or non-terrestrial edge servers (e.g., RSUs or UAVs) to share computation resources and to ensure traceability of the sharing records, we apply smart contracts and blockchain technologies to achieve secure sharing management. Numerical results demonstrate that the proposed approach outperforms the MAPPO approach by around 2% and reduces the latency of avatar task execution in UAV-assisted vehicular Metaverses by approximately 20%.
Keywords: avatar; blockchain; Metaverses; multi-agent deep reinforcement learning; transformer; UAVs
11. Quafu-RL: The cloud quantum computers based quantum reinforcement learning (cited 1 time)
Authors: 靳羽欣, 许宏泽, 王正安, 庄伟峰, 黄凯旋, 时运豪, 马卫国, 李天铭, 陈驰通, 许凯, 冯玉龙, 刘培, 陈墨, 李尚书, 杨智鹏, 钱辰, 马运恒, 肖骁, 钱鹏, 顾炎武, 柴绪丹, 普亚南, 张翼鹏, 魏世杰, 曾进峰, 李行, 龙桂鲁, 金贻荣, 于海峰, 范桁, 刘东, 胡孟军. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 5, pp. 29-34 (6 pages).
With the rapid advancement of quantum computing, hybrid quantum-classical machine learning has shown numerous potential applications at the current stage, with expectations of being achievable in the noisy intermediate-scale quantum (NISQ) era. Quantum reinforcement learning, as an indispensable line of study, has recently demonstrated its ability to solve standard benchmark environments with formally provable theoretical advantages over classical counterparts. However, despite the progress of quantum processors and the emergence of quantum computing clouds, implementing quantum reinforcement learning algorithms utilizing parameterized quantum circuits (PQCs) on NISQ devices remains infrequent. In this work, we take the first step towards executing benchmark quantum reinforcement problems on real devices equipped with at most 136 qubits on the BAQIS Quafu quantum computing cloud. The experimental results demonstrate that the policy agents can successfully accomplish objectives under modified conditions in both the training and inference phases. Moreover, we design hardware-efficient PQC architectures in the quantum model using a multi-objective evolutionary algorithm and develop a learning algorithm that is adaptable to quantum devices. We hope that Quafu-RL can serve as a guiding example of how to realize machine learning tasks by taking advantage of quantum computers on a quantum cloud platform.
Keywords: quantum cloud platform; quantum reinforcement learning; evolutionary quantum architecture search
12. Cognitive interference decision method for air defense missile fuze based on reinforcement learning (cited 1 time)
Authors: Dingkun Huang, Xiaopeng Yan, Jian Dai, Xinwei Wang, Yangtian Liu. Defence Technology (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 393-404 (12 pages).
To solve the problem of the low interference success rate of air defense missile radio fuzes caused by the unified interference form of traditional fuze interference systems, an interference decision method based on the Q-learning algorithm is proposed. First, the distance between the missile and the target is divided into multiple states to enlarge the state space. Second, a multidimensional motion space, whose search range changes with the projectile's distance, is utilized to select parameters and minimize the number of ineffective interference parameters. The interference effect is determined by detecting whether the fuze signal disappears. Finally, a weighted reward function is used to determine the reward value based on the range state, output power, and parameter quantity of the interference form. The effectiveness of the proposed method in selecting the range of motion-space parameters and in designing the discrimination degree of the reward function has been verified through offline experiments involving full-range missile rendezvous, and the optimal interference form for each distance state has been obtained. Compared with a single-interference decision method, the proposed method can effectively improve the success rate of interference.
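The decision loop described above maps naturally onto tabular Q-learning over discretized range states and candidate interference forms. The sketch below is a generic illustration under invented assumptions (a handful of range states swept in order, and a toy reward that marks one interference form as effective per state); it is not the paper's fuze system or its weighted reward.

```python
import random

def q_learning(n_states, n_actions, reward_fn, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning: states are discretized missile-target ranges,
    actions are candidate interference forms, reward_fn scores their effect."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        for s in range(n_states):  # each episode sweeps through the range states
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            r = reward_fn(s, a)
            nxt = 0.0 if s == n_states - 1 else max(Q[s + 1])
            Q[s][a] += alpha * (r + gamma * nxt - Q[s][a])
    return Q

def best_forms(Q):
    """Learned optimal interference form (action index) for each range state."""
    return [max(range(len(row)), key=lambda x: row[x]) for row in Q]
```

With a toy reward that pays 1 only for the effective form in each state, the greedy policy recovers the per-state optimum, mirroring the paper's "optimal interference form for each distance state".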
Keywords: cognitive radio; interference decision; radio fuze; reinforcement learning; interference strategy optimization
13. Recent Progress in Reinforcement Learning and Adaptive Dynamic Programming for Advanced Control Applications (cited 4 times)
Authors: Ding Wang, Ning Gao, Derong Liu, Jinna Li, Frank L. Lewis. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 1, pp. 18-36 (19 pages).
Reinforcement learning (RL) has roots in dynamic programming, and it is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are displayed, where the main results for discrete-time systems and continuous-time systems are surveyed, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, where event-based design, robust stabilization, and game design are reviewed. Moreover, the extensions of ADP for addressing control problems under complex environments attract enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they promote ADP formulation significantly. Finally, several typical control applications with respect to RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey on ADP and RL for advanced control applications demonstrates their remarkable potential within the artificial intelligence era. In addition, it also plays a vital role in promoting environmental protection and industrial intelligence.
Keywords: adaptive dynamic programming (ADP); advanced control; complex environment; data-driven control; event-triggered design; intelligent control; neural networks; nonlinear systems; optimal control; reinforcement learning (RL)
14. Bridge Bidding via Deep Reinforcement Learning and Belief Monte Carlo Search
Authors: Zizhang Qiu, Shouguang Wang, Dan You, MengChu Zhou. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 10, pp. 2111-2122 (12 pages).
Contract Bridge, a four-player imperfect-information game, comprises two phases: bidding and playing. While computer programs excel at playing, bidding presents a challenge due to the need to exchange information with one's partner while opponents interfere with that communication. In this work, we introduce a Bridge bidding agent that combines supervised learning, deep reinforcement learning via self-play, and a test-time search approach. Our experiments demonstrate that our agent outperforms WBridge5, a highly regarded computer Bridge program that has won multiple world championships, by 0.98 IMPs (international match points) per deal over 10,000 deals, with a much more cost-effective approach. This performance significantly surpasses the previous state of the art (0.85 IMPs per deal); note that 0.1 IMPs per deal is considered a significant improvement in Bridge bidding.
Keywords: Contract Bridge; reinforcement learning; search
15. Distributed Graph Database Load Balancing Method Based on Deep Reinforcement Learning
Authors: Shuming Sha, Naiwang Guo, Wang Luo, Yong Zhang. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 5105-5124 (20 pages).
This paper focuses on the scheduling problem of workflow tasks that exhibit interdependencies. Unlike independent batch tasks, workflows typically consist of multiple subtasks with intrinsic correlations and dependencies. This necessitates distributing the various computational tasks to appropriate computing-node resources in accordance with the task dependencies, to ensure the smooth completion of the entire workflow. Workflow scheduling must consider an array of factors, including task dependencies, the availability of computational resources, and the schedulability of tasks. Therefore, this paper delves into the distributed graph database workflow task scheduling problem and proposes a workflow scheduling methodology based on deep reinforcement learning (DRL). The method optimizes the maximum completion time (makespan) and the response time of workflow tasks, aiming to enhance responsiveness while ensuring that the makespan is minimized. The experimental results indicate that the Q-learning Deep Reinforcement Learning (Q-DRL) algorithm markedly diminishes the makespan and refines the average response time within distributed graph database environments. In terms of makespan, Q-DRL achieves mean reductions of 12.4% and 11.9% over the established First-fit and Random scheduling strategies, respectively. Additionally, Q-DRL surpasses the performance of both the DRL-Cloud and Improved Deep Q-learning Network (IDQN) algorithms, with improvements of 4.4% and 2.6%, respectively. With reference to average response time, Q-DRL exhibits significantly enhanced performance in scheduling workflow tasks, decreasing the average by 2.27% and 4.71% compared to IDQN and DRL-Cloud, respectively. The Q-DRL algorithm also demonstrates a notable increase in the efficiency of system resource utilization, reducing the average idle rate by 5.02% and 9.30% in comparison to IDQN and DRL-Cloud, respectively. These findings support the assertion that Q-DRL not only upholds a lower average idle rate but also effectively curtails the average response time, thereby substantially improving processing efficiency and optimizing resource utilization within distributed graph database systems.
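The objective the DRL policy optimizes (finish dependent subtasks as early as possible across a pool of nodes) can be made concrete with a greedy list-scheduling baseline: respect the dependency DAG and place each ready task on the node that frees up earliest. The DAG, durations, and node count below are invented for illustration; the paper's Q-DRL policy would replace the greedy node choice.

```python
def list_schedule(durations, deps, n_nodes):
    """Greedy list scheduling of a task DAG.

    durations: {task: processing time}; deps: {task: [prerequisite tasks]}.
    Returns per-task finish times and the makespan.
    """
    # Kahn's algorithm for a topological order of the DAG.
    indeg = {t: len(deps.get(t, ())) for t in durations}
    ready = [t for t, d in indeg.items() if d == 0]
    order = []
    while ready:
        t = ready.pop()
        order.append(t)
        for u in durations:
            if t in deps.get(u, ()):
                indeg[u] -= 1
                if indeg[u] == 0:
                    ready.append(u)
    finish = {}
    node_free = [0.0] * n_nodes
    for t in order:
        earliest = max((finish[d] for d in deps.get(t, ())), default=0.0)
        node = min(range(n_nodes), key=node_free.__getitem__)  # earliest-free node
        start = max(earliest, node_free[node])
        finish[t] = start + durations[t]
        node_free[node] = finish[t]
    return finish, max(finish.values())
```

For a diamond-shaped workflow (tasks A and B independent, C waiting on both), two nodes run A and B in parallel, so C starts at max(finish A, finish B); a learned policy aims to beat this greedy placement on harder DAGs.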
Keywords: reinforcement learning; workflow; task scheduling; load balancing
16. Nano-scale Reinforcements and Properties of Al-Si-Cu Alloy Processed by High-Pressure Torsion
Authors: DONG Ying, WU Siyuan, HE Ziyang, LIANG Chen, CHENG Feng, HE Zuwei, QIAN Chenhao. Journal of Wuhan University of Technology (Materials Science) (SCIE, EI, CAS, CSCD), 2024, No. 5, pp. 1253-1259 (7 pages).
To improve the comprehensive mechanical properties of an Al-Si-Cu alloy, it was treated by a high-pressure torsion (HPT) process, and the effect of the degree of deformation on the microstructure and properties of the alloy was studied. The results show that after the treatment, the reinforcements (β-Si and θ-CuAl2 phases) of the Al-Si-Cu alloy are dispersed in the α-Al matrix phase with a finer phase size. The processed samples exhibit grain sizes in the submicron or even nanometer range, which effectively improves the mechanical properties of the material. The hardness and strength of the deformed alloy are significantly raised to 268 HV and 390.04 MPa, respectively, by a 10-turn HPT process, and the fracture morphology shows that the material gradually transitions from brittle to plastic over the course of deformation. The interdiffusion of elements at the interfaces between the phases has also been effectively enhanced. In addition, it is found that the severe plastic deformation at room temperature induces a ternary eutectic reaction, resulting in the formation of a ternary Al+Si+CuAl2 eutectic.
Keywords: Al-Si-Cu alloy; high-pressure torsion; nano-scale reinforcements; ternary eutectic
17. QoS Routing Optimization Based on Deep Reinforcement Learning in SDN
Authors: Yu Song, Xusheng Qian, Nan Zhang, Wei Wang, Ao Xiong 《Computers, Materials & Continua》 SCIE EI, 2024, No. 5, pp. 3007-3021 (15 pages)
To enhance the efficiency and expediency of issuing e-licenses within the power sector, we must confront the challenge of managing the surging demand for data traffic. In this realm, the network imposes stringent Quality of Service (QoS) requirements, revealing the inadequacy of traditional routing allocation mechanisms in accommodating such extensive data flows. In response to the need to handle a substantial influx of data requests promptly, and to alleviate the constraints of existing technologies and network congestion, we present an architecture for QoS routing optimization within a Software-Defined Network (SDN), leveraging deep reinforcement learning. This approach separates the SDN control and transmission functionalities, centralizing control over data forwarding while integrating deep reinforcement learning for informed routing decisions. By factoring in delay, bandwidth, jitter rate, and packet loss rate, we design a reward function to guide the Deep Deterministic Policy Gradient (DDPG) algorithm in learning the optimal routing strategy to furnish superior QoS provision. In our empirical investigations, we compare the performance of the deep reinforcement learning (DRL) approach against that of Shortest Path (SP) algorithms in terms of data packet transmission delay. The simulation results show that the proposed algorithm significantly reduces network delay and improves overall transmission efficiency, outperforming the traditional methods.
Keywords: deep reinforcement learning, SDN, route optimization, QoS
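The abstract above describes a reward built from delay, bandwidth, jitter rate, and packet loss rate to guide DDPG, but does not give its exact form. The weighted combination below is only an illustrative sketch; the weights, normalization ranges, and function name are all hypothetical, not the paper's actual design.

```python
def qos_reward(delay_ms, bandwidth_mbps, jitter_ms, loss_rate,
               w_delay=0.4, w_bw=0.3, w_jitter=0.15, w_loss=0.15):
    """Score a routing action by combining the four QoS metrics.

    Higher bandwidth raises the reward; delay, jitter, and packet loss
    lower it. Each metric is normalized to [0, 1] against an assumed
    operating range before weighting.
    """
    d = min(delay_ms / 100.0, 1.0)         # assume 100 ms is the worst tolerable delay
    b = min(bandwidth_mbps / 1000.0, 1.0)  # normalize against a 1 Gbps link
    j = min(jitter_ms / 20.0, 1.0)         # assume 20 ms worst-case jitter
    l = min(loss_rate, 1.0)                # loss rate is already a fraction
    return w_bw * b - w_delay * d - w_jitter * j - w_loss * l
```

Under these assumptions, a low-delay, high-bandwidth path scores strictly higher than a congested one, which is the property the DDPG critic needs to rank actions.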
18. Role Dynamic Allocation of Human-Robot Cooperation Based on Reinforcement Learning in an Installation of Curtain Wall
Authors: Zhiguang Liu, Shilin Wang, Jian Zhao, Jianhong Hao, Fei Yu 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, No. 1, pp. 473-487 (15 pages)
A real-time adaptive role allocation method based on reinforcement learning is proposed to improve human-robot cooperation performance in a curtain wall installation task. This method breaks with the traditional idea that the robot is regarded as the follower, or that cooperation merely switches between a fixed leader and follower. In this paper, a self-learning method is proposed that can dynamically adapt and continuously adjust the initiative weight of the robot according to changes in the task. First, a physical human-robot cooperation model including a role factor is built. Then, a reinforcement learning model that can adjust the role factor in real time is established, and a reward and action model is designed. The role factor is adjusted continuously according to the comprehensive performance of the human-robot interaction force and the robot's jerk during repeated installation. Finally, the role adjustment rule established above continuously improves the comprehensive performance. Experiments on dynamic role allocation and on the effect of the performance weighting coefficient have been conducted. The results show that the proposed method realizes role adaptation and achieves the dual optimization goal of reducing the sum of the cooperator's force and the robot's jerk.
Keywords: human-robot cooperation, role allocation, reinforcement learning
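The role factor adjustment described above can be sketched as a simple update rule driven by interaction force and jerk. The reference values, learning rate, and update form below are hypothetical illustrations, not the paper's actual reward and action model.

```python
def update_role_factor(alpha, interaction_force, robot_jerk,
                       force_ref=5.0, jerk_ref=2.0, lr=0.05):
    """One update step for the robot's initiative weight alpha in [0, 1].

    When the measured interaction force exceeds the reference, the human
    is resisting the robot, so the robot yields (alpha decreases); smooth,
    low-force motion lets the robot take more initiative. High robot jerk
    also penalizes alpha.
    """
    force_error = (force_ref - interaction_force) / force_ref
    jerk_penalty = max(robot_jerk - jerk_ref, 0.0) / jerk_ref
    alpha += lr * (force_error - jerk_penalty)
    return min(max(alpha, 0.0), 1.0)  # clamp to the valid role range
```

The clamp keeps the role factor a valid interpolation weight between pure follower (0) and pure leader (1), mirroring the continuous adjustment the abstract describes.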
19. Reinforcement Learning in Process Industries: Review and Perspective
Authors: Oguzhan Dogru, Junyao Xie, Om Prakash, Ranjith Chiplunkar, Jansen Soesanto, Hongtian Chen, Kirubakaran Velswamy, Fadi Ibrahim, Biao Huang 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD, 2024, No. 2, pp. 283-300 (18 pages)
This survey paper provides a review and perspective on intermediate and advanced reinforcement learning (RL) techniques in process industries. It offers a holistic approach by covering all levels of the process control hierarchy. The paper presents a comprehensive overview of RL algorithms, including fundamental concepts such as Markov decision processes and the main approaches to RL (value-based, policy-based, and actor-critic methods), while also discussing the relationship between classical control and RL. It further reviews the wide-ranging applications of RL in process industries, such as soft sensors, low-level control, high-level control, distributed process control, fault detection and fault-tolerant control, optimization, planning, scheduling, and supply chain management. The paper discusses the limitations and advantages, trends and new applications, and opportunities and future prospects for RL in process industries. Moreover, it highlights the need for a holistic approach in complex systems, given the growing importance of digitalization in the process industries.
Keywords: process control, process systems engineering, reinforcement learning
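As a minimal illustration of the value-based RL fundamentals this survey covers, the snippet below runs tabular value iteration on a toy Markov decision process; the two-state MDP in the test is invented for the example and does not come from the paper.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration on a tabular MDP.

    P[a][s, s'] is the probability of reaching state s' from state s
    under action a; R[a][s] is the expected immediate reward. Returns
    the optimal state values and a greedy policy.
    """
    n_states = P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        # Bellman optimality backup: Q(s, a) = R(s, a) + gamma * E[V(s')]
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```

With gamma = 0.9, an action paying reward 1 at every step yields a value of 1 / (1 - 0.9) = 10, which gives a quick sanity check for the fixed point.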
20. Reinforcement Learning Based Edge Computing in B5G
Authors: Jiachen Yang, Yiwen Sun, Yutian Lei, Zhuo Zhang, Yang Li, Yongjun Bao, Zhihan Lv 《Digital Communications and Networks》 SCIE CSCD, 2024, No. 1, pp. 1-6 (6 pages)
The development of communication technology will promote the application of the Internet of Things, and Beyond 5G will become a new technology promoter. At the same time, Beyond 5G will become one of the important supports for the development of edge computing technology. This paper proposes a communication task allocation algorithm based on deep reinforcement learning for vehicle-to-pedestrian communication scenarios in edge computing. Through the agent's trial-and-error learning, the optimal spectrum and power for transmission can be determined without global information, so as to balance vehicle-to-pedestrian and vehicle-to-infrastructure communication. The results show that the agent can effectively improve the vehicle-to-infrastructure communication rate while meeting the delay constraints on the vehicle-to-pedestrian link.
Keywords: reinforcement learning, edge computing, Beyond 5G, vehicle-to-pedestrian
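The trial-and-error spectrum and power selection described above can be illustrated, in a simplified tabular form rather than the paper's deep RL formulation, with an epsilon-greedy Q-learning agent over discrete (channel, power) actions; the function names and parameters here are hypothetical.

```python
import random

def make_actions(n_channels, power_levels):
    """Enumerate the discrete joint action space: one (channel, power) pair."""
    return [(c, p) for c in range(n_channels) for p in power_levels]

def select_action(q_table, state, actions, epsilon=0.1):
    """Epsilon-greedy choice over spectrum/power pairs, no global CSI needed."""
    if random.random() < epsilon:
        return random.choice(actions)          # explore
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))  # exploit

def q_update(q_table, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.95):
    """Standard one-step Q-learning update on a dict-backed table."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in actions)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

In this sketch the reward would encode the V2I rate achieved while the V2P delay constraint holds, so the agent learns which channel/power pair balances the two links.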