With the continuous expansion of data center networks, changing network requirements, and increasing pressure on network bandwidth, the traditional network architecture can no longer meet users' needs. The development of software-defined networking (SDN) has brought new opportunities and challenges to future networks. The separation of the data and control planes in SDN improves the performance of the entire network, and researchers have integrated the SDN architecture into data centers to improve network resource utilization and performance. This paper first introduces the basic concepts of SDN and data center networks. It then discusses SDN-based load balancing mechanisms for data centers from different perspectives. Finally, it summarizes research on SDN-based load balancing mechanisms and looks ahead to its development trends.
In power communication networks, it is a challenge to reduce the risk of different services efficiently so as to improve operational reliability. One of the important factors reflecting communication risk is service route distribution. However, existing routing algorithms do not take the degree of importance of services into account, which leads to load imbalance and increases the risks to services and the network. A routing optimization mechanism based on load balancing for power communication networks is proposed to address these problems. First, the mechanism constructs an evaluation model that assesses the service and network risk degree using a combination of device, service load, and service characteristics. Second, service weights are determined with a modified relative-entropy TOPSIS method, and a balanced service routing algorithm is proposed. Simulations on a practical network topology show that the mechanism can optimize the network risk degree and the load balancing degree efficiently.
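As a rough illustration of how entropy-based TOPSIS weighting can rank services, the sketch below computes entropy weights and TOPSIS closeness scores over a hypothetical criteria matrix (device importance, carried load, service class). It uses the standard entropy-weight TOPSIS rather than the paper's modified relative-entropy variant, and all names are illustrative.

```python
import numpy as np

def entropy_topsis_weights(criteria):
    """Rank services with an entropy-weighted TOPSIS score.

    criteria: (n_services, n_criteria) matrix of positive indicator values,
    e.g. columns for device importance, carried load, and service class.
    Returns a closeness score per service (higher = more important).
    """
    # Normalize each criterion column.
    norm = criteria / criteria.sum(axis=0)

    # Entropy weights: criteria with more dispersion get larger weights.
    k = 1.0 / np.log(criteria.shape[0])
    entropy = -k * np.nansum(norm * np.log(norm + 1e-12), axis=0)
    weights = (1 - entropy) / (1 - entropy).sum()

    # Weighted normalized decision matrix and ideal/anti-ideal solutions.
    v = norm * weights
    ideal_best, ideal_worst = v.max(axis=0), v.min(axis=0)

    # Closeness to the ideal solution.
    d_best = np.linalg.norm(v - ideal_best, axis=1)
    d_worst = np.linalg.norm(v - ideal_worst, axis=1)
    return d_worst / (d_best + d_worst + 1e-12)

# Example: three services scored on three criteria.
scores = entropy_topsis_weights(np.array([[0.9, 120, 3.0],
                                          [0.5,  80, 2.0],
                                          [0.2,  40, 1.0]]))
print(scores)
```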
Software Defined Networking (SDN) provides flexible network management by decoupling the control plane from the data plane. Multiple controllers are deployed to improve the scalability and reliability of the control plane, which divides the network into several subdomains with separate controllers. However, such deployment introduces a new problem of controller load imbalance due to dynamic traffic and the static configuration between switches and controllers. To address this issue, this paper proposes a Distribution Decision Mechanism (DDM) based on switch migration in multi-subdomain SDN networks. First, by collecting network information, it constructs distributed migration decision fields based on controller load conditions. Then migrating switches are chosen according to a selection probability, and the target controllers are determined by integrating three network costs: data collection, switch migration, and controller state synchronization. Finally, a migration countdown is set to achieve ordered switch migration. Verification over several evaluation indexes shows that the proposed mechanism achieves controller load balancing with better performance.
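The following is a minimal sketch of the kind of migration decision described above: it picks a switch from an overloaded controller with probability proportional to its load contribution and chooses the target controller by a combined cost. The `costs` function and the data structures are hypothetical placeholders, not the paper's DDM.

```python
import random

def choose_migration(controllers, switches, costs):
    """Pick a switch to migrate and its target controller.

    controllers: {cid: current_load}
    switches:    {sid: (home_cid, load_contribution)}
    costs:       hypothetical function costs(sid, src, dst) returning the
                 combined data-collection, migration and state-sync cost.
    """
    avg = sum(controllers.values()) / len(controllers)
    overloaded = [c for c, l in controllers.items() if l > avg]
    underloaded = [c for c, l in controllers.items() if l < avg]
    if not overloaded or not underloaded:
        return None

    # Migrate from the most loaded controller.
    src = max(overloaded, key=lambda c: controllers[c])
    candidates = [(s, load) for s, (c, load) in switches.items() if c == src]
    if not candidates:
        return None

    # Selection probability proportional to each switch's load contribution.
    total = sum(load for _, load in candidates)
    sid = random.choices([s for s, _ in candidates],
                         weights=[load / total for _, load in candidates])[0]

    # Target controller: underloaded controller with the lowest combined cost.
    dst = min(underloaded, key=lambda c: costs(sid, src, c))
    return sid, src, dst
```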
This paper focuses on the scheduling problem of workflow tasks that exhibit interdependencies. Unlike independent batch tasks, workflows typically consist of multiple subtasks with intrinsic correlations and dependencies, so the various computational tasks must be distributed to appropriate computing node resources in accordance with task dependencies to ensure smooth completion of the entire workflow. Workflow scheduling must consider an array of factors, including task dependencies, availability of computational resources, and the schedulability of tasks. This paper therefore studies the workflow task scheduling problem for distributed graph databases and proposes a workflow scheduling method based on deep reinforcement learning (DRL). The method optimizes the maximum completion time (makespan) and response time of workflow tasks, aiming to enhance responsiveness while minimizing the makespan. The experimental results indicate that the Q-learning Deep Reinforcement Learning (Q-DRL) algorithm markedly reduces the makespan and improves the average response time in distributed graph database environments. For makespan, Q-DRL achieves mean reductions of 12.4% and 11.9% over the established First-fit and Random scheduling strategies, respectively, and outperforms the DRL-Cloud and Improved Deep Q-learning Network (IDQN) algorithms by 4.4% and 2.6%, respectively. For average response time, Q-DRL decreases the average by 2.27% and 4.71% compared with IDQN and DRL-Cloud, respectively. The Q-DRL algorithm also improves system resource utilization, reducing the average idle rate by 5.02% and 9.30% relative to IDQN and DRL-Cloud, respectively. These findings show that Q-DRL maintains a lower average idle rate while curtailing the average response time, substantially improving processing efficiency and resource utilization in distributed graph database systems.
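To make the scheduling loop concrete, here is a simplified tabular Q-learning sketch of the idea; the paper's Q-DRL uses a deep network and a richer state, so only the epsilon-greedy action choice and the Q-update with a makespan-based reward are shown, and all names are illustrative.

```python
import random
from collections import defaultdict

class QScheduler:
    """Simplified tabular Q-learning sketch of DRL-style workflow scheduling.

    State: index of the next ready subtask; action: which node runs it.
    """
    def __init__(self, n_nodes, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * n_nodes)
        self.n_nodes, self.alpha, self.gamma, self.epsilon = n_nodes, alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:           # explore
            return random.randrange(self.n_nodes)
        return max(range(self.n_nodes), key=lambda a: self.q[state][a])

    def learn(self, state, action, reward, next_state):
        # Standard Q-learning update toward the temporal-difference target.
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

# Reward shaping (assumed): penalize the makespan increase caused by a placement,
# e.g. reward = -(new_makespan - old_makespan).
```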
Unbalanced traffic distribution in cellular networks results in congestion and degrades spectrum efficiency. To tackle this problem, we propose an Unmanned Aerial Vehicle (UAV)-assisted wireless network in which the UAV acts as an aerial relay to divert some traffic from an overloaded cell to its adjacent underloaded cell. To fully exploit its potential, we jointly optimize the UAV position, user association, spectrum allocation, and power allocation to maximize the sum log-rate of all users in two adjacent cells. To tackle this complicated joint optimization problem, we first design a genetic algorithm to optimize the UAV position. Then, we simplify the problem by theoretical analysis and devise a low-complexity algorithm based on the branch-and-bound method to obtain the optimal user association and spectrum allocation schemes. We further propose an iterative power allocation algorithm based on sequential convex approximation. The simulation results indicate that the proposed UAV-assisted wireless network is superior to the terrestrial network in both utility and throughput, and that the proposed algorithms substantially improve network performance in comparison with the other schemes.
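A genetic search over candidate UAV positions could look like the sketch below, which assumes a user-supplied `utility` function returning the sum log-rate for a position; the selection, crossover, and mutation operators are generic choices, not the paper's exact design.

```python
import random

def ga_uav_position(utility, bounds, pop=30, gens=100, mut=0.1):
    """Genetic search for a UAV position (x, y, h) maximizing a rate utility.

    utility: hypothetical function utility((x, y, h)) -> sum log-rate
    bounds:  [(x_min, x_max), (y_min, y_max), (h_min, h_max)]
    """
    def rand_pos():
        return tuple(random.uniform(lo, hi) for lo, hi in bounds)

    population = [rand_pos() for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=utility, reverse=True)
        parents = scored[:pop // 2]                            # selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            child = tuple((x + y) / 2 for x, y in zip(a, b))   # crossover
            if random.random() < mut:                          # mutation
                child = tuple(min(max(c + random.gauss(0, (hi - lo) * 0.05), lo), hi)
                              for c, (lo, hi) in zip(child, bounds))
            children.append(child)
        population = parents + children
    return max(population, key=utility)
```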
Internet of Vehicles (IoV) is a new style of vehicular ad hoc network that connects the sensors of each vehicle with each other and with other vehicles' sensors through the Internet. These sensors generate different tasks that should be analyzed and processed within a given period of time. Sending these tasks to cloud servers increases bandwidth consumption and latency. Fog computing is a small cloud at the network edge that processes jobs in a short period of time instead of sending them to cloud computing facilities. In some situations, however, fog computing cannot execute certain tasks due to lack of resources and transfers them to the cloud, which again increases latency and bandwidth occupation. Moreover, several fog servers may be fully loaded while other servers are empty, which implies an unfair distribution of jobs. In this study, we merge software defined networking (SDN) with IoV and fog computing and use parked vehicles as assistant fog computing nodes. This improves the capabilities of the fog computing layer and helps decrease the number of tasks migrated to the cloud servers, which increases the ratio of time-sensitive tasks that meet their deadlines. In addition, a new load balancing strategy is proposed; it works proactively to balance the load locally and globally through the local fog managers and the SDN controller, respectively. The simulation experiments show that the proposed system is more efficient than the VANET-Fog-Cloud and IoV-Fog-Cloud frameworks in terms of average response time, percentage of bandwidth consumption, deadline satisfaction, and resource utilization.
With the rapid development of electric power systems, load estimation plays an important role in system operation and planning. Load estimation techniques usually include traditional, time series, regression analysis-based, and machine learning-based estimation. Since machine learning-based methods can deliver better performance, this paper proposes a deep learning-based load estimation algorithm using an image fingerprint and an attention mechanism. First, an image fingerprint construction is proposed for the training data: after data preprocessing, the training data matrix is constructed by cyclic shift and cubic spline interpolation, and a linear mapping together with a gray-to-color transformation forms the color image fingerprint. Second, a convolutional neural network (CNN) combined with an attention mechanism is proposed to improve training performance. Finally, an experiment is carried out to evaluate the estimation performance. Compared with the support vector machine, CNN, and long short-term memory methods, the proposed algorithm has the best load estimation performance.
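One interpretation of the image-fingerprint construction is sketched below: a cyclic-shift matrix, cubic-spline resampling, linear mapping to [0, 1], and a simple gray-to-color ramp standing in for the paper's color transformation. Function and parameter names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def load_fingerprint(load_series, size=64):
    """Illustrative image fingerprint from a 1-D load series (assumed steps)."""
    n = len(load_series)
    # Cyclic shift: row i is the series rotated by i samples.
    shifted = np.stack([np.roll(load_series, i) for i in range(min(n, size))])

    # Cubic-spline interpolation to a fixed number of columns.
    x_old = np.arange(shifted.shape[1])
    x_new = np.linspace(0, shifted.shape[1] - 1, size)
    matrix = np.array([CubicSpline(x_old, row)(x_new) for row in shifted])

    # Linear mapping to [0, 1], then a simple gray-to-color transformation
    # (blue-to-red ramp) standing in for the paper's color mapping.
    gray = (matrix - matrix.min()) / (np.ptp(matrix) + 1e-12)
    rgb = np.stack([gray, 1.0 - np.abs(gray - 0.5) * 2, 1.0 - gray], axis=-1)
    return rgb

image = load_fingerprint(np.sin(np.linspace(0, 8 * np.pi, 256)) + 1.5)
print(image.shape)   # (64, 64, 3) color fingerprint
```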
Because of its highly aggregated computation model, cloud computing cannot fully exploit the resources of edge devices, such as computing and storage. Fog computing can improve the resource utilization efficiency of edge devices and address the service computing needs of delay-sensitive applications. This paper studies the framework of fog computing and adopts cloud atomization technology to turn physical nodes at different levels into virtual machine nodes. On this basis, it uses graph partitioning theory to build a load balancing algorithm for fog computing based on dynamic graph partitioning. The simulation results show that the fog computing framework after cloud atomization can build the system network flexibly, and that the dynamic load balancing mechanism can effectively configure system resources while reducing the node-migration overhead caused by system changes.
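The sketch below illustrates dynamic-graph-partitioning-style rebalancing in the simplest terms: nodes in overloaded partitions are moved to neighboring partitions that keep the edge cut (and hence migration traffic) small. The data structures and thresholds are assumptions, not the paper's algorithm.

```python
def rebalance(graph, loads, assignment, n_parts, tol=0.1):
    """Greedy dynamic graph-repartitioning sketch for load balancing.

    graph:      {node: set(neighbors)} of virtual machine nodes
    loads:      {node: load}
    assignment: {node: partition_id}, modified in place
    """
    target = sum(loads.values()) / n_parts
    part_load = {p: 0.0 for p in range(n_parts)}
    for node, p in assignment.items():
        part_load[p] += loads[node]

    for node, neighbors in graph.items():
        src = assignment[node]
        if part_load[src] <= target * (1 + tol):
            continue                                   # partition not overloaded
        # Candidate partitions are those of the node's neighbors (cheap moves).
        candidates = {assignment[nb] for nb in neighbors if assignment[nb] != src}
        underloaded = [p for p in candidates if part_load[p] < target * (1 - tol)]
        if not underloaded:
            continue
        # Prefer the partition holding most neighbors: smallest extra edge cut.
        dst = max(underloaded,
                  key=lambda p: sum(assignment[nb] == p for nb in neighbors))
        assignment[node] = dst
        part_load[src] -= loads[node]
        part_load[dst] += loads[node]
    return assignment
```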
Cloud Computing provides on-demand access to a shared resource pool and has completely changed the way businesses are managed, applications are implemented, and services are provided. Its rise in popularity has led to a significant increase in user demand for services. However, in cloud environments, efficient load balancing is essential to ensure optimal performance and resource utilization. This systematic review provides a detailed description of load balancing techniques, including static and dynamic load balancing algorithms. Specifically, metaheuristic-based dynamic load balancing algorithms are identified as the optimal solution in the case of increased traffic. In a cloud-based context, this paper describes load balancing measurements, including the benefits and drawbacks of the selected load balancing techniques. It also summarizes the algorithms in terms of implementation, time complexity, adaptability, associated issues, and targeted QoS parameters. Additionally, the analysis evaluates the tools and instruments utilized in each investigated study. Moreover, a comparative analysis among static, traditional dynamic, and metaheuristic algorithms based on response time is performed using the CloudSim simulation tool. Finally, the key open problems and potential directions for state-of-the-art metaheuristic-based approaches are addressed.
Accurate load forecasting forms a crucial foundation for implementing household demand response plans and optimizing load scheduling. When dealing with short-term load data characterized by substantial fluctuations, a single prediction model can hardly capture temporal features effectively, resulting in diminished prediction accuracy. In this study, a hybrid deep learning framework that integrates an attention mechanism, a convolutional neural network (CNN), improved chaotic particle swarm optimization (ICPSO), and long short-term memory (LSTM) is proposed for short-term household load forecasting. First, the CNN model is employed to extract features from the original data, enhancing the quality of the data features. Subsequently, the moving average method is used for data preprocessing, followed by the application of the LSTM network to predict the processed data. Moreover, the ICPSO algorithm is introduced to optimize the parameters of the LSTM, boosting the model's running speed and accuracy. Finally, the attention mechanism is employed to optimize the output of the LSTM, effectively addressing the information loss induced in the LSTM by lengthy sequences and further improving prediction accuracy. The numerical analysis verifies the accuracy and effectiveness of the proposed hybrid model: it explores data features adeptly and achieves superior prediction accuracy compared with other forecasting methods for household loads exhibiting significant fluctuations across different seasons.
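A minimal CNN + LSTM + attention forecaster in PyTorch is sketched below; the hyperparameters are illustrative and the ICPSO tuning step is omitted, so this only shows how the three components compose.

```python
import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    """Minimal CNN + LSTM + attention forecaster for univariate load windows
    of shape (batch, seq_len, 1); sizes are illustrative assumptions.
    """
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # local feature extraction
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)               # scores each time step
        self.out = nn.Linear(hidden, 1)                # next-step load

    def forward(self, x):                              # x: (batch, seq_len, 1)
        feats = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(feats)                        # (batch, seq_len, hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # attention over time
        context = (weights * h).sum(dim=1)
        return self.out(context)

model = CNNLSTMAttention()
pred = model(torch.randn(8, 96, 1))                    # 8 windows of 96 steps
print(pred.shape)                                      # torch.Size([8, 1])
```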
In this paper, a sender-initiated protocol that uses a fuzzy logic control method is applied to improve computer network performance by balancing loads among computers. The model devises a sender-initiated protocol for load transfer to achieve load balancing. Groups are formed, and every group has a node called a designated representative (DR). During load transfer, loads are transferred through the DR of each group to achieve load balancing. The simulation results show that the performance of the proposed protocol is better than that of the conventional method used for comparison, and that the protocol is more stable than the method without fuzzy logic control.
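A toy version of such a fuzzy transfer decision is shown below: simple membership functions for "high" and "low" load, two rules, and a weighted-average defuzzification. The membership functions and rule set are invented for illustration.

```python
def fuzzy_transfer_decision(local_load, remote_load):
    """Toy fuzzy-logic sketch of a sender-initiated transfer decision.

    Loads are in [0, 1]; the rules are illustrative, not the paper's.
    """
    def high(x): return max(0.0, min(1.0, (x - 0.5) / 0.4))
    def low(x):  return max(0.0, min(1.0, (0.5 - x) / 0.4))

    # Rule 1: if local load is HIGH and remote load is LOW, transfer strongly.
    transfer = min(high(local_load), low(remote_load))
    # Rule 2: if local load is LOW or remote load is HIGH, keep the task.
    keep = max(low(local_load), high(remote_load))

    # Defuzzify with a weighted average of rule strengths.
    score = (transfer * 1.0 + keep * 0.0) / (transfer + keep + 1e-9)
    return score > 0.5     # True: send the task toward the remote DR

print(fuzzy_transfer_decision(0.9, 0.2))   # True
print(fuzzy_transfer_decision(0.4, 0.6))   # False
```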
Cloud providers (e.g., Google, Alibaba, Amazon) own large-scale datacenter networks that comprise thousands of switches and links. A load balancing mechanism is supposed to effectively utilize the bisection bandwidth. Both Equal-Cost Multi-Path (ECMP), the canonical solution in practice, and its alternatives come with performance limitations or significant deployment challenges. In this work, we propose Closer, a scalable load balancing mechanism for cloud datacenters. Closer follows the evolution of the technology, including the deployment of Clos-based topologies, overlays for network virtualization, and virtual machine (VM) clusters. We decouple the system into centralized route calculation and distributed route decision to guarantee its flexibility and stability in large-scale networks. Leveraging In-band Network Telemetry (INT) to obtain precise link state information, a simple but efficient algorithm implements weighted ECMP at the edge of the fabric, which enables Closer to proactively map flows to appropriate paths and avoid excessive congestion on any single link. Closer achieves 2 to 7 times better flow completion time (FCT) at 70% network load than existing schemes that work with the same hardware environment.
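A weighted-ECMP edge decision of the kind described could be sketched as follows, assuming per-link utilization values such as INT might report; path weights are inversely related to the bottleneck utilization, and a per-flow hash keeps a flow's packets on one path. This is a sketch of the general technique, not Closer's actual algorithm.

```python
import random

def weighted_ecmp(paths, link_util, flow_hash=0):
    """Weighted ECMP sketch: choose a path with probability inversely
    related to its bottleneck utilization.

    paths:     list of candidate paths, each a list of link ids
    link_util: {link_id: utilization in [0, 1)}
    flow_hash: per-flow hash so a flow's packets stay on one path
    """
    # A path's cost is its most congested link; lighter paths get more weight.
    weights = [1.0 - max(link_util[l] for l in path) for path in paths]
    total = sum(weights)
    rng = random.Random(flow_hash)                   # deterministic per flow
    return rng.choices(paths, weights=[w / total for w in weights])[0]

paths = [["s1-a", "a-d1"], ["s1-b", "b-d1"], ["s1-c", "c-d1"]]
util = {"s1-a": 0.7, "a-d1": 0.9, "s1-b": 0.2, "b-d1": 0.3, "s1-c": 0.5, "c-d1": 0.4}
print(weighted_ecmp(paths, util, flow_hash=hash(("10.0.0.1", 443, "10.0.1.2", 80))))
```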
To solve the load balancing problem in a triplet-based hierarchical interconnection network (THIN) system, a dynamic load balancing (DLB) algorithm, THINDLBA, which adopts multicast tree (MT) technology to improve the efficiency of exchanging load information, is presented. To support the algorithm, a complete set of DLB messages and a scheme for maintaining DLB information in each processing node are designed. Load migration request messages from the heavily loaded node (HLN) are spread along an MT whose root is the HLN, and the lightly loaded nodes (LLNs) covered by the MT are the candidate destinations of load migration; the load information exchanged between the LLNs and the HLN can be transmitted along the MT. Thus the HLN can migrate as much excess load as possible during one execution of THINDLBA, and its load state can be improved as quickly as possible. To avoid wrongly transmitted or redundant DLB messages caused by MT overlapping, MT construction is restricted in the design of THINDLBA. Experiments comparing the effectiveness of four DLB algorithms show that THINDLBA decreases the time cost of THIN systems in dealing with large-scale compute-intensive tasks more effectively than the others.
To improve data distribution efficiency, a load-balancing data distribution (LBDD) method is proposed for the publish/subscribe mode. In the LBDD method, subscribers are involved in distribution tasks and data transfers while receiving data themselves. A dissemination tree is constructed among the subscribers based on MD5, where the publisher acts as the root. The proposed method provides bucket construction, target selection, and path updates; furthermore, the property of one-way dissemination is proven, and the LBDD guarantees that the average out-degree of a node is 2. Experiments on data distribution delay, data distribution rate, and load distribution are conducted. The results show that the LBDD method helps share the task load between the publisher and subscribers and outperforms the point-to-point approach.
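One way to realize an MD5-based dissemination tree with bounded out-degree is sketched below: subscribers are ordered by the MD5 digest of their identifiers and laid out as a binary tree under the publisher, so each node forwards to at most two children while also receiving data. The layout rule is an assumption for illustration, not the LBDD bucket scheme.

```python
import hashlib

def build_dissemination_tree(publisher, subscribers):
    """Sketch of an MD5-ordered dissemination tree with out-degree at most 2."""
    ordered = sorted(subscribers,
                     key=lambda s: hashlib.md5(s.encode()).hexdigest())
    nodes = [publisher] + ordered
    children = {n: [] for n in nodes}
    for i, node in enumerate(nodes):
        for j in (2 * i + 1, 2 * i + 2):       # binary-heap layout under the root
            if j < len(nodes):
                children[node].append(nodes[j])
    return children

tree = build_dissemination_tree("pub", [f"sub{i}" for i in range(6)])
for parent, kids in tree.items():
    print(parent, "->", kids)
```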
This paper focuses on improving the robustness and the efficiency of a distributed system at the same time, using fault tolerance with active replication together with load balancing techniques. The pros and cons of both techniques are analyzed, and a novel load balancing framework for fault-tolerant systems with active replication is presented. Its hierarchical architecture is described in detail. The framework can dynamically adjust fault-tolerant groups and their memberships with respect to system load. Three potential task scheduler group selection methods are proposed and evaluated through simulation. Further analysis of the test data yields helpful observations for system design, including the effects of task arrival intensity and task set size, and the relationship between total task execution time and single task execution time.
Rock failure phenomena, such as rockburst, slabbing (or spalling), and zonal disintegration, related to deep underground excavation of hard rocks are frequently reported and pose a great threat to deep mining. Currently, the explanation of these failure phenomena using existing dynamic or static rock mechanics theory is not straightforward. In this study, a new theory and testing method for deep underground rock masses under coupled static-dynamic loading are introduced. Two types of coupled loading modes, i.e., 'critical static stress + slight disturbance' and 'elastic static stress + impact disturbance', are proposed, and associated test devices are developed. Rockburst phenomena of hard rocks under coupled static-dynamic loading are successfully reproduced in the laboratory, and the rockburst mechanism and related criteria are demonstrated. The results of true triaxial unloading compression tests on granite and red sandstone indicate that unloading can induce slabbing when the confining pressure exceeds a certain threshold, and that the slabbing failure strength is lower than the shear failure strength given by the conventional Mohr-Coulomb criterion. Numerical results indicate that the rock unloading failure response under different in situ stresses and unloading rates can be characterized by an equivalent strain energy density. In addition, we present a new microseismic source location method that does not require premeasuring the sound wave velocity in the rock mass and can efficiently and accurately locate rock failures in hard rock mines. A new idea for deep hard rock mining using a non-explosive continuous mining method is also briefly introduced.
The Internet of Vehicles (IoV) has been widely researched in recent years, and cloud computing has been one of its key technologies. Although cloud computing provides high-performance computing, storage, and networking services, the IoV still suffers from high processing latency and limited mobility support and location awareness. In this paper, we integrate fog computing and software defined networking (SDN) to address those problems. Fog computing extends computing and storage to the edge of the network, which can decrease latency remarkably while enabling mobility support and location awareness. Meanwhile, SDN provides flexible centralized control and global knowledge of the network. In order to apply the software defined cloud/fog networking (SDCFN) architecture to the IoV effectively, we propose a novel SDN-based modified constrained optimization particle swarm optimization (MPSO-CO) algorithm, which uses the reverse flight of mutation particles and a linearly decreasing inertia weight to enhance the performance of constrained optimization particle swarm optimization (PSO-CO). The simulation results indicate that the SDN-based MPSO-CO algorithm can effectively decrease latency and improve the quality of service (QoS) in the SDCFN architecture.
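For reference, a basic PSO loop with the linearly decreasing inertia weight mentioned above is sketched below; the constraint handling and the reverse-flight mutation that distinguish MPSO-CO are omitted, and all parameters are illustrative defaults.

```python
import random

def pso_minimize(objective, bounds, n_particles=20, iters=100,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Basic PSO with a linearly decreasing inertia weight (simplified sketch)."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]

    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters        # linear decrease
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:                     # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):             # update global best
                    gbest = pos[i][:]
    return gbest

best = pso_minimize(lambda p: sum(x * x for x in p), [(-5, 5)] * 3)
print(best)
```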
High level architecture (HLA) is the open standard in the collaborative simulation field. Scholars have been paying close attention to theoretical research on, and engineering applications of, collaborative simulation based on HLA/RTI, which extends HLA in various aspects such as functionality and efficiency. However, research on the load balancing problem of HLA collaborative simulation is insufficient. Without load balancing, collaborative simulation under HLA/RTI may suffer performance degradation or even fatal errors. In this paper, load balancing is divided into static and dynamic problems. For static load balancing, a multi-objective model is established and the randomness of the model parameters is taken into consideration, which makes the model more credible; a Monte Carlo based optimization algorithm (MCOA) is devised to achieve static load balance. For dynamic load balancing, a new type of dynamic load balancing problem is put forward with regard to variable-structured collaborative simulation under HLA/RTI; in order to minimize the impact on the running collaborative simulation, an ordinal optimization based algorithm (OOA) is devised to shorten the optimization time. The two algorithms are adopted in simulation experiments of different scenarios, which demonstrate their effectiveness and efficiency. An engineering experiment on collaborative simulation under HLA/RTI of high-speed electric multiple units (EMU) is also conducted to verify the credibility of the proposed models and the utility of MCOA and OOA in practical engineering systems. The proposed research preserves compatibility with traditional HLA, enhances the ability to assign simulation loads onto computing units both statically and dynamically, improves the performance of collaborative simulation systems, and makes full use of hardware resources.
Working platforms supported by multiple extensible legs must be leveled before they come into operation. Although the supporting stiffness and reliability of the platform improve as the number of supporting legs increases, the increased overdetermination of multi-leg platform systems leads to leveling coupling among legs and to the virtual leg problem, in which some supporting legs bear zero or near-zero loads. These problems make leveling such a multi-leg platform complex and time consuming. Based on rigid body kinematics, an approximate equation is formulated to rapidly calculate the leg extension needed to level a rigid platform, and a proportional speed control strategy is proposed to reduce unexpected platform distortion and leveling coupling between supporting legs. Taking both the load coupling between supporting legs and the elastic flexibility of the working platform into consideration, an optimal balancing of legs' loads (OBLL) model is first put forward to deal with the traditional virtual leg problem. Using the concept of a supporting stiffness matrix, a coupling extension method (CEM) is developed to solve the OBLL problem for multi-leg flexible platforms. Finally, with the supporting stiffness matrix and the static transmissibility matrix, an optimal load balancing leveling method is proposed to achieve geometric leveling and load balancing among the legs simultaneously. Three numerical examples illustrate the performance of the proposed methods, which can quantify all of the legs' extensions at the same time and achieve geometric leveling and legs' load balancing simultaneously, improving the stability, precision, and efficiency of the auto-leveling control process.
One of the challenging scheduling problems in Cloud data centers is to take into account the allocation and migration of reconfigurable virtual machines as well as the integrated features of the hosting physical machines. We introduce a Dynamic and Integrated Resource Scheduling algorithm (DAIRS) for Cloud data centers. Unlike traditional load-balancing scheduling algorithms, which often consider only one factor such as the CPU load of physical servers, DAIRS treats CPU, memory, and network bandwidth in an integrated way for both physical and virtual machines. We develop an integrated measurement of the total imbalance level of a Cloud data center as well as the average imbalance level of each server. Simulation results show that DAIRS performs well with regard to the total imbalance level, the average imbalance level of each server, and the overall running time.
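An integrated imbalance measurement in the spirit of DAIRS might be computed as in the sketch below, which averages the absolute deviation of CPU, memory, and bandwidth utilization from the data-center mean; the exact formula used by DAIRS may differ.

```python
def imbalance_levels(servers):
    """Sketch of an integrated imbalance measurement over CPU, memory and
    bandwidth utilization (values in [0, 1]); names and weights are assumed.

    servers: list of dicts like {"cpu": 0.7, "mem": 0.4, "net": 0.5}
    Returns (total_imbalance, per_server_imbalance).
    """
    dims = ("cpu", "mem", "net")
    averages = {d: sum(s[d] for s in servers) / len(servers) for d in dims}

    # Per-server imbalance: mean absolute deviation across the three dimensions.
    per_server = [sum(abs(s[d] - averages[d]) for d in dims) / len(dims)
                  for s in servers]

    # Data-center-level imbalance: mean of the per-server values.
    total = sum(per_server) / len(per_server)
    return total, per_server

total, per_srv = imbalance_levels([
    {"cpu": 0.9, "mem": 0.5, "net": 0.6},
    {"cpu": 0.2, "mem": 0.3, "net": 0.4},
])
print(round(total, 3), [round(v, 3) for v in per_srv])
```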