The Internet of Things (IoT) enables innovative applications and new services when mobile nodes are included. For IoT-enabled low-power and lossy networks (LLNs), the Routing Protocol for Low-power and Lossy Networks (RPL) has become the established standard routing protocol. Mobility under standard RPL remains a difficult issue, as it causes continuous path disturbance, energy loss, and increased end-to-end delay in the network. In this context, a Balanced-load and Energy-efficient RPL (BE-RPL) is proposed. It is a routing technique that is both energy-efficient and mobility-aware. It responds more quickly to link breakage through received-signal-strength-based mobility monitoring and reactive selection of a new preferred parent. The proposed scheme also implements load balancing among stationary nodes for leaf-node allocation: static nodes with more leaf nodes are restricted from participating in the election of a new preferred parent. The performance of BE-RPL is assessed using the COOJA simulator. It improves the energy use, network control overhead, frame acknowledgment ratio, and packet delivery ratio of the network.
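A minimal sketch of the reactive preferred-parent re-selection described above, assuming hypothetical per-candidate fields (rssi, is_static, leaf_count) and illustrative thresholds; the actual BE-RPL metrics and timer handling are not specified here.

```python
# Hypothetical sketch: reactive preferred-parent re-selection with leaf-count balancing.
# Thresholds and field names are illustrative assumptions, not the paper's exact values.

RSSI_BREAK_THRESHOLD = -90   # dBm; below this the current link is treated as broken
MAX_LEAF_LOAD = 5            # static parents already serving this many leaves are excluded

def select_preferred_parent(current_parent, candidates):
    """Return a new preferred parent when the current link degrades, else keep it."""
    if current_parent is not None and current_parent["rssi"] >= RSSI_BREAK_THRESHOLD:
        return current_parent  # link still healthy: no reaction needed

    # Exclude overloaded static parents so leaf nodes spread across the topology.
    eligible = [c for c in candidates
                if c["is_static"] and c["leaf_count"] < MAX_LEAF_LOAD]
    if not eligible:
        eligible = candidates  # fall back rather than leaving the node orphaned

    # Prefer the strongest link; ties broken by the lightest-loaded parent.
    return max(eligible, key=lambda c: (c["rssi"], -c["leaf_count"]))

if __name__ == "__main__":
    cands = [{"id": 1, "rssi": -95, "is_static": True, "leaf_count": 2},
             {"id": 2, "rssi": -70, "is_static": True, "leaf_count": 6},
             {"id": 3, "rssi": -75, "is_static": True, "leaf_count": 1}]
    print(select_preferred_parent({"id": 1, "rssi": -95}, cands)["id"])  # -> 3
```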
Real-time applications based on Wireless Sensor Network (WSN) technologies are quickly increasing due to intelligent surroundings. Among the most significant resources in a WSN are battery power and security. Clustering strategies improve the power factor and secure the WSN environment. Forwarding data in a WSN consumes considerable energy. Although numerous clustering methods have been developed to reduce energy consumption, there is still a risk of unequal load balancing, resulting in a decrease in the network's lifetime due to network inequalities and weaker security. These problems arise because of the cluster head's limited life span. Cluster heads (CHs) are in charge of all activities and control intra-cluster and inter-cluster interactions. The proposed method uses a Lifetime-centric load balancing mechanism (LCLBM) and Cluster-based energy optimization using a mobile sink algorithm (CEOMS). LCLBM emphasizes the selection of CHs, system architectures, and the optimal distribution of CHs. In addition, LCLBM is extended with an assistant cluster head (ACH) for load balancing. Power consumption, communication latency, the frequency of failing nodes, high security, and one-way delay are essential variables to consider while evaluating LCLBM. CEOMS chooses a cluster leader based on the influence of these parameters on the energy balance of the WSN. According to simulated findings, the suggested LCLBM-CEOMS method increases cluster-head selection self-adaptability, improves the network's lifetime, decreases data latency, and balances network capacity.
Sensors are considered important elements of electronic devices. In many applications and services, Wireless Sensor Networks (WSNs) are involved in significant data sharing, with data delivered to the sink node in an energy-efficient manner using multi-hop communication. However, a major challenge in WSNs is that nodes have limited battery resources, so monitoring the rate of energy consumption is essential. Reducing energy consumption can effectively increase the network lifetime. To this end, clustering methods are widely used to optimize the rate of energy consumption among the sensor nodes. In this regard, this paper derives a novel model called Improved Load-Balanced Clustering for Energy-Aware Routing (ILBC-EAR), which mainly concentrates on optimal energy utilization with a load-balanced process among cluster heads and member nodes. To provide an equal rate of energy consumption among nodes, the dimensions of the formed clusters are measured. Moreover, the model develops a Finest Routing Scheme based on Load-Balanced Clustering to transmit the sensed information to the sink or base station. The evaluation results show that the derived energy-aware model attains a higher lifetime than other works and also achieves a balanced energy rate among head nodes. Additionally, the model provides higher throughput and minimal delay in delivering data packets.
A low-Earth-orbit (LEO) satellite network can provide full-coverage access services worldwide and is an essential candidate for future 6G networking. However, the large variability of the geographic distribution of the Earth's population leads to an uneven distribution of access-service volume. Moreover, the limited resources of satellites are far from sufficient to serve the traffic in hotspot areas. To enhance the forwarding capability of satellite networks, we first assess how hotspot areas under different load cases and spatial scales affect the overall network throughput of an LEO satellite network. Then, we propose a multi-region cooperative traffic scheduling algorithm. The algorithm migrates low-grade traffic from hotspot areas to coldspot areas for forwarding, significantly increasing the overall throughput of the satellite network while sacrificing some end-to-end forwarding latency. This algorithm can utilize global satellite resources and improve the utilization of network resources. We model the cooperative multi-region scheduling of large-scale LEO satellites. Based on the model, we build a system testbed using OMNET++ to compare the proposed method with existing techniques. The simulations show that our proposed method can reduce the packet loss probability by 30% and improve the resource utilization ratio by 3.69%.
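A toy sketch of the hotspot-to-coldspot migration idea described above; the region names, capacities, and the two-priority traffic split are illustrative assumptions rather than the paper's model.

```python
# Hypothetical sketch: offload low-grade traffic from overloaded (hotspot) regions
# to regions with spare capacity, at the cost of longer forwarding paths.

def schedule_regions(regions):
    """regions: dict name -> {'capacity', 'high', 'low'} traffic volumes (same units)."""
    plan = []  # (src_region, dst_region, volume) migration decisions
    hot = [r for r, v in regions.items() if v["high"] + v["low"] > v["capacity"]]
    cold = [r for r, v in regions.items() if v["high"] + v["low"] < v["capacity"]]
    for src in hot:
        overload = regions[src]["high"] + regions[src]["low"] - regions[src]["capacity"]
        movable = min(overload, regions[src]["low"])  # only low-grade traffic migrates
        for dst in cold:
            if movable <= 0:
                break
            spare = regions[dst]["capacity"] - regions[dst]["high"] - regions[dst]["low"]
            moved = min(movable, spare)
            if moved > 0:
                regions[src]["low"] -= moved
                regions[dst]["low"] += moved
                movable -= moved
                plan.append((src, dst, moved))
    return plan

if __name__ == "__main__":
    demo = {"Europe": {"capacity": 100, "high": 80, "low": 40},
            "Pacific": {"capacity": 100, "high": 10, "low": 20}}
    print(schedule_regions(demo))  # -> [('Europe', 'Pacific', 20)]
```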
Unbalanced traffic distribution in cellular networks results in congestion and degrades spectrum efficiency. To tackle this problem, we propose an Unmanned Aerial Vehicle (UAV)-assisted wireless network in which the UAV acts as an aerial relay to divert some traffic from the overloaded cell to its adjacent underloaded cell. To fully exploit its potential, we jointly optimize the UAV position, user association, spectrum allocation, and power allocation to maximize the sum-log-rate of all users in two adjacent cells. To tackle the complicated joint optimization problem, we first design a genetic-based algorithm to optimize the UAV position. Then, we simplify the problem by theoretical analysis and devise a low-complexity algorithm according to the branch-and-bound method, so as to obtain the optimal user association and spectrum allocation schemes. We further propose an iterative power allocation algorithm based on the sequential convex approximation theory. The simulation results indicate that the proposed UAV-assisted wireless network is superior to the terrestrial network in both utility and throughput, and the proposed algorithms can substantially improve the network performance in comparison with the other schemes.
This paper focuses on the scheduling problem of workflow tasks that exhibit interdependencies. Unlike independent batch tasks, workflows typically consist of multiple subtasks with intrinsic correlations and dependencies. This necessitates the distribution of the various computational tasks to appropriate computing node resources in accordance with task dependencies to ensure the smooth completion of the entire workflow. Workflow scheduling must consider an array of factors, including task dependencies, availability of computational resources, and the schedulability of tasks. Therefore, this paper delves into the distributed graph database workflow task scheduling problem and proposes a workflow scheduling methodology based on deep reinforcement learning (DRL). The method optimizes the maximum completion time (makespan) and response time of workflow tasks, aiming to enhance the responsiveness of workflow tasks while ensuring the minimization of the makespan. The experimental results indicate that the Q-learning Deep Reinforcement Learning (Q-DRL) algorithm markedly diminishes the makespan and refines the average response time within distributed graph database environments. In quantifying makespan, Q-DRL achieves mean reductions of 12.4% and 11.9% over the established First-fit and Random scheduling strategies, respectively. Additionally, Q-DRL surpasses the performance of both the DRL-Cloud and Improved Deep Q-learning Network (IDQN) algorithms, with improvements standing at 4.4% and 2.6%, respectively. With reference to average response time, the Q-DRL approach exhibits significantly enhanced performance in the scheduling of workflow tasks, decreasing the average by 2.27% and 4.71% when compared to IDQN and DRL-Cloud, respectively. The Q-DRL algorithm also demonstrates a notable increase in the efficiency of system resource utilization, reducing the average idle rate by 5.02% and 9.30% in comparison to IDQN and DRL-Cloud, respectively. These findings support the assertion that Q-DRL not only upholds a lower average idle rate but also effectively curtails the average response time, thereby substantially improving processing efficiency and optimizing resource utilization within distributed graph database systems.
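A compact, generic Q-learning sketch of the kind of scheduler described above, assuming a toy state (the index of the next ready subtask) and one action per compute node; the paper's actual state encoding, reward shaping, and network architecture are not reproduced here.

```python
# Hypothetical sketch: tabular Q-learning that assigns each ready workflow subtask
# to one of several nodes, rewarding short completion times (a stand-in for the
# makespan/response-time objectives discussed above).
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def train(tasks, nodes, episodes=200):
    """tasks: list of work amounts; nodes: list of node speeds."""
    q = defaultdict(float)  # (task_index, node_index) -> value
    for _ in range(episodes):
        finish = [0.0] * len(nodes)          # per-node busy-until times
        for i, work in enumerate(tasks):     # subtasks released in dependency order
            if random.random() < EPSILON:
                a = random.randrange(len(nodes))
            else:
                a = max(range(len(nodes)), key=lambda n: q[(i, n)])
            finish[a] += work / nodes[a]
            reward = -finish[a]              # penalise late completion of this subtask
            nxt = max((q[(i + 1, n)] for n in range(len(nodes))), default=0.0)
            q[(i, a)] += ALPHA * (reward + GAMMA * nxt - q[(i, a)])
    return q

if __name__ == "__main__":
    q = train(tasks=[4, 2, 6, 1], nodes=[1.0, 2.0])
    print([max(range(2), key=lambda n: q[(i, n)]) for i in range(4)])  # learned assignment
```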
Task scheduling plays a key role in effectively managing and allocating computing resources to meet various computing tasks in a cloud computing environment. Short execution time and low load imbalance are challenges for some algorithms in resource scheduling scenarios. In this work, the Hierarchical Particle Swarm Optimization-Evolutionary Artificial Bee Colony Algorithm (HPSO-EABC) is proposed, which hybridizes our presented Evolutionary Artificial Bee Colony (EABC) and Hierarchical Particle Swarm Optimization (HPSO) algorithms. The HPSO-EABC algorithm incorporates the advantages of both the HPSO and the EABC algorithm. Comprehensive testing, including evaluations of algorithm convergence speed, resource execution time, load balancing, and operational costs, has been done. The results indicate that the EABC algorithm exhibits greater parallelism compared to the Artificial Bee Colony algorithm. Compared with the Particle Swarm Optimization algorithm, the HPSO algorithm not only improves the global search capability but also effectively mitigates getting stuck in local optima. As a result, the hybrid HPSO-EABC algorithm demonstrates significant improvements in terms of stability and convergence speed. Moreover, it exhibits enhanced resource scheduling performance in both homogeneous and heterogeneous environments, effectively reducing execution time and cost, which is also verified by the ablation experiments.
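A plain particle-swarm sketch of a task-to-VM mapping that minimizes makespan, to illustrate the swarm-based scheduling idea above; the hierarchical structure and bee-colony refinements of HPSO-EABC are omitted, and all parameters are illustrative.

```python
# Hypothetical sketch: plain PSO for a task-to-VM mapping, minimising makespan.
import random

def makespan(mapping, tasks, vm_speeds):
    load = [0.0] * len(vm_speeds)
    for t, vm in zip(tasks, mapping):
        load[vm] += t / vm_speeds[vm]
    return max(load)

def pso_schedule(tasks, vm_speeds, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim, n_vm = len(tasks), len(vm_speeds)
    pos = [[random.uniform(0, n_vm - 1) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    decode = lambda p: [min(n_vm - 1, max(0, round(x))) for x in p]  # continuous -> VM index
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: makespan(decode(p), tasks, vm_speeds))[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if makespan(decode(pos[i]), tasks, vm_speeds) < makespan(decode(pbest[i]), tasks, vm_speeds):
                pbest[i] = pos[i][:]
                if makespan(decode(pbest[i]), tasks, vm_speeds) < makespan(decode(gbest), tasks, vm_speeds):
                    gbest = pbest[i][:]
    return decode(gbest)

if __name__ == "__main__":
    print(pso_schedule(tasks=[5, 3, 8, 2, 7], vm_speeds=[1.0, 2.0]))
```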
Cloud computing has the ability to provide on-demand access to a shared resource pool. It has completely changed the way businesses are managed, applications are implemented, and services are provided. The rise in popularity has led to a significant increase in user demand for services. However, in cloud environments efficient load balancing is essential to ensure optimal performance and resource utilization. This systematic review provides a detailed description of load balancing techniques, including static and dynamic load balancing algorithms. Specifically, metaheuristic-based dynamic load balancing algorithms are identified as the optimal solution in the case of increased traffic. In a cloud-based context, this paper describes load balancing measurements, including the benefits and drawbacks associated with the selected load balancing techniques. It also summarizes the algorithms based on implementation, time complexity, adaptability, associated issue(s), and targeted QoS parameters. Additionally, the analysis evaluates the tools and instruments utilized in each investigated study. Moreover, a comparative analysis among static, traditional dynamic, and metaheuristic algorithms based on response time, using the CloudSim simulation tool, is also performed. Finally, the key open problems and potential directions for state-of-the-art metaheuristic-based approaches are addressed.
With the continuous expansion of the data center network scale, changing network requirements, and increasing pressure on network bandwidth, the traditional network architecture can no longer meet people's needs. The development of software defined networks has brought new opportunities and challenges to future networks. The data and control separation characteristics of SDN improve the performance of the entire network. Researchers have integrated the SDN architecture into data centers to improve network resource utilization and performance. This paper first introduces the basic concepts of SDN and data center networks. Then it discusses SDN-based load balancing mechanisms for data centers from different perspectives. Finally, it summarizes the study of SDN-based load balancing mechanisms and looks forward to its development trends.
Traditional traffic management techniques appear to be incompetent in complex data center networks, so this paper proposes a load balancing strategy for Software Defined Networks (SDN) based on Long Short-Term Memory (LSTM) and quantum annealing, which dynamically predicts the traffic and comprehensively considers the current and predicted load of the network in order to select the optimal forwarding path and balance the network load. Experiments have demonstrated that the algorithm achieves significant improvement in both system throughput and average packet loss rate, thereby improving network quality of service.
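A toy sketch of path selection that combines current and predicted link load, as described above; a simple exponential-smoothing forecast stands in for the LSTM predictor, and the weights are illustrative assumptions.

```python
# Hypothetical sketch: choose a forwarding path by combining current and predicted link
# load. Exponential smoothing is a placeholder for the LSTM traffic predictor.

ALPHA = 0.5                 # smoothing factor for the stand-in predictor
W_NOW, W_PRED = 0.6, 0.4    # illustrative weighting of current vs. predicted load

def predict_next(history):
    """One-step forecast from a load history (placeholder for the LSTM)."""
    forecast = history[0]
    for x in history[1:]:
        forecast = ALPHA * x + (1 - ALPHA) * forecast
    return forecast

def pick_path(paths):
    """paths: dict name -> list of per-link load histories; return the least-loaded path."""
    def cost(links):
        # A path is as loaded as its most loaded link, now and in the forecast.
        now = max(h[-1] for h in links)
        pred = max(predict_next(h) for h in links)
        return W_NOW * now + W_PRED * pred
    return min(paths, key=lambda name: cost(paths[name]))

if __name__ == "__main__":
    paths = {"p1": [[0.2, 0.4, 0.8], [0.1, 0.1, 0.2]],   # rising load on one link
             "p2": [[0.5, 0.5, 0.5], [0.4, 0.4, 0.4]]}
    print(pick_path(paths))  # -> p2
```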
In this paper, a sender-initiated protocol is applied which uses a fuzzy logic control method to improve computer network performance by balancing loads among computers. This new model devises a sender-initiated protocol for load transfer to achieve load balancing. Groups are formed and every group has a node called a designated representative (DR). During load transfer, loads are transferred using the DR in each group to achieve load balancing. The simulation results show that the performance of the proposed protocol is better than that of the compared conventional method. The protocol is also more stable than the method without the fuzzy logic control.
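A minimal sketch of a sender-initiated fuzzy transfer decision in the spirit of the protocol above; the membership functions, rule, and firing threshold are assumptions for illustration, not the paper's rule base.

```python
# Hypothetical sketch: a sender-initiated transfer decision using two fuzzy sets over
# the local load ("high") and a candidate receiver's load ("low").

def mu_high(load):            # membership of "load is high", load in [0, 1]
    return max(0.0, min(1.0, (load - 0.5) / 0.4))

def mu_low(load):             # membership of "load is low"
    return max(0.0, min(1.0, (0.5 - load) / 0.4))

def should_transfer(sender_load, receiver_load):
    """Fire the rule 'IF sender high AND receiver low THEN transfer' (min as AND)."""
    firing = min(mu_high(sender_load), mu_low(receiver_load))
    return firing > 0.5

if __name__ == "__main__":
    print(should_transfer(0.95, 0.10))  # True: clearly overloaded sender, idle receiver
    print(should_transfer(0.60, 0.45))  # False: difference too small to justify a move
```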
Software Defined Networking (SDN) provides flexible network management by decoupling the control plane and data plane. However, such separation introduces issues regarding the reliability of the control plane and controller load imbalance in distributed SDN networks, which cause low network stability and poor controller performance. This paper proposes a Reliable and Load balance-aware Multi-controller Deployment (RLMD) strategy to address the above problems. Firstly, we establish a multiple-controller network model and define the relevant parameters for RLMD. Then, we design the corresponding algorithms to implement this strategy. By weighing node efficiency and path quality, the Controller Placement Selection (CPS) algorithm is introduced to explore reliable deployments of the controllers. On this basis, we design the Multiple Domain Partition (MDP) algorithm to allocate switches to controllers according to node attractability and the controller load balancing rate, which realizes reasonable domain planning. Finally, the simulations show that, compared with typical strategies, RLMD performs better in improving the reliability of the control plane and balancing the distribution of controller loads.
There are two key issues in distributed intrusion detection systems: maintaining the load balance of the system and protecting data integrity. To address these issues, this paper proposes a new distributed intrusion detection model for big data based on nondestructive partitioning and balanced allocation. A data allocation strategy based on capacity and workload is introduced to achieve local load balance, and a dynamic load adjustment strategy is adopted to maintain the global load balance of the cluster. Moreover, data integrity is protected by using session reassembly and session partitioning. The simulation results show that the new model enjoys favorable advantages such as good load balance, a higher detection rate, and higher detection efficiency.
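A minimal sketch of capacity- and workload-based allocation plus a global rebalancing check, in the spirit of the two strategies described above; the field names and the 20% tolerance band are assumptions, not the paper's parameters.

```python
# Hypothetical sketch: allocate an incoming data block to the detection node whose
# workload is smallest relative to its capacity, then flag nodes that drift too far
# from the cluster average for dynamic adjustment.

def allocate(block_size, nodes):
    """nodes: list of {'capacity', 'workload'}; returns the index chosen for the block."""
    idx = min(range(len(nodes)), key=lambda i: nodes[i]["workload"] / nodes[i]["capacity"])
    nodes[idx]["workload"] += block_size
    return idx

def needs_adjustment(nodes, band=0.2):
    """Global check: flag nodes whose relative load strays beyond +/-band of the mean."""
    ratios = [n["workload"] / n["capacity"] for n in nodes]
    mean = sum(ratios) / len(ratios)
    return [i for i, r in enumerate(ratios) if abs(r - mean) > band]

if __name__ == "__main__":
    cluster = [{"capacity": 100, "workload": 70},
               {"capacity": 200, "workload": 60},
               {"capacity": 100, "workload": 10}]
    print(allocate(25, cluster))        # -> 2 (lowest workload/capacity ratio)
    print(needs_adjustment(cluster))    # indices that trigger dynamic adjustment
```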
As a new networking paradigm, Software-Defined Networking (SDN) enables us to cope with the limitations of traditional networks. SDN uses a controller that has a global view of the network and switch devices that act as packet forwarding hardware, known as "OpenFlow switches". Since a load balancing service is essential to distribute workload across servers in data centers, we propose an effective load balancing scheme in SDN using a genetic programming approach, called Genetic Programming based Load Balancing (GPLB). We formulate the problem as finding a path: 1) with the best bottleneck switch, the bottleneck switch of a path being the switch with the lowest capacity along that path; 2) that is the shortest; and 3) that requires the fewest possible operations. For the purpose of choosing the real-time least-loaded path, GPLB immediately calculates the integrated load of paths based on the information it receives from the SDN controller. Hence, in this design, the controller sends the load information of each path to the load balancing algorithm periodically and the load balancing algorithm returns the least-loaded path to the controller. In this paper, we use the Mininet emulator and the OpenDaylight controller to evaluate the effectiveness of GPLB. The simulative study of GPLB shows a big improvement in performance metrics, and the latency and the jitter are minimized. GPLB also achieves the maximum throughput in comparison with related works and performs better in heavy traffic situations. The results show that our model behaves smartly while not introducing further overhead.
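A toy sketch that scores candidate paths on the three criteria listed above (bottleneck capacity, hop count, operation count). GPLB evolves its scoring with genetic programming; the fixed weighted sum and field names here are only illustrative stand-ins.

```python
# Hypothetical sketch: rank candidate paths by bottleneck capacity, hop count and
# operation count with a simple weighted sum (a stand-in for the GP-evolved function).

W_BOTTLENECK, W_HOPS, W_OPS = 1.0, 0.3, 0.1

def bottleneck(path):
    """Remaining capacity of the most constrained switch along the path."""
    return min(sw["free_capacity"] for sw in path["switches"])

def score(path):
    return (W_BOTTLENECK * bottleneck(path)
            - W_HOPS * len(path["switches"])
            - W_OPS * path["operations"])

def least_loaded_path(paths):
    return max(paths, key=score)

if __name__ == "__main__":
    candidates = [
        {"name": "A", "switches": [{"free_capacity": 4}, {"free_capacity": 9}], "operations": 3},
        {"name": "B", "switches": [{"free_capacity": 6}, {"free_capacity": 6}, {"free_capacity": 7}], "operations": 2},
    ]
    print(least_loaded_path(candidates)["name"])  # -> "B": better bottleneck outweighs the extra hop
```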
Cloud computing is a collection of disparate resources or services, a web of massive infrastructures, which is aimed at achieving maximum utilization with higher availability at a minimized cost. One of the most attractive applications for cloud computing is the concept of distributed information processing. Security, privacy, energy saving, reliability, and load balancing are the major challenges facing cloud computing and most information technology innovations. Load balancing is the process of redistributing workload among all nodes in a network to improve resource utilization and job response time while avoiding overloading some nodes when other nodes are underloaded or idle, and it remains a major challenge. Thus, this research aims to design a novel load balancing system in a cloud computing environment. The research is based on the modification of existing approaches, namely particle swarm optimization (PSO), honeybee, and ant colony optimization (ACO), combined with mathematical expressions to form a novel approach called P-ACOHONEYBEE. The experiments were conducted on response time and throughput. The response times of honeybee, PSO, SASOS, round-robin, PSO-ACO, and P-ACOHONEYBEE are 2791, 2780, 2784, 2767, 2727, and 2599 ms respectively. The throughputs of honeybee, PSO, SASOS, round-robin, PSO-ACO, and P-ACOHONEYBEE are 7451, 7425, 7398, 7357, 7387, and 7482 bps respectively. It is observed that the P-ACOHONEYBEE approach produces the lowest response time, high throughput, and overall improved performance for the 10 nodes. The research helps manage the imbalance drawback by maximizing throughput and reducing response time with scalability and reliability.
The aerodynamic performances of a passenger car and a box car with different heights of windbreak walls under strong wind were studied using numerical simulations, and the changes of the aerodynamic side force, lift force, and overturning moment with different wind speeds and wall heights were calculated. According to the principle of static moment balance of vehicles, the overturning coefficients of trains with different wind speeds and wall heights were obtained. Based on the influence of wind speed and wall height on the aerodynamic performance and the overturning stability of trains, a method for determining the load balance ranges for train operation safety was proposed, which makes the overturning coefficient lie in a nearly closed interval. The criterion min(|A1|+|A2|), s.t. |A1|→|A2| (where A1 refers to the downwind overturning coefficient and A2 refers to the upwind overturning coefficient) was found. This minimum helps to lower the wall height as much as possible while guaranteeing the operation safety of various types of trains under strong wind. This method has been used for the construction and improvement of the windbreak walls along the Lanzhou–Xinjiang railway (from Lanzhou to Urumqi, China).
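A toy sketch of applying the min(|A1|+|A2|) criterion above to pick a wall height; the coefficient values, safety limit, and balance tolerance are made-up placeholders, since in practice A1(h) and A2(h) come from the simulations and static-moment analysis described in the abstract.

```python
# Hypothetical sketch: pick the lowest windbreak-wall height whose downwind (A1) and
# upwind (A2) overturning coefficients are balanced and within an assumed safety limit.

SAFETY_LIMIT = 0.8     # assumed admissible magnitude of either coefficient
BALANCE_TOL = 0.05     # how close |A1| and |A2| must be (the |A1| -> |A2| condition)

def choose_wall_height(candidates):
    """candidates: list of (height_m, A1, A2); return the best admissible entry."""
    admissible = [(h, a1, a2) for h, a1, a2 in candidates
                  if abs(a1) <= SAFETY_LIMIT and abs(a2) <= SAFETY_LIMIT
                  and abs(abs(a1) - abs(a2)) <= BALANCE_TOL]
    # Among admissible heights, minimise |A1| + |A2|; prefer the lower wall on ties.
    return min(admissible, key=lambda c: (abs(c[1]) + abs(c[2]), c[0]), default=None)

if __name__ == "__main__":
    table = [(2.0, 0.95, -0.10),   # wall too low: downwind coefficient unsafe
             (3.0, 0.42, -0.38),   # balanced and safe
             (4.0, 0.15, -0.55)]   # wall too high: the upwind side now dominates
    print(choose_wall_height(table))  # -> (3.0, 0.42, -0.38)
```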
Recently, sharded blockchains have attracted more and more attention. Their inherited immutability, decentralization, and improved scalability effectively address the trust issue of data sharing in the Internet of Things (IoT). Nevertheless, the traditional random allocation between validator groups and transaction pools ignores the differences among shards, which reduces the overall system performance due to the imbalance between computing capacity and transaction load. To solve this problem, a load balance optimization framework for sharded-blockchain-enabled IoT is proposed, where the allocation between the validator groups and transaction pools is implemented reasonably by deep reinforcement learning (DRL). Specifically, based on the theoretical analysis of the intra-shard consensus and the final system consensus, the optimization of system performance is formulated as a Markov decision process (MDP), and the allocation of the transaction pools, the block size, and the block interval are jointly trained in the DRL agent. The simulation results show that the proposed scheme improves the scalability of the sharded blockchain system for IoT.
In recent times, the evolution of blockchain technology has received huge attention from the research community due to its versatile applications and unique security features. The IoT has shown wide adoption in various applications including smart cities, healthcare, trade, and business. Among these applications, fitness applications have been widely considered for smart fitness systems. The users of fitness systems are increasing at a high rate, so gym providers are constantly extending their fitness facilities. Thus, scheduling such a huge number of requests for fitness exercise is a big challenge. Secondly, user fitness data is critical, so securing the user fitness data from unauthorized access is also challenging. To overcome these issues, this work proposes a blockchain-based load-balanced task scheduling approach. A thorough analysis has been performed to investigate the applications of IoT in the fitness industry and various scheduling approaches. The proposed scheduling approach aims to schedule the requests of fitness users in a load-balanced way that maximizes the acceptance rate of the users' requests and improves resource utilization. The performance of the proposed task scheduling approach is compared with state-of-the-art approaches concerning the average resource utilization and task rejection ratio. The obtained results confirm the efficiency of the proposed scheduling approach. For investigating the performance of the blockchain, various experiments are performed using Hyperledger Caliper concerning latency, throughput, and resource utilization. The Solo approach has shown an improvement of 32% and 26% in throughput as compared to the Raft and Solo-Raft approaches, respectively. The obtained results assert that the proposed architecture is applicable to resource-constrained IoT applications and is extensible for different IoT applications.
The backup requirement of data centres is tremendous as the size of data created by humans is massive and increasing exponentially. Single-node deduplication cannot meet the increasing backup requirement of data centres. A feasible way is a deduplication cluster, which can meet it by adding storage nodes. The data routing strategy is the key to the deduplication cluster. DRSS (data routing strategy using semantics) considerably improves the storage utilization of the MCS (minimum chunk signature) data routing strategy. However, for large deduplication clusters, the load balance of DRSS is worse than that of MCS. To improve the load balance of DRSS, we propose a load balance strategy for DRSS, namely DRSSLB. When a node is overloaded, DRSSLB iteratively migrates the current smallest container of the node to the smallest node in the deduplication cluster until the overloaded node becomes non-overloaded. A container is the minimum unit of data migration. Similar files sharing the same features or file names are stored in the same container. This ensures that similar data groups are still on the same node after rebalancing the nodes. We use a dataset from the real world to evaluate DRSSLB. Experimental results show that, for various numbers of nodes in the deduplication cluster, the data skew of DRSSLB stays under the predefined value while the storage utilization of DRSSLB remains nearly unchanged compared with DRSS, with a low penalty (the data migration rate is only 6.5% when the number of nodes is 64).
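A minimal sketch of the DRSSLB rebalancing loop described above: while a node is overloaded, migrate its smallest container to the least-loaded node. Representing containers as plain sizes and defining overload as load above the cluster mean by a fixed factor are assumptions for illustration.

```python
# Hypothetical sketch of the DRSSLB rebalancing loop: move the overloaded node's
# smallest container to the least-loaded node until the node is no longer overloaded.

OVERLOAD_FACTOR = 1.2   # assumed threshold relative to the cluster-mean load

def node_load(node):
    return sum(node)            # a node is just a list of container sizes here

def rebalance(nodes):
    """nodes: list of lists of container sizes; migrates containers in place."""
    migrations = []
    mean = sum(map(node_load, nodes)) / len(nodes)
    for i, node in enumerate(nodes):
        while node and node_load(node) > OVERLOAD_FACTOR * mean:
            container = min(node)                                   # smallest container
            target = min(range(len(nodes)), key=lambda j: node_load(nodes[j]))
            if target == i:
                break                                               # nowhere better to go
            node.remove(container)
            nodes[target].append(container)
            migrations.append((container, i, target))
    return migrations

if __name__ == "__main__":
    cluster = [[40, 30, 20, 10], [15], [5, 5]]
    print(rebalance(cluster))               # (container, from_node, to_node) moves
    print([sorted(n) for n in cluster])     # loads after rebalancing
```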
If the draught of each mill stand is limited by the forced bite condition for a compact continuous mill, the rolling load difference between one mill stand and another is very big. If the deformation is regulated so that the relative load of each mill stand is approximately the same, the productive capacity of the compact continuous mill can be brought into full play, and the safe running and smooth rolling of the mill can also be ensured.