Numerous clothing enterprises in the market have relatively low assembly line planning efficiency due to insufficient optimization of bottleneck stations. As a result, the production efficiency of these enterprises is not high, and production organization falls short of expectations. Aiming at the problem of flexible process route planning in garment workshops, a multi-objective genetic algorithm (MOGA) is proposed to solve the assembly line balance optimization problem and minimize the machine adjustment path. The encoding adopts an object-oriented path representation, and the initial population is generated by random topological sorting based on an in-degree selection mechanism. The algorithm tailors the mutation and crossover operations to the characteristics of the clothing process to avoid generating invalid offspring. During iteration, bottleneck stations are optimized by reasonable process splitting, and process allocation respects each station's strict limit on the number of machines in order to improve line balancing efficiency. The effectiveness and feasibility of the algorithm are demonstrated by the analysis of clothing cases. Compared with manual process allocation, the line balancing efficiency of MOGA is increased by more than 15%, and the minimum machine adjustment path is achieved. The results are in line with the expected optimization effect.
Funding: supported by the Key R&D Project of Zhejiang Province (2018C01005), http://kjt.zj.gov.cn/.
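The abstract gives no implementation details; as a minimal sketch of random topological sorting with an in-degree selection mechanism (the function names and the toy precedence graph are invented for illustration), each next operation is drawn uniformly from the set whose predecessors are already scheduled, so repeated calls yield a diverse, precedence-feasible initial population:

```python
import random

def random_topological_order(num_ops, edges, rng=random):
    # edges: (u, v) pairs meaning operation u must precede operation v.
    indegree = [0] * num_ops
    succ = [[] for _ in range(num_ops)]
    for u, v in edges:
        indegree[v] += 1
        succ[u].append(v)
    ready = [op for op in range(num_ops) if indegree[op] == 0]
    order = []
    while ready:
        op = rng.choice(ready)      # in-degree selection: any currently ready op
        ready.remove(op)
        order.append(op)
        for nxt in succ[op]:        # release successors whose in-degree hits zero
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != num_ops:
        raise ValueError("precedence graph contains a cycle")
    return order

# Toy precedence graph; one chromosome per call builds the initial population.
edges = [(0, 2), (1, 2), (2, 3), (1, 4), (4, 5), (3, 5)]
population = [random_topological_order(6, edges) for _ in range(50)]
```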
In recent times, the evolution of blockchain technology has attracted huge attention from the research community due to its versatile applications and unique security features. The IoT has seen wide adoption in various applications, including smart cities, healthcare, trade, and business. Among these applications, fitness applications have been widely considered for smart fitness systems. The number of fitness system users is increasing at a high rate, so gym providers are constantly extending their fitness facilities; scheduling such a huge number of fitness exercise requests is thus a big challenge. Secondly, user fitness data is critical, so securing it from unauthorized access is also challenging. To overcome these issues, this work proposes a blockchain-based load-balanced task scheduling approach. A thorough analysis is performed of the applications of IoT in the fitness industry and of various scheduling approaches. The proposed scheduling approach schedules the requests of fitness users in a load-balanced way that maximizes the acceptance rate of users' requests and improves resource utilization. Its performance is compared with state-of-the-art approaches in terms of average resource utilization and task rejection ratio, and the results confirm its efficiency. To investigate the performance of the blockchain, various experiments are performed using Hyperledger Caliper concerning latency, throughput, and resource utilization. The Solo approach shows throughput improvements of 32% and 26% over the Raft and Solo-Raft approaches, respectively. The results assert that the proposed architecture is applicable to resource-constrained IoT applications and is extensible to different IoT applications.
Funding: supported by the Energy Cloud R&D Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (2019M3F2A1073387), and by an Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-01456, AutoMaTa: Autonomous Management framework based on artificial intelligent Technology for adaptive and disposable IoT). Correspondence should be addressed to Do-hyeun Kim.
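The paper's concrete scheduler is not reproduced here; the following sketch shows one plausible load-balanced, acceptance-maximizing assignment of fitness requests (the durations, capacities, and all names are assumptions for illustration, not the paper's algorithm):

```python
import heapq

def schedule_requests(requests, capacities):
    # requests: session durations (minutes); capacities: per-resource minutes.
    heap = [(0.0, r) for r in range(len(capacities))]   # (current load, resource id)
    heapq.heapify(heap)
    assignment, rejected = {}, []
    for i, duration in enumerate(requests):
        load, r = heapq.heappop(heap)                   # least-loaded resource first
        if load + duration <= capacities[r]:
            assignment[i] = r
            load += duration
        else:
            rejected.append(i)          # even the lightest resource is full
        heapq.heappush(heap, (load, r))
    return assignment, rejected

assignment, rejected = schedule_requests([30, 45, 60, 30, 90], [120, 120, 120])
print(len(rejected) / 5)                # task rejection ratio -> 0.2
```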
Large-scale and diverse businesses on cloud computing platforms bring heavy network traffic to cloud data centers. An unbalanced workload in the cloud data center network easily leads to network congestion, low resource utilization, long delays, low reliability, and low throughput. To improve the utilization efficiency and quality of service (QoS) of the cloud system, and especially to relieve network congestion, we propose MTSS, a multi-path traffic scheduling mechanism based on software-defined networking (SDN). MTSS exploits the data-flow scheduling flexibility of SDN and the multi-path property of the fat-tree topology to improve the traffic balance of the cloud data center network. A heuristic traffic balancing algorithm is presented for MTSS, which periodically monitors network links and dynamically shifts traffic off heavily loaded links to achieve programmable data forwarding and load balancing. The experimental results show that MTSS outperforms the equal-cost multi-path protocol (ECMP) by effectively reducing packet loss rate and delay. In addition, MTSS improves the utilization efficiency, reliability, and throughput of the cloud data center network.
Funding: supported by the National Key Research and Development Program of China (2018YFB1003702), the National Natural Science Foundation of China (61472192), and the Scientific and Technological Support Project (Society) of Jiangsu Province (BE2016776).
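The abstract only outlines the heuristic; a rough sketch of the periodic monitor-and-shift pattern it describes might look like this (the data structures, the 0.8 threshold, and the flow-selection rule are assumptions, not MTSS's published details):

```python
def rebalance(flows, candidate_paths, link_util, threshold=0.8):
    # link_util: link -> utilization in [0, 1]; flows: flow -> (path, demand),
    # where a path is a tuple of links; candidate_paths: flow -> list of
    # equal-cost paths (a fat-tree topology provides many).
    hot = max(link_util, key=link_util.get)     # most utilized link
    if link_util[hot] < threshold:
        return None                             # nothing to shift this round
    for flow, (path, demand) in flows.items():
        if hot not in path:
            continue
        best = min(candidate_paths[flow],
                   key=lambda p: max(link_util[l] for l in p))
        if max(link_util[l] for l in best) + demand < link_util[hot]:
            return flow, best   # hand the new path to the SDN controller to install
    return None
```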
Cloud computing distributes tasks in parallel among various resources. Applications with self-service support and on-demand service are growing rapidly, and for these applications cloud computing allocates resources dynamically via the Internet according to user requirements. Proper resource allocation is vital for fulfilling user requirements; improper allocation results in load imbalance, which leads to severe service issues. Cloud resources are implemented over Internet-connected devices using protocols for storage, communication, and computation. The extensive demand and the lack of an optimal resource allocation scheme make cloud computing more complex. This paper proposes NMDS (Network Manager based Dynamic Scheduling) to achieve a prominent resource allocation scheme for users. The proposed system focuses mainly on dimensionality problems, where conventional methods fail. It introduces a three-threshold classification of tasks by size, STT, MTT, and LTT (small, medium, and large task thresholds), together with task merging to minimize energy consumption and response time. NMDS is compared with the existing Energy-efficient Dynamic Scheduling scheme (EDS) and Decentralized Virtual Machine Migration (DVM). The proposed model achieves excellent resource allocation compared to the other existing models: the obtained results show that it effectively allocates resources and achieves about 94% energy efficiency relative to the other models. The evaluation metrics taken for comparison are energy consumption, mean response time, percentage of resource utilization, and migration.
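As an illustration of the three-threshold task classification and task merging the abstract mentions, here is a minimal sketch; the STT/MTT cut-offs (100 and 500 size units) and the batching rule are invented for the example:

```python
def classify(task_size, stt=100, mtt=500):
    # Bucket a task by size; the thresholds are assumed, not the paper's values.
    if task_size <= stt:
        return "STT"
    if task_size <= mtt:
        return "MTT"
    return "LTT"

def merge_small_tasks(sizes, stt=100):
    # Pack small tasks into batches whose total stays within the STT bound,
    # so each batch is dispatched once, cutting per-task scheduling overhead.
    small = [s for s in sizes if classify(s, stt) == "STT"]
    batches, batch, total = [], [], 0
    for s in small:
        if total + s > stt:
            batches.append(batch)
            batch, total = [], 0
        batch.append(s)
        total += s
    if batch:
        batches.append(batch)
    return batches

print(classify(80), classify(300), classify(900))   # STT MTT LTT
print(merge_small_tasks([40, 30, 50, 20, 90]))      # [[40, 30], [50, 20], [90]]
```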
Task scheduling in highly elastic and dynamic processing environments such as cloud computing has become one of the most discussed problems among researchers. Task scheduling algorithms are responsible for allocating tasks among computing resources for execution, and an inefficient algorithm results in under- or over-utilization of the resources, which in turn degrades services. Therefore, in the proposed work, load balancing is considered an important criterion for task scheduling in a cloud computing environment, as it can help reduce the overhead of this critical decision-oriented process. In this paper, we propose an adaptive genetic-algorithm-based load-balancing (GALB)-aware task scheduling technique that not only yields better resource utilization but also optimizes key performance indicators such as makespan, performance improvement ratio, and degree of imbalance. The concept of adaptive crossover and mutation is used, which better adapts the fittest individuals of the current generation and prevents their elimination. The CloudSim simulator is used to carry out the simulations, and the obtained results establish that the proposed GALB algorithm performs better on all key indicators and outperforms the peer algorithms taken into consideration.
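The abstract does not state the adaptive formulas; a common choice with the same intent (lower crossover/mutation rates for fitter-than-average individuals, so the current best are not eliminated) is the Srinivas-Patnaik style rule sketched below, which may differ from the paper's exact scheme:

```python
def adaptive_rates(f, f_max, f_avg, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    # Fitter-than-average individuals get proportionally lower crossover (pc)
    # and mutation (pm) probabilities, shielding the current best from disruption.
    if f_max == f_avg:          # degenerate population: fall back to full rates
        return k3, k4
    if f >= f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return k1 * scale, k2 * scale
    return k3, k4               # below-average individuals keep high rates

pc, pm = adaptive_rates(f=0.9, f_max=1.0, f_avg=0.6)
print(pc, pm)                   # 0.25 0.125 -- the near-best are barely perturbed
```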
This paper focuses on the scheduling problem of workflow tasks that exhibit interdependencies. Unlike independent batch tasks, workflows typically consist of multiple subtasks with intrinsic correlations and dependencies, which necessitates distributing the computational tasks to appropriate computing node resources in accordance with task dependencies to ensure the smooth completion of the entire workflow. Workflow scheduling must consider an array of factors, including task dependencies, availability of computational resources, and the schedulability of tasks. This paper therefore delves into the distributed graph database workflow task scheduling problem and proposes a workflow scheduling methodology based on deep reinforcement learning (DRL). The method optimizes the maximum completion time (makespan) and response time of workflow tasks, aiming to enhance responsiveness while minimizing the makespan. The experimental results indicate that the Q-learning Deep Reinforcement Learning (Q-DRL) algorithm markedly diminishes the makespan and refines the average response time within distributed graph database environments. For makespan, Q-DRL achieves mean reductions of 12.4% and 11.9% over the established First-fit and Random scheduling strategies, respectively, and surpasses the DRL-Cloud and Improved Deep Q-learning Network (IDQN) algorithms by 4.4% and 2.6%, respectively. For average response time, Q-DRL decreases the average by 2.27% and 4.71% compared to IDQN and DRL-Cloud, respectively. Q-DRL also notably increases the efficiency of system resource utilization, reducing the average idle rate by 5.02% and 9.30% relative to IDQN and DRL-Cloud, respectively. These findings support the assertion that Q-DRL not only maintains a lower average idle rate but also effectively curtails the average response time, substantially improving processing efficiency and optimizing resource utilization within distributed graph database systems.
Funding: funded by the Science and Technology Foundation of State Grid Corporation of China (Grant No. 5108-202218280A-2-397-XG).
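For orientation, the tabular Q-learning update at the core of such a scheduler is sketched below; the state/action encoding and the reward are assumptions (the paper's Q-DRL replaces the table with a neural network):

```python
import random
from collections import defaultdict

Q = defaultdict(float)          # table stands in for the paper's deep network

def epsilon_greedy(state, actions, eps=0.1):
    # Explore occasionally; otherwise pick the node with the best learned value.
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    # Standard Q-learning target: reward plus discounted best next value.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# A reward such as the negative incremental makespan/response time means that
# maximizing the return minimizes completion and response times.
```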
With the continuous expansion of data center network scale, changing network requirements, and increasing pressure on network bandwidth, traditional network architectures can no longer meet current needs. The development of software-defined networking (SDN) has brought new opportunities and challenges to future networks: the separation of the data and control planes in SDN improves the performance of the entire network, and researchers have integrated the SDN architecture into data centers to improve network resource utilization and performance. This paper first introduces the basic concepts of SDN and data center networks, then discusses SDN-based load balancing mechanisms for data centers from different perspectives, and finally summarizes the study of SDN-based load balancing mechanisms and looks forward to its development trends.
With growing numbers of multi-microgrids, electric vehicles, smart homes, and smart cities connected to the Power Distribution Internet of Things (PD-IoT) system, greater computing resources and communication bandwidth are required for power distribution. This can lead to extreme service delay and data congestion when a large volume of data and business arrives in an emergency. This paper presents a service scheduling method based on edge computing to balance the business load of the PD-IoT. The architecture, components, and functional requirements of the PD-IoT with an edge computing platform are proposed, and the structure of the service scheduling system is presented. Further, a novel load balancing strategy and an ant colony algorithm are investigated for the service scheduling method. The validity of the method is evaluated by simulation tests. Results indicate that the mean load balancing ratio is reduced by 99.16% and the optimized offloading links can be acquired within 1.8 iterations. The computing load of the nodes in the edge computing platform can be effectively balanced through the service scheduling.
Funding: supported by the National Natural Science Foundation of China (Grant 61702048).
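The ant colony details are not given in the abstract; a generic sketch of the standard state-transition and pheromone-update rules, adapted to picking an edge node for service offloading, could look like this (alpha, beta, rho, and the heuristic term are assumptions):

```python
import random

def pick_offload_node(pheromone, heuristic, alpha=1.0, beta=2.0, rng=random):
    # Roulette-wheel selection over pheromone^alpha * heuristic^beta, where
    # heuristic[j] might be the inverse of node j's current load.
    weights = [(pheromone[j] ** alpha) * (heuristic[j] ** beta)
               for j in range(len(pheromone))]
    r, acc = rng.random() * sum(weights), 0.0
    for j, w in enumerate(weights):
        acc += w
        if r <= acc:
            return j
    return len(weights) - 1

def update_pheromone(pheromone, chosen, quality, rho=0.5):
    # Evaporate everywhere, then deposit on the chosen offloading link.
    for j in range(len(pheromone)):
        pheromone[j] *= (1 - rho)
    pheromone[chosen] += quality
```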
This paper presents an adaptive gain-scheduled backstepping control (AGSBC) scheme for the balance control of an underactuated mechanical power-line inspection (PLI) robotic system with two degrees of freedom and a single control input. First, a nonlinear dynamic model of the balance adjustment process of the PLI robot is constructed, and the model is linearized at a nominal equilibrium point to overcome the computational infeasibility of the conventional backstepping technique. Second, to solve the generalized stabilization control issue for underactuated systems with multiple equilibrium points, an equilibrium-manifold linearized model is developed using a scheduling variable, and a gain-scheduled backstepping control (GSBC) scheme is constructed to expand the operational area of the controlled system. Finally, an adaptive mechanism is proposed to counteract the impact of external disturbances. The robust stability of the closed-loop system is ensured by the Lyapunov theorem. Simulation results demonstrate the effectiveness and high performance of the proposed scheme compared with other control schemes.
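The model itself is not reproduced in the abstract; in generic notation (not the paper's), the equilibrium-manifold linearization it describes takes the form

```latex
% Equilibrium manifold (x_e(\sigma), u_e(\sigma)) parameterized by the
% scheduling variable \sigma, with f(x_e(\sigma), u_e(\sigma)) = 0:
\dot{\delta x} = A(\sigma)\,\delta x + B(\sigma)\,\delta u, \qquad
A(\sigma) = \left.\frac{\partial f}{\partial x}\right|_{(x_e(\sigma),\,u_e(\sigma))}, \quad
B(\sigma) = \left.\frac{\partial f}{\partial u}\right|_{(x_e(\sigma),\,u_e(\sigma))},
```

with deviation variables $\delta x = x - x_e(\sigma)$ and $\delta u = u - u_e(\sigma)$; the gain-scheduled backstepping law is then designed on this family of linear models so that one controller covers the whole operating range rather than a single equilibrium.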
Energy saving is one of the most important research hotspots, as it reduces operational expenditure and CO2 emissions. Optimal cooling capacity scheduling, in addition to temperature control, can improve energy efficiency. The main contribution of this work is modeling a telecommunication building's fabric cooling load in order to schedule the operation of air conditioners. The time-series data of the fabric cooling load of the building envelope is obtained by simulation using EnergyPlus, the Building Controls Virtual Test Bed (BCVTB), and Matlab. This pre-computed data, together with the other internal thermal loads, is used for scheduling the air conditioners. The energy savings obtained for the whole year are about 4% by simulation and 6% in the field study.
Funding: support and facilities provided by Bharat Sanchar Nigam Limited Chennai Telephones and the Department of Telecommunications, India.
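As a toy illustration of turning a pre-computed cooling load series into an air-conditioner schedule (the unit capacity and load figures are invented; the paper's schedule is driven by the EnergyPlus/BCVTB co-simulation, not shown here):

```python
import math

def ac_schedule(cooling_load_kw, unit_capacity_kw=10.0):
    # Minimum number of AC units to run each hour to cover the load.
    return [math.ceil(load / unit_capacity_kw) for load in cooling_load_kw]

hourly_load = [18.0, 17.5, 21.0, 26.4, 31.2, 29.8]  # toy fabric + internal load, kW
print(ac_schedule(hourly_load))                     # -> [2, 2, 3, 3, 4, 3]
```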
The design of robot controllers is a complex undertaking that must handle several tasks in real time to enable robots to function independently. A distributed robotic control system can be used in real time to resolve various challenges such as localization, motion control, mapping, and route planning, and it can manage different kinds of heterogeneous devices. Designing such a system is challenging because it needs to operate effectively under different hardware configurations and varying computational requirements. For instance, scheduling resources (such as communication channels, computation units, robot chassis, or sensor inputs) to the various system components is an essential requirement for completing tasks on time; resource scheduling is therefore necessary for ensuring effective execution. In this regard, this paper introduces a novel chaotic shell game optimization algorithm (CSGOA) for resource scheduling, called the CSGOA-RS technique, for the distributed robotic control system environment. The CSGOA technique integrates the concept of chaotic maps into the SGO algorithm to enhance overall performance. The CSGOA-RS technique is designed to allocate resources in such a way that transfer time is minimized and resource utilization is increased, and it is applicable even in unpredictable environments where resources must be allotted dynamically based on early estimations. To validate the enhanced performance of the CSGOA-RS technique, a series of simulations has been carried out and the obtained results examined with respect to a selected set of measures. The outcomes highlight the promising performance of the CSGOA-RS technique over the other resource scheduling techniques.
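The specific chaotic map used by CSGOA is not named in the abstract; the logistic map is a common choice, and the sketch below shows how chaotic values can seed a metaheuristic's population (all names, mu = 4, and x0 = 0.7 are assumptions):

```python
def logistic_map(n, x0=0.7, mu=4.0):
    # At mu = 4 the logistic map is fully chaotic on (0, 1), giving
    # better-spread samples than many pseudo-random initializations.
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_initial_positions(pop, dim, lower, upper):
    # Map chaotic values into the search box to seed the SGO population.
    seq = logistic_map(pop * dim)
    return [[lower + seq[i * dim + d] * (upper - lower) for d in range(dim)]
            for i in range(pop)]

print(chaotic_initial_positions(pop=3, dim=2, lower=-5.0, upper=5.0))
```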
Cloud computing technology facilitates computing-intensive applications by providing virtualized resources that can be dynamically provisioned. However, users' requests vary according to different applications' computational needs. These applications can be presented as meta-jobs of user demand. The total processing time of these jobs must account for the data transmission time over the Internet as well as the completion time of the jobs executing on the virtual machines. In this paper, we present a V-heuristics scheduling algorithm for the allocation of virtualized network and computing resources under user constraints, applied within a service-oriented resource broker for job scheduling. The algorithm takes into account both the data transmission time and the computation time related to the virtualized network and virtual machines. The simulation results are compared with three different heuristic algorithms under conventional and virtual network conditions: MCT, Min-Min, and Max-Min. We evaluate these algorithms within a simulated cloud environment on the Abilene network topology, a real physical core network topology. The experimental results show that the V-heuristics scheduling algorithm achieves significant performance gains for a variety of applications in terms of load balance, makespan, average resource utilization, and total processing time.
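In the spirit of the V-heuristic's joint accounting of transfer and computation time, a minimal MCT-style sketch is shown below (the VM record fields 'ready', 'bw', and 'mips' are invented for this example):

```python
def pick_vm(job_size_mi, data_mb, vms):
    # Completion = ready time + data_mb / bandwidth + job size / speed;
    # choose the VM with the earliest completion and mark it busy until then.
    def completion(vm):
        return vm["ready"] + data_mb / vm["bw"] + job_size_mi / vm["mips"]
    best = min(range(len(vms)), key=lambda i: completion(vms[i]))
    vms[best]["ready"] = completion(vms[best])
    return best

vms = [{"ready": 0.0, "bw": 100.0, "mips": 2000.0},
       {"ready": 0.0, "bw": 50.0,  "mips": 4000.0}]
print(pick_vm(8000, 200, vms))  # both finish at 6 s; ties resolve to index 0
```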
In order to improve the concurrent access performance of a web-based spatial computing system in a cluster, a parallel scheduling strategy based on the multi-core environment is proposed, which includes two levels of parallel processing mechanisms: one evenly allocates tasks to each server node in the cluster, and the other implements load balancing inside a server node. Based on this strategy, a new web-based spatial computing model is designed, focusing on a task response ratio calculation method, a request queue buffer mechanism, and a thread scheduling strategy. Experimental results show that the new model can fully exploit the multi-core computing advantage of each server node in a concurrent access environment and improve the average hits per second, average I/O hits, CPU utilization, and throughput. A speed-up ratio analysis of the traditional model and the new one shows that the new model has the best performance. The performance of the multi-core server nodes in the cluster is optimized, and resource utilization and parallel processing capability are enhanced; the more CPU cores available, the higher the parallel processing capability obtained.
Funding: supported by the China Postdoctoral Science Foundation (No. 2014M552115), the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) (No. CUGL140833), and the National Key Technology Support Program of China (No. 2011BAH06B04).
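The paper's task response ratio formula is not given in the abstract; one plausible HRRN-style reading, where longer-waiting requests climb the queue, is sketched below (the names and the formula are assumptions):

```python
import time

def response_ratio(arrival, est_service, now=None):
    # (waiting time + estimated service time) / estimated service time:
    # grows as a request waits, so starvation is avoided.
    now = time.time() if now is None else now
    wait = now - arrival
    return (wait + est_service) / est_service

def next_request(queue, now):
    # queue: list of (arrival_time, est_service_time, payload) tuples;
    # dispatch the request with the highest response ratio first.
    return max(queue, key=lambda q: response_ratio(q[0], q[1], now))
```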