Cloud computing distributes parallel tasks among various resources. Applications with self-service support and on-demand service are growing rapidly. For these applications, cloud computing allocates resources dynamically via the internet according to user requirements. Proper resource allocation is vital for fulfilling user requirements; improper resource allocation results in load imbalance, which leads to severe service issues. Cloud resources are implemented as internet-connected devices that use protocols for storage, communication, and computation. Extensive demand and the lack of an optimal resource-allocation scheme make cloud computing more complex. This paper proposes NMDS (Network Manager based Dynamic Scheduling) to achieve a prominent resource allocation scheme for users. The proposed system mainly focuses on dimensionality problems, which conventional methods fail to address. It introduces a three-threshold classification of tasks by size: STT, MTT, and LTT (small, medium, and large task thresholding). Along with this, task merging enables minimum energy consumption and response time. The proposed NMDS is compared with the existing Energy-efficient Dynamic Scheduling scheme (EDS) and Decentralized Virtual Machine Migration (DVM). With Network Manager based Dynamic Scheduling, the proposed model achieves better resource allocation than the other existing models. The obtained results show that the proposed system effectively allocates resources and is about 94% more energy efficient than the other models. The evaluation metrics taken for comparison are energy consumption, mean response time, percentage of resource utilization, and migration.
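The three-threshold idea can be sketched as follows; the cut-off values and merge capacity here are hypothetical, as the abstract does not give the actual STT/MTT/LTT settings.

```python
def classify_tasks(tasks, stt=10, mtt=100):
    """Partition tasks by size into small/medium/large bins.

    `stt` and `mtt` are hypothetical cut-off sizes; the paper's actual
    STT/MTT/LTT thresholds are not stated in the abstract.
    """
    bins = {"small": [], "medium": [], "large": []}
    for size in tasks:
        if size <= stt:
            bins["small"].append(size)
        elif size <= mtt:
            bins["medium"].append(size)
        else:
            bins["large"].append(size)
    return bins

def merge_small(bins, capacity=10):
    """Greedily merge small tasks into batches no larger than `capacity`,
    so fewer scheduling decisions (and less per-task overhead) are needed."""
    merged, current = [], []
    for size in sorted(bins["small"]):
        if current and sum(current) + size > capacity:
            merged.append(current)
            current = []
        current.append(size)
    if current:
        merged.append(current)
    return merged
```

Merging is what lets many tiny tasks travel as one scheduling unit, which is where the claimed energy and response-time savings would come from.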
The weapon transportation support scheduling problem on an aircraft carrier deck is key to the sortie rate and combat capability of carrier-based aircraft. This paper studies the problem and presents a novel solution architecture. Taking into consideration the interference of the carrier-based aircraft deck layout on the weapon transportation route, as well as precedence constraints, a mixed-integer formulation is established to minimize a total objective composed of makespan, load variance, and the accumulated transfer time of support units. A solution approach is developed for the model. First, by modeling the carrier aircraft parked on deck as convex obstacles, a path library for weapon transportation is constructed through visibility-graph and Warshall-Floyd methods. We then propose a bi-population immune algorithm in which a population-based forward/backward scheduling technique, local search schemes, and a chaotic catastrophe operator are embedded. Besides, a random-key solution representation and a serial schedule generation scheme are adopted to conveniently obtain better solutions. The Taguchi method is additionally employed to determine key parameters of the algorithm. Finally, on a set of generated realistic instances, we demonstrate that the proposed algorithm outperforms all compared algorithms designed for similar optimization problems and can significantly improve efficiency, and that the established model and the bi-population immune algorithm can effectively respond to the weapon support requirements of carrier-based aircraft under different sortie missions.
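The path-library step rests on standard all-pairs shortest paths. A minimal Warshall-Floyd sketch over an illustrative graph (the real input would be the visibility graph built around the parked-aircraft obstacles):

```python
import math

def floyd_warshall(n, edges):
    """All-pairs shortest paths (Warshall-Floyd).

    `edges` maps undirected (i, j) pairs to distances; the graph here is
    illustrative, standing in for the deck visibility graph."""
    dist = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for (i, j), w in edges.items():
        dist[i][j] = dist[j][i] = min(dist[i][j], w)
    # Relax every pair through each intermediate vertex k.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

Precomputing this table once gives the scheduler O(1) lookups of transport distances between any two deck positions.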
Cloud computing represents a novel computing model in the contemporary technology world. In a cloud system, the computing power of virtual machines (VMs) and the network status can greatly affect the completion time of data-intensive tasks. However, most current resource allocation policies focus only on network conditions and physical hosts, and the computing power of VMs is largely ignored. This paper proposes a comprehensive resource allocation policy consisting of a data-intensive task scheduling algorithm that takes account of the computing power of VMs and a VM allocation policy that considers the bandwidth between storage nodes and hosts. The VM allocation policy includes VM placement and VM migration algorithms. Simulations show that the proposed algorithms can greatly reduce task completion time while keeping good load balance across physical hosts.
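The idea of weighing both VM computing power and storage-to-host bandwidth can be sketched with a toy completion-time estimate; the cost model and field names below are illustrative, not the paper's actual policy.

```python
def pick_vm(task, vms):
    """Choose the VM minimising an estimated completion time that accounts
    for both compute power (MIPS) and storage-to-host bandwidth.

    The additive model work/mips + data/bandwidth is a simplification used
    only to illustrate why ignoring either term misranks VMs."""
    def est(vm):
        return task["work"] / vm["mips"] + task["data"] / vm["bandwidth"]
    return min(vms, key=est)
```

A bandwidth-only policy would pick the same VM for both a compute-heavy and a data-heavy task; the combined estimate separates them.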
Micro-UAV swarms usually generate massive data when performing tasks. These data can be harnessed with various machine learning (ML) algorithms to improve the swarm's intelligence. To achieve this goal while protecting swarm data privacy, federated learning (FL) has been proposed as a promising enabling technology. During the FL model training process, a UAV may face energy scarcity due to its limited battery capacity. Fortunately, this issue can potentially be tackled via simultaneous wireless information and power transfer (SWIPT). However, the integration of SWIPT and FL brings new challenges to the system design that have yet to be addressed, which motivates our work. Specifically, in this paper we consider a micro-UAV swarm network consisting of one base station (BS) and multiple UAVs, where the BS uses FL to train an ML model over the data collected by the swarm. During training, the BS broadcasts the model and energy simultaneously to the UAVs via SWIPT, and each UAV relies on its harvested and battery-stored energy to train the received model and then upload it to the BS for model aggregation. To improve learning performance, we formulate a problem of maximizing the percentage of scheduled UAVs by jointly optimizing UAV scheduling and wireless resource allocation. The problem is a challenging mixed-integer nonlinear program and is NP-hard in general. By exploiting its special structure, we develop two algorithms to achieve the optimal and suboptimal solutions, respectively. Numerical results show that the suboptimal algorithm achieves near-optimal performance under various network setups and significantly outperforms existing representative baselines.
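A greedy admission pass conveys the flavour of the objective, maximising how many UAVs get scheduled under a shared resource budget; the paper itself solves the joint MINLP with optimal and suboptimal algorithms, not this baseline.

```python
def schedule_uavs(demands, budget):
    """Greedy baseline: admit UAVs cheapest-first until the shared
    resource budget is exhausted, maximising the count of scheduled UAVs.

    `demands` (per-UAV resource need) and `budget` are illustrative; the
    paper jointly optimises scheduling and wireless resource allocation."""
    scheduled = []
    for i, d in sorted(enumerate(demands), key=lambda p: p[1]):
        if d <= budget:
            budget -= d
            scheduled.append(i)
    return sorted(scheduled)
```

Admitting cheapest-first is optimal for maximising a pure count under a single linear budget, which is why it serves as a sensible reference point.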
Long Term Evolution (LTE) has been proposed as an advanced wireless radio access technology to provide higher peak data rates and better spectral efficiency, but classical scheduling and resource allocation algorithms cannot optimally enhance system performance due to high computational complexity. In this paper, a configurable dual-mode delay-aware (CDD) scheduling and resource allocation algorithm is proposed to jointly consider the scheduling pattern, scheduling priority, and quantity of scheduled data. The dual-mode scheduling mechanism is associated with three configurable parameters, and the CDD algorithm guarantees queuing delay with little loss of resource utilization and fairness. The computational cost of scheduling and resource allocation is significantly reduced by efficiently utilizing the QoS Class Identifier (QCI) and Channel Quality Indicator (CQI) defined by the LTE standards. Simulation results for different application scenarios show the computational cost and complexity of the scheduling algorithm along with the improved system throughput.
Orthogonal Frequency-Division Multiple Access (OFDMA) systems have attracted considerable attention through technologies such as 3GPP Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX). OFDMA is a flexible multiple-access technique that can accommodate many users with widely varying applications, data rates, and Quality of Service (QoS) requirements. OFDMA has the advantage of handling lower data rates and bursty traffic at reduced power compared with single-user OFDM or its Time Division Multiple Access (TDMA) and Carrier Sense Multiple Access (CSMA) counterparts. In this work, we propose a Particle Swarm Optimization based resource allocation and scheduling scheme (PSORAS) with improved quality of service for OFDMA systems. Simulation results indicate a clear reduction in delay compared with Frequency Division Multiple Access (FDMA) resource allocation, at almost the same throughput and fairness. This makes our scheme well suited to handling real-time traffic such as real-time video on demand.
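PSORAS builds on standard particle swarm optimisation. A generic PSO skeleton minimising an arbitrary cost function is sketched below; the mapping from particle positions to OFDMA subcarrier and power assignments is problem-specific and not shown.

```python
import random

def pso_minimize(cost, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO: each particle tracks its personal best and the swarm's
    global best, and its velocity is pulled toward both. Parameters are
    common textbook defaults, not values from the paper."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest
```

With w below 1 and moderate c1/c2 the swarm contracts onto good regions, which is what makes PSO attractive for the combinatorial allocation landscapes described in the abstract.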
Traditional resource allocation algorithms use a layered design, which does not suit the poor channel environment of broadband power line communication systems. Introducing cross-layer ideas can improve resource utilization and ensure the QoS of services. This paper proposes a cross-layer resource allocation scheme for broadband power line communication based on a QoS-priority scheduling function at the MAC layer. First, the algorithm considers both real-time users' delay requirements and non-real-time users' queue-length requirements, and a user priority function is proposed. The number of packets scheduled for each user is then calculated according to its priority function, and the scheduling sequence is based on a utility function. In the physical layer, the algorithm allocates physical resources to the scheduled packets. Simulation results show that the proposed algorithm balances latency and throughput while improving users' QoS.
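A toy version of the priority-then-grant structure is sketched below. The priority formulas, field names, and saturation constant are all hypothetical; the abstract does not give the paper's actual priority or utility functions.

```python
def schedule_packets(users, total_packets):
    """Hypothetical QoS-priority sketch: rank real-time users by how close
    their delay is to its bound, rank non-real-time users by a saturating
    queue-length term, then grant packets in priority order."""
    def priority(u):
        if u["real_time"]:
            return u["delay"] / u["delay_bound"]       # nearer the bound, higher
        return u["queue_len"] / (u["queue_len"] + 10)  # hypothetical saturation
    grants = {}
    for u in sorted(users, key=priority, reverse=True):
        take = min(u["queue_len"], total_packets)
        grants[u["id"]] = take
        total_packets -= take
    return grants
```

The point of the two-branch priority is the cross-layer trade: delay-critical users pre-empt bulk users only when they are actually near their deadline.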
Ubiquitous and deterministic communication systems are becoming indispensable for future vertical applications such as industrial automation systems and smart grids. 5G-TSN (Time-Sensitive Networking) integrated networks, with the 5G system (5GS) acting as a TSN bridge, are promising candidates for providing the required communication service. Guaranteeing the end-to-end (E2E) QoS (Quality of Service) performance of traffic is a great challenge in 5G-TSN integrated networks. A dynamic QoS mapping method is proposed in this paper, based on an improved K-means clustering algorithm and rough set theory (IKC-RQM). The IKC-RQM designs a dynamic, load-aware QoS mapping algorithm to improve flexibility. An adaptive semi-persistent scheduling (ASPS) mechanism is proposed to solve the challenging problem of deterministic scheduling in the 5GS. It includes two parts: persistent resource allocation for time-sensitive flows, and dynamic resource allocation based on the max-min fair share algorithm. Simulation results show that the proposed IKC-RQM algorithm achieves flexible and appropriate QoS mapping, and that ASPS performs the corresponding resource allocations to guarantee deterministic transmission of time-sensitive flows in 5G-TSN integrated networks.
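The dynamic part of ASPS rests on the classic max-min fair share algorithm, which can be sketched as follows (flow demands and capacity are illustrative):

```python
def max_min_fair(demands, capacity):
    """Max-min fair share: repeatedly give every unsatisfied flow an equal
    share of the remaining capacity; flows needing less than their share
    keep only what they need, and the surplus is redistributed."""
    alloc = [0.0] * len(demands)
    active = list(range(len(demands)))
    while active and capacity > 1e-12:
        share = capacity / len(active)
        still_active = []
        for i in active:
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            capacity -= give
            if alloc[i] < demands[i] - 1e-12:
                still_active.append(i)
        active = still_active
    return alloc
```

Small flows are fully satisfied first, and whatever they leave behind is split evenly among the big ones, which is exactly the fairness property a best-effort tier alongside persistent time-sensitive reservations needs.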
The rapid growth of service-oriented and cloud computing has created large-scale data centres worldwide. Modern data centres' operating costs mostly come from back-end cloud infrastructure and energy consumption. Cloud computing requires extensive communication resources, and cloud applications need more bandwidth to transfer large amounts of data to satisfy end-user requirements. It is also essential that no communication source causes congestion or packet loss owing to unnecessary switching buffers. This paper proposes a novel Energy and Communication aware scheduling (EC-scheduler) algorithm for green cloud computing, which optimizes data centre energy consumption and traffic load. The primary goal of the proposed EC-scheduler is to assign user applications to cloud data centre resources with minimal utilization of data centres. We first introduce a Multi-Objective Leader Salp Swarm (MLSS) algorithm for task sorting, which ensures traffic load balancing, and then an Emotional Artificial Neural Network (EANN) for efficient resource allocation. The EC-scheduler schedules cloud user requirements onto cloud servers by optimizing both energy and communication delay, which supports lower carbon dioxide emission by the cloud server system, enabling a greener environment. We tested the proposed scheme and existing cloud scheduling methods using the GreenCloud simulator to analyze the efficiency of optimizing data centre energy and other scheduler metrics. In terms of Power Usage Effectiveness (PUE), Data Centre Energy Productivity (DCEP), throughput, Average Execution Time (AET), energy consumption, and makespan, the EC-scheduler showed up to 26.738%, 37.59%, 50%, 4.34%, 34.2%, and 33.54% higher efficiency, respectively, than existing state-of-the-art schedulers with respect to the number of user applications and number of user requests.
This paper investigates the production scheduling problems of allocating resources and sequencing jobs in the seru production system (SPS). As a new manufacturing mode arising from Japanese production practices, seru production can achieve efficiency, flexibility, and responsiveness simultaneously. A production environment in which a set of jobs must be scheduled over a set of serus according to due dates and different execution modes is considered, and a combinatorial optimization model is provided. Motivated by the problem complexity and the characteristics of the proposed seru scheduling model, a nested partitioning method (NPM) is designed as the solution approach. Finally, computational studies are conducted, and the practicability of the proposed seru scheduling model is demonstrated. Moreover, the efficiency of the nested partitioning method is demonstrated by computational results obtained from different scenarios, and the good scalability of the proposed approach is shown via comparative analysis.
In recent decades, fog computing has played a vital role in executing parallel computational tasks, specifically scientific workflow tasks. In cloud data centres, fog computing takes more time to run workflow applications; it is therefore essential to develop effective models for virtual machine (VM) allocation and task scheduling in fog computing environments. Effective task scheduling, VM migration, and allocation together optimize the use of computational resources across different fog nodes, ensuring that tasks are executed with minimal energy consumption and reducing the chance of resource bottlenecks. The proposed framework comprises two phases: (i) effective task scheduling using a fractional selectivity approach, and (ii) VM allocation by an algorithm named Fitness Sharing Chaotic Particle Swarm Optimization (FSCPSO). The FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing, effectively balancing global exploration and local exploitation. This balance enables a wide range of solutions, leading to minimal total cost and makespan in comparison to traditional optimization algorithms. The algorithm's performance is analyzed using six evaluation measures: Load Balancing Level (LBL), Average Resource Utilization (ARU), total cost, makespan, energy consumption, and response time. Relative to conventional optimization algorithms, FSCPSO achieves a higher LBL of 39.12%, an ARU of 58.15%, a minimal total cost of 1175, and a makespan of 85.87 ms, particularly when evaluated on 50 tasks.
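Chaos theory typically enters chaotic-PSO variants through a simple chaotic sequence such as the logistic map, sketched below. Whether FSCPSO uses it for particle initialisation or for perturbation is not stated in the abstract.

```python
def logistic_map(x0=0.7, r=4.0, n=5):
    """Generate a logistic-map sequence x <- r * x * (1 - x).

    At r = 4 the map is fully chaotic on (0, 1), giving a deterministic
    but non-repeating stream often used in chaotic PSO variants in place
    of uniform random numbers."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq
```

The appeal over plain pseudo-randomness is ergodicity: the sequence densely covers (0, 1), helping particles escape local optima without extra tuning.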
Recently the integrated modular avionics (IMA) architecture, which introduces the concept of resource partitioning, has become popular as an alternative to the traditional federated architecture. A novel hierarchical approach is proposed to solve the resource allocation problem for IMA systems in distributed environments. First, the worst-case response time of tasks with arbitrary deadlines is analyzed for the two-level scheduler. Then the hierarchical resource allocation approach is presented at two levels. At the platform level, a task assignment algorithm based on genetic simulated annealing (GSA) is proposed to assign a set of pre-defined tasks to different processing nodes in the form of task groups, so that resources can be allocated as partitions and mapped to task groups. While satisfying all resource constraints, the algorithm tries to find an optimal task assignment with minimized communication costs and balanced workload. At the node level, partition parameters are optimized so that computational resources can be allocated further. An example illustrates the hierarchical resource allocation approach and demonstrates its validity. Simulation results comparing the performance of the proposed GSA with that of traditional genetic algorithms are presented in the context of task assignment in IMA systems.
Satellite communication systems provide a cost-effective solution for global internet of things (IoT) applications due to their large coverage and easy deployment. This paper focuses on a satellite network system in which a low earth orbit (LEO) satellite network collects sensing data from user terminals (UTs) and then forwards the data to a ground station through a geostationary earth orbit (GEO) satellite network. Considering the limited uplink transmission resources, this paper optimizes the uplink transmission scheduling scheme over LEO satellites. A novel transmission scheduling algorithm combining Simulated Annealing and Monte Carlo methods (SA-MC) is proposed to achieve a dynamically optimal scheduling scheme. Simulation results show the effectiveness of the proposed SA-MC algorithm in terms of cost reduction and fast convergence.
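The SA-MC scheduler builds on simulated annealing. A generic, problem-agnostic sketch of the annealing loop follows; the cost function, neighbour move, and cooling parameters are placeholders, not the paper's.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95,
                        steps=500, seed=0):
    """Simulated-annealing skeleton: always accept improving moves, accept
    worsening moves with probability exp(-delta / T), and cool the
    temperature T geometrically. Returns the best state seen."""
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / max(t, 1e-9)):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= cooling
    return best
```

Early high-temperature acceptance of bad moves is what lets the scheduler hop between qualitatively different uplink schedules before settling.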
Tide is a significant factor that interferes with the berthing and departing operations of vessels in tidal ports. It is preferable to incorporate this factor into the simultaneous berth allocation and quay crane (QC) assignment problem (BACAP) in order to reflect the realistic decision-making process at container terminals. For this purpose, an integrated optimization model is built with tidal time windows as forbidden intervals for berthing or departing. A hind-and-fore adjustment heuristic is proposed and applied within an iterative optimization framework. Numerical experiments show the satisfactory performance of the proposed algorithm.
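Treating tidal windows as forbidden intervals can be sketched as a simple feasibility shift; the interval data is illustrative, and the paper's hind-and-fore heuristic adjusts both earlier and later, not only forward as here.

```python
def next_feasible(t, forbidden):
    """Push a planned berthing/departing time forward past any tidal
    forbidden interval. Intervals are half-open (start, end) pairs;
    chained intervals are handled by scanning them in start order."""
    for s, e in sorted(forbidden):
        if s <= t < e:
            t = e  # event must wait until the window closes
    return t
```

Within the iterative framework, such a shift is applied after each tentative berth/QC assignment so that every schedule handed to the optimizer is tide-feasible.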
The garment industry is one of Vietnam's strongest industries worldwide. However, the production process still encounters scheduling problems that keep it from being optimal. This paper introduces a production scheduling solution that resolves the potential delays and lateness hindering the production process, using integer programming and order allocation from a make-to-order manufacturing viewpoint. A number of constraints were considered in the model, which is applied to a real case study of a factory to examine how tardiness and lateness would be affected; this resulted in better optimized scheduling times. Specifically, the constraints considered were order assignments, production time, and tardiness, with an objective function that minimizes the total cost of delay. The results of the study quantify the overall cost of delay of the orders given to the plant and successfully propose a suitable production schedule that makes the most of the plant's capacity. The study has shown promising results that would assist plant and production managers in determining an algorithm they can apply to their production process.
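As a toy counterpart to the integer program, an earliest-due-date pass gives the flavour of sequencing orders against tardiness. The job data is hypothetical, and the paper minimises total delay cost with an integer program, not this heuristic.

```python
def total_tardiness_edd(jobs):
    """Sequence jobs on one line by earliest due date (EDD) and return
    (sequence, total tardiness). Each job is {'id', 'time', 'due'}.

    EDD minimises maximum lateness on a single machine; it is only a
    quick proxy for the paper's cost-of-delay objective."""
    order = sorted(jobs, key=lambda j: j["due"])
    t, tardy = 0, 0
    for j in order:
        t += j["time"]                 # completion time so far
        tardy += max(0, t - j["due"])  # lateness counts only past the due date
    return [j["id"] for j in order], tardy
```

Comparing such a heuristic's tardiness with the IP's optimum is a common way to show what the exact model buys.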
With the rapid development and popularization of 5G and the Internet of Things, a number of new applications have emerged, such as driverless cars. Most of these applications are delay-sensitive, and deficiencies have been found when processing their data through cloud-centric architectures; handling the data generated by terminals at the edge of the network is an urgent problem. In 5G environments, edge computing can better meet the needs of low-delay, wide-connection applications and support fast requests from terminal users. However, edge computing only has the computing advantage of the edge layer, and it is difficult to achieve global resource scheduling and configuration, which may lead to low resource utilization, long task processing delay, and unbalanced system load, thereby degrading users' quality of service. To solve this problem, this paper studies task scheduling and resource collaboration based on a Cloud-Edge-Terminal collaborative architecture, proposes a genetic simulated annealing fusion algorithm, called GSA-EDGE, to achieve task scheduling and resource allocation, and designs a series of experiments to verify the effectiveness of the GSA-EDGE algorithm. The experimental results show that the proposed method can reduce task processing delay compared with local task processing and average task allocation methods.
To address resource integration and optimal scheduling in the cloud manufacturing environment, this paper proposes using load balancing, service cost, and service quality as the optimization goals for resource scheduling. However, resource providers have resource utilization requirements for cloud manufacturing platforms, and during resource optimization scheduling the interests of the parties conflict, which prevents better optimization results. Therefore, a multithreaded auto-negotiation method based on the Stackelberg game is proposed to resolve conflicts of interest during resource scheduling. The cloud manufacturing platform first calculates an expected-value reduction plan for each round of global optimization. Using the Stackelberg-game-based negotiation algorithm, the platform negotiates and mediates with the participants' agents, each of which maximizes its self-interest by repeatedly revising its own plan; multiple sets of locally optimized negotiation plans are found iteratively and returned to the platform. Through multiple rounds of negotiation and calculation, a target expected-value reduction plan is finally obtained that balances the benefits of the resource providers against the overall benefit of completing the manufacturing task. Finally, experimental simulation and comparative analysis verify the validity and rationality of the model.
In order to improve the transmission accuracy and efficiency of sensing and actuating signals in the Internet of Things (IoT) and ensure system stability, an adaptive resource allocation algorithm is proposed that dynamically assigns network bandwidth and priority among components according to the frequency-domain characteristics of their signals. A remotely sensed and controlled unmanned ground vehicle (UGV) path-tracking test-bed was developed, and multiple UGV tracking-error signals were measured in simulation for performance evaluation. Results show that, under the same network bandwidth constraints, the proposed algorithm can reduce the accumulated and maximum errors of UGV path tracking by over 60% compared with a conventional static algorithm.
A new approximation of fair queuing called Compensating Round Robin (CRR) is presented in this paper. The algorithm uses a packet-by-packet scheduler with a compensating measure. It achieves good fairness in terms of throughput, requires only O(1) time to process a packet, and is simple enough to be implemented in hardware. After the performance is analyzed, the fairness and packet loss rate of the algorithm are simulated. Simulation results show that CRR can effectively isolate the effects of contending sources.
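The abstract does not detail CRR's compensating measure. A deficit-style round-robin sketch shows the general mechanism such schedulers share, carrying unused per-round credit forward so queues with large packets are compensated in later rounds; CRR's own measure may differ.

```python
from collections import deque

def rr_with_credit(queues, quantum, rounds):
    """Round robin with carried-over credit (deficit-round-robin style).

    Each backlogged queue earns `quantum` bytes of credit per round and
    sends head-of-line packets while credit suffices; leftover credit
    persists, so a queue blocked by a big packet catches up later.
    Packet sizes in `queues` are illustrative byte counts."""
    credit = [0] * len(queues)
    qs = [deque(q) for q in queues]
    sent = [[] for _ in queues]
    for _ in range(rounds):
        for i, q in enumerate(qs):
            if not q:
                credit[i] = 0  # idle queues may not hoard credit
                continue
            credit[i] += quantum
            while q and q[0] <= credit[i]:
                pkt = q.popleft()
                credit[i] -= pkt
                sent[i].append(pkt)
    return sent
```

Because each queue's long-run service is bounded by quantum bytes per round regardless of packet sizes, a misbehaving source cannot starve the others, which is the isolation property the simulations measure.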
To cope with the task scheduling problem under multi-task and transportation considerations in large-scale service-oriented manufacturing systems (SOMS), a service allocation optimization model was established, and a hybrid discrete particle swarm optimization-genetic algorithm (HDPSOGA) was proposed. In SOMS, each resource involved in the whole life cycle of a product, whether provided by software or by a hardware device, is encapsulated as a service; transportation during production must therefore be taken into account, because the selected hard services may be provided by various providers in different areas. The service allocation optimization model considers multi-task and transportation requirements simultaneously. In the proposed HDPSOGA algorithm, an integer coding method maps the particle location matrix to the service allocation scheme. The position update is performed according to the cognitive part, the social part, and the previous velocity and position, while the crossover and mutation ideas of the genetic algorithm are introduced to fit the discrete space. Finally, simulation experiments comparing the approach with two previous algorithms indicate the effectiveness and efficiency of the proposed hybrid algorithm.
Funding (weapon transportation scheduling study): supported by the National Natural Science Foundation of China (No. 52102453).
Funding: Supported by the National Natural Science Foundation of China (61202354, 61272422) and the Scientific and Technological Support Project (Industry) of Jiangsu Province (BE2011189).
Abstract: Cloud computing represents a novel computing model in the contemporary technology world. In a cloud system, the computing power of virtual machines (VMs) and network status can greatly affect the completion time of data-intensive tasks. However, most current resource allocation policies focus only on network conditions and physical hosts, while the computing power of VMs is largely ignored. This paper proposes a comprehensive resource allocation policy consisting of a data-intensive task scheduling algorithm that takes the computing power of VMs into account and a VM allocation policy that considers the bandwidth between storage nodes and hosts. The VM allocation policy includes VM placement and VM migration algorithms. Simulations show that the proposed algorithms can greatly reduce task completion time while keeping physical hosts well load-balanced.
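A bandwidth-aware placement step of the kind described can be sketched greedily: place each VM on the host with the most spare bandwidth. This is an illustrative assumption, not the paper's exact placement algorithm:

```python
# Greedy, bandwidth-aware VM placement sketch: largest demands first, each on
# the host with the most spare bandwidth. Host data is illustrative.

def place_vms(vms, hosts):
    """vms: list of (vm_id, bw_demand); hosts: dict host_id -> spare bandwidth.
    Returns {vm_id: host_id}; raises if no host can satisfy a demand."""
    placement = {}
    for vm_id, demand in sorted(vms, key=lambda v: -v[1]):  # big demands first
        host = max(hosts, key=hosts.get)      # host with most spare bandwidth
        if hosts[host] < demand:
            raise RuntimeError(f"no host can serve VM {vm_id}")
        hosts[host] -= demand
        placement[vm_id] = host
    return placement
```

The real policy would also weigh VM computing power and trigger migrations when a host becomes a bottleneck.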
Funding: Supported by the National Natural Science Foundation of China (No. 61971077), the Natural Science Foundation of Chongqing, China (No. cstc2021jcyj-msxmX0458), the open research fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2022D06), the Fundamental Research Funds for the Central Universities (No. 2020CDCGTX074), the Natural Science Foundation on Frontier Leading Technology Basic Research Project of Jiangsu (No. BK20212001), and the Natural Science Research Project of Jiangsu Higher Education Institutions (No. 21KJB510034).
Abstract: Micro-UAV swarms usually generate massive data when performing tasks. These data can be harnessed with various machine learning (ML) algorithms to improve the swarm's intelligence. To achieve this goal while protecting swarm data privacy, federated learning (FL) has been proposed as a promising enabling technology. During the model training process of FL, a UAV may face energy scarcity due to its limited battery capacity. Fortunately, this issue can potentially be tackled via simultaneous wireless information and power transfer (SWIPT). However, the integration of SWIPT and FL brings new challenges to the system design that have yet to be addressed, which motivates our work. Specifically, in this paper, we consider a micro-UAV swarm network consisting of one base station (BS) and multiple UAVs, where the BS uses FL to train an ML model over the data collected by the swarm. During training, the BS broadcasts the model and energy simultaneously to the UAVs via SWIPT, and each UAV relies on its harvested and battery-stored energy to train the received model and then upload it to the BS for model aggregation. To improve the learning performance, we formulate a problem of maximizing the percentage of scheduled UAVs by jointly optimizing UAV scheduling and wireless resource allocation. The problem is a challenging mixed integer nonlinear programming problem and is NP-hard in general. By exploiting its special structure, we develop two algorithms to achieve the optimal and suboptimal solutions, respectively. Numerical results show that the suboptimal algorithm achieves near-optimal performance under various network setups and significantly outperforms existing representative baselines.
Abstract: Long Term Evolution (LTE) has been proposed as an advanced wireless radio access technology to provide higher peak data rates and better spectral utilization efficiency, but classical scheduling and resource allocation algorithms cannot optimally enhance system performance due to high computational complexity. In this paper, a configurable dual-mode delay-aware (CDD) scheduling and resource allocation algorithm is proposed to jointly consider the scheduling pattern, scheduling priority, and quantity of scheduled data. In this study, the dual-mode scheduling mechanism is associated with three configurable parameters, and the CDD algorithm guarantees queuing delay with little loss of resource utilization and fairness. The computational cost of scheduling and resource allocation is significantly reduced by efficiently utilizing the QoS Class Identifier (QCI) and Channel Quality Indicator (CQI) defined by the LTE standards. Simulation results for different application scenarios also show the computational cost and complexity of the scheduling algorithm along with the improved system throughput.
Abstract: Orthogonal Frequency-Division Multiple Access (OFDMA) systems have attracted considerable attention through technologies such as 3GPP Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX). OFDMA is a flexible multiple-access technique that can accommodate many users with widely varying applications, data rates, and Quality of Service (QoS) requirements. OFDMA has the advantage of handling lower data rates and bursty traffic at reduced power compared to single-user OFDM or its Time Division Multiple Access (TDMA) or Carrier Sense Multiple Access (CSMA) counterparts. In our work, we propose a Particle Swarm Optimization based resource allocation and scheduling scheme (PSORAS) with improved quality of service for OFDMA systems. Simulation results indicate a clear reduction in delay compared to the Frequency Division Multiple Access (FDMA) scheme for resource allocation, at almost the same throughput and fairness. This makes our scheme well suited for handling real-time traffic such as real-time video-on-demand.
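The particle swarm optimization core that a scheme like PSORAS builds on can be sketched generically. The objective below is a stand-in, not the paper's delay/throughput model, and the coefficient values are common defaults, not taken from the paper:

```python
import random

# Generic PSO loop: each particle tracks its personal best (pbest) and is
# pulled toward both pbest and the global best (gbest). Coefficients assumed.

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In a resource allocation setting, each particle would encode a subcarrier-to-user assignment and the objective would score delay, throughput, and fairness jointly.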
Abstract: Traditional resource allocation algorithms use a layered design that does not suit the poor channel environment of broadband power line communication systems. Introducing a cross-layer approach can improve resource utilization and ensure the QoS of services. This paper proposes a cross-layer resource allocation scheme for broadband power line communication based on a QoS priority scheduling function at the MAC layer. The algorithm considers both real-time users' delay requirements and non-real-time users' queue-length requirements, and a user priority function is proposed. Each user's number of scheduled packets is then calculated according to its priority function, and the scheduling sequence is determined by a utility function. At the physical layer, the algorithm allocates physical resources to the scheduled packets. Simulation results show that the proposed algorithm balances latency and throughput while improving users' QoS.
Funding: Supported by the National Key Research and Development Project under Grant No. 2020YFB1710900 and the Sichuan International Cooperation Project of Science and Technology Innovation under Grant No. 2022YFH0022.
Abstract: Ubiquitous and deterministic communication systems are becoming indispensable for future vertical applications such as industrial automation systems and smart grids. 5G-TSN (Time-Sensitive Networking) integrated networks, with the 5G system (5GS) acting as a TSN bridge, are promising for providing the required communication service. Guaranteeing the end-to-end (E2E) QoS (Quality of Service) performance of traffic in 5G-TSN integrated networks is a great challenge. A dynamic QoS mapping method is proposed in this paper, based on an improved K-means clustering algorithm and rough set theory (IKC-RQM). The IKC-RQM designs a dynamic and load-aware QoS mapping algorithm to improve flexibility. An adaptive semi-persistent scheduling (ASPS) mechanism is proposed to solve the challenging deterministic scheduling in 5GS. It includes two parts: persistent resource allocation for time-sensitive flows, and dynamic resource allocation based on the max-min fair share algorithm. Simulation results show that the proposed IKC-RQM algorithm achieves flexible and appropriate QoS mapping, and that ASPS performs the corresponding resource allocations to guarantee deterministic transmission of time-sensitive flows in 5G-TSN integrated networks.
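The dynamic part of ASPS relies on the standard max-min fair share algorithm, which can be sketched directly (the demand values below are illustrative):

```python
# Standard max-min fair share: satisfy the smallest demands first, then
# redistribute the leftover capacity evenly among the unsatisfied flows.

def max_min_fair(capacity, demands):
    """Allocate `capacity` across `demands` so no flow gets more than it asks
    for and unused share is redistributed to still-unsatisfied flows."""
    n = len(demands)
    alloc = [0.0] * n
    remaining = capacity
    unsatisfied = sorted(range(n), key=lambda i: demands[i])
    while unsatisfied:
        share = remaining / len(unsatisfied)
        i = unsatisfied[0]
        if demands[i] <= share:
            alloc[i] = demands[i]       # small demand fully satisfied
            remaining -= demands[i]
            unsatisfied.pop(0)
        else:
            for j in unsatisfied:       # split what is left evenly
                alloc[j] = share
            break
    return alloc
```

In the 5GS context, `capacity` would be the dynamic resource pool left over after the persistent allocations for time-sensitive flows.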
Abstract: The rapid growth of service-oriented and cloud computing has created large-scale data centres worldwide. Modern data centres' operating costs mostly come from back-end cloud infrastructure and energy consumption. In cloud computing, extensive communication resources are required. Moreover, cloud applications require more bandwidth to transfer large amounts of data to satisfy end-user requirements. It is also essential that no communication source cause congestion or packet loss owing to unnecessary switching buffers. This paper proposes a novel Energy and Communication (EC) aware scheduling (EC-scheduler) algorithm for green cloud computing, which optimizes data centre energy consumption and traffic load. The primary goal of the proposed EC-scheduler is to assign user applications to cloud data centre resources with minimal utilization of data centres. We first introduce a Multi-Objective Leader Salp Swarm (MLSS) algorithm for task sorting, which ensures traffic load balancing, and then an Emotional Artificial Neural Network (EANN) for efficient resource allocation. EC-scheduler schedules cloud user requirements on the cloud server by optimizing both energy and communication delay, which supports lower carbon dioxide emission by the cloud server system, enabling a green, pollution-free environment. We tested the proposed scheduler and existing cloud scheduling methods using the GreenCloud simulator to analyze the efficiency of optimizing data centre energy and other scheduler metrics. The EC-scheduler showed up to 26.738%, 37.59%, 50%, 4.34%, 34.2%, and 33.54% higher efficiency in Power Usage Effectiveness (PUE), Data Centre Energy Productivity (DCEP), throughput, Average Execution Time (AET), energy consumption, and makespan, respectively, than existing state-of-the-art schedulers with respect to the number of user applications and user requests.
Funding: This research was sponsored by the National Natural Science Foundation of China (Grant Nos. 71401075, 71801129), the Fundamental Research Funds for the Central Universities (No. 30922011406), the System Science and Enterprise Development Research Center (Grant No. Xq22B06), and the Grant-in-Aid for Scientific Research (C) of Japan (Grant No. 20K01897).
Abstract: This paper investigates the production scheduling problem of allocating resources and sequencing jobs in the seru production system (SPS). As a new manufacturing mode arising from Japanese production practice, seru production can achieve efficiency, flexibility, and responsiveness simultaneously. A production environment in which a set of jobs must be scheduled over a set of serus according to due dates and different execution modes is considered, and a combinatorial optimization model is provided. Motivated by the problem's complexity and the characteristics of the proposed seru scheduling model, a nested partitioning method (NPM) is designed as the solution approach. Finally, computational studies are conducted, and the practicability of the proposed seru scheduling model is demonstrated. Moreover, the efficiency of the nested partitioning method is demonstrated by computational results obtained from different scenarios, and the good scalability of the proposed approach is shown via comparative analysis.
Funding: This work was supported in part by the National Science and Technology Council of Taiwan under Contract NSTC 112-2410-H-324-001-MY2.
Abstract: In recent decades, fog computing has played a vital role in executing parallel computational tasks, specifically scientific workflow tasks. Running workflow applications in cloud data centres alone takes more time, so it is essential to develop effective models for Virtual Machine (VM) allocation and task scheduling in fog computing environments. Effective task scheduling, VM migration, and allocation together optimize the use of computational resources across different fog nodes. This process ensures that tasks are executed with minimal energy consumption, which reduces the chance of resource bottlenecks. In this manuscript, the proposed framework comprises two phases: (i) effective task scheduling using a fractional selectivity approach, and (ii) VM allocation by an algorithm named Fitness Sharing Chaotic Particle Swarm Optimization (FSCPSO). The proposed FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing to effectively balance global exploration and local exploitation. This balance enables a wide range of solutions, leading to lower total cost and makespan than other traditional optimization algorithms. The FSCPSO algorithm's performance is analyzed using six evaluation measures: Load Balancing Level (LBL), Average Resource Utilization (ARU), total cost, makespan, energy consumption, and response time. Relative to the conventional optimization algorithms, the FSCPSO algorithm achieves a higher LBL of 39.12%, an ARU of 58.15%, a minimal total cost of 1175, and a makespan of 85.87 ms, particularly when evaluated for 50 tasks.
Funding: Supported by the National Natural Science Foundation of China (60879024).
Abstract: Recently, the integrated modular avionics (IMA) architecture, which introduces the concept of resource partitioning, has become popular as an alternative to the traditional federated architecture. A novel hierarchical approach is proposed to solve the resource allocation problem for IMA systems in distributed environments. First, the worst-case response time of tasks with arbitrary deadlines is analyzed for the two-level scheduler. Then, the hierarchical resource allocation approach is presented at two levels. At the platform level, a task assignment algorithm based on genetic simulated annealing (GSA) is proposed to assign a set of pre-defined tasks to different processing nodes in the form of task groups, so that resources can be allocated as partitions and mapped to task groups. While satisfying all resource constraints, the algorithm tries to find an optimal task assignment with minimized communication costs and balanced workload. At the node level, partition parameters are optimized so that computational resources can be allocated further. An example is shown to illustrate the hierarchical resource allocation approach and demonstrate its validity. Simulation results comparing the performance of the proposed GSA with that of traditional genetic algorithms are presented in the context of task assignment in IMA systems.
Abstract: Satellite communication systems provide a cost-effective solution for global internet of things (IoT) applications due to their large coverage and easy deployment. This paper focuses on a satellite network system in which a low earth orbit (LEO) satellite network collects sensing data from user terminals (UTs) and then forwards the data to a ground station through a geostationary earth orbit (GEO) satellite network. Considering the limited uplink transmission resources, this paper optimizes the uplink transmission scheduling scheme over the LEO satellites. A novel transmission scheduling algorithm, which combines simulated annealing and Monte Carlo methods (SA-MC), is proposed to achieve a dynamically optimal scheduling scheme. Simulation results show the effectiveness of the proposed SA-MC algorithm in terms of cost reduction and fast convergence.
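A generic simulated-annealing skeleton of the kind the SA-MC scheduler builds on is sketched below. The cost function, neighbourhood move, and cooling parameters are toy assumptions for illustration, not the paper's scheduling model:

```python
import math
import random

# Simulated annealing skeleton: accept improving moves always, and worsening
# moves with Boltzmann probability exp(-delta/T) while the temperature cools.

def simulated_annealing(cost, neighbor, initial, t0=10.0, cooling=0.95,
                        iters=500, seed=0):
    rng = random.Random(seed)
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(iters):
        cand = neighbor(current, rng)
        cand_cost = cost(cand)
        if (cand_cost < current_cost
                or rng.random() < math.exp((current_cost - cand_cost) / t)):
            current, current_cost = cand, cand_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= cooling
    return best, best_cost
```

In a transmission scheduling setting, a state would be an assignment of uplink slots to UTs, `neighbor` would swap or move one assignment, and Monte Carlo sampling would estimate the cost of a candidate schedule.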
Funding: National Natural Science Foundation of China (Nos. 70771065, 71171130, 61473211, 71502129).
Abstract: Tide is a significant factor that interferes with the berthing and departing operations of vessels in tidal ports. It is preferable to incorporate this factor into the simultaneous berth allocation and quay crane (QC) assignment problem (BACAP) in order to facilitate realistic decision-making at container terminals. For this purpose, an integrated optimization model is built with tidal time windows as forbidden intervals for berthing or departing. A hind-and-fore adjustment heuristic is proposed and applied under an iterative optimization framework. Numerical experiments show the satisfying performance of the proposed algorithm.
Abstract: The garment industry in Vietnam is one of the country's strongest industries in the world. However, the production process still encounters scheduling problems that keep it from being optimal. This paper introduces a production scheduling solution that resolves potential delays and lateness in the production process using integer programming and order allocation from a make-to-order manufacturing viewpoint. A number of constraints were considered in the model, which is applied to a real case study of a factory in order to view how tardiness and lateness would be affected, resulting in a better-optimized schedule. Specifically, the constraints considered were order assignments, production time, and tardiness, with an objective function that minimizes the total cost of delay. The results of the study precisely quantify the overall cost of delay for the orders given to the plant and successfully propose a suitable production schedule that makes the most of the given plant. The study has shown promising results that would assist plant and production managers in determining an algorithm they can apply to their production process.
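The objective of minimizing total delay cost can be illustrated on a toy single-line instance. Real models of this kind are solved with integer programming solvers; the exhaustive search below is an assumption for illustration on tiny instances only:

```python
import itertools

# Toy version of the delay-cost objective: sequence orders on one line so the
# total tardiness cost (cost_per_day * days late) is minimized. Brute force
# over permutations stands in for the paper's integer program.

def min_delay_assignment(proc_times, due_dates, cost_per_day):
    """Find the order sequence minimizing total tardiness cost.
    proc_times[i] and due_dates[i] refer to order i."""
    n = len(proc_times)
    best_seq, best_cost = None, float("inf")
    for seq in itertools.permutations(range(n)):
        t, cost = 0, 0
        for i in seq:
            t += proc_times[i]                     # completion time of order i
            cost += cost_per_day * max(0, t - due_dates[i])
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost
```

An integer programming formulation would express the same completion-time and tardiness relations as linear constraints over assignment variables, scaling far beyond what enumeration allows.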
Funding: Supported by the Social Science Foundation of Hebei Province (No. HB19JL007), the Education Technology Foundation of the Ministry of Education (No. 2017A01020), and the Natural Science Foundation of Hebei Province (No. F2021207005).
Abstract: With the rapid development and popularization of 5G and the Internet of Things, a number of new applications have emerged, such as driverless cars. Most of these applications are time-delay sensitive, and some deficiencies were found when processing their data through a cloud-centric architecture. Handling the data generated by terminals at the edge of the network is an urgent problem. In 5G environments, edge computing can better meet the needs of low-delay and wide-connection applications and support fast requests from terminal users. However, edge computing only has a computing advantage at the edge layer, and it is difficult to achieve global resource scheduling and configuration, which may lead to low resource utilization, long task processing delay, and unbalanced system load, thereby affecting users' quality of service. To solve this problem, this paper studies task scheduling and resource collaboration based on a Cloud-Edge-Terminal collaborative architecture, proposes a genetic simulated annealing fusion algorithm, called GSA-EDGE, to achieve task scheduling and resource allocation, and designs a series of experiments to verify the effectiveness of the GSA-EDGE algorithm. The experimental results show that the proposed method can reduce task processing delay compared with local task processing and average task allocation.
Funding: Supported by the Special Projects for the Central Government to Guide the Development of Local Science and Technology (ZY20B11).
Abstract: In order to optimize resource integration and scheduling in the cloud manufacturing environment, this paper proposes using load balancing, service cost, and service quality as optimization goals for resource scheduling. However, resource providers have resource-utilization requirements of the cloud manufacturing platform, and in the process of resource optimization scheduling the parties' interests conflict, which prevents better scheduling results. Therefore, a multithreaded auto-negotiation method based on the Stackelberg game is proposed to resolve these conflicts of interest. The cloud manufacturing platform first calculates an expected-value reduction plan for each round of global optimization. Using the Stackelberg game based negotiation algorithm, the platform negotiates and mediates with the participants' agents, each of which maximizes its self-interest by repeatedly revising its own plan; multiple sets of locally optimized negotiation plans are iteratively found and returned to the platform. Through multiple rounds of negotiation and calculation, a target expected-value reduction plan is finally obtained that balances the benefits of the resource providers and the overall benefit of completing the manufacturing task. Finally, experimental simulation and comparative analysis verify the validity and rationality of the model.
Funding: Supported by the Natural Science Foundation of Tianjin (No. 07JCZDJC05800) and the Science and Technology Supporting Plan of Tianjin (No. 09ZCKFGX29200).
Abstract: In order to improve the transmission accuracy and efficiency of sensing and actuating signals in the Internet of Things (IoT) and ensure system stability, an adaptive resource allocation algorithm is proposed that dynamically assigns network bandwidth and priority among components according to the frequency-domain characteristics of their signals. A remotely sensed and controlled unmanned ground vehicle (UGV) path-tracking test-bed was developed, and multiple UGVs' tracking error signals were measured in simulation for performance evaluation. Results show that, under the same network bandwidth constraints, the proposed algorithm can reduce the accumulated and maximum errors of UGV path tracking by over 60% compared with the conventional static algorithm.
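A minimal sketch of the adaptive idea: give each component bandwidth in proportion to its signal's dominant frequency content, since faster-changing signals need more frequent updates. The proportional weighting rule is an assumption for illustration, not the paper's exact policy:

```python
# Frequency-proportional bandwidth split (illustrative assumption): components
# whose signals change faster receive a proportionally larger bandwidth share.

def allocate_bandwidth(total_bw, signal_freqs):
    """signal_freqs: highest significant frequency (Hz) per component.
    Returns the per-component bandwidth shares summing to total_bw."""
    total = sum(signal_freqs)
    return [total_bw * f / total for f in signal_freqs]
```

A practical version would re-estimate `signal_freqs` online (e.g., from a sliding FFT of each error signal) and also adjust per-component priorities.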
Abstract: A new approximation of fair queuing called Compensating Round Robin (CRR) is presented in this paper. The algorithm uses a packet-by-packet scheduler with a compensating measure. It achieves good fairness in terms of throughput, requires only O(1) time to process a packet, and is simple enough to be implemented in hardware. After analyzing its performance, we simulate the algorithm's fairness and packet loss rate. Simulation results show that CRR can effectively isolate the effects of contending sources.
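CRR's compensating measure resembles the credit counter of deficit round robin: each flow carries over unspent credit so that variable-size packets still receive a fair byte share. The sketch below is a generic DRR-style scheduler under that assumption, not the CRR specification itself:

```python
from collections import deque

# DRR-style round robin: each flow accumulates `quantum` credit per round and
# may send packets while its head-of-line packet fits within the credit.

def drr_schedule(flows, quantum, rounds):
    """flows: list of deques of packet sizes. Returns the service order as
    (flow_index, packet_size) pairs over the given number of rounds."""
    credit = [0] * len(flows)
    served = []
    for _ in range(rounds):
        for i, q in enumerate(flows):
            if not q:
                credit[i] = 0          # idle flows do not accumulate credit
                continue
            credit[i] += quantum
            while q and q[0] <= credit[i]:
                pkt = q.popleft()
                credit[i] -= pkt
                served.append((i, pkt))
    return served
```

Because each flow's per-round work is bounded by its quantum, per-packet processing stays O(1), matching the property claimed for CRR.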
Funding: Supported by the Production, Education and Research Cooperative Program of Guangdong Province and Ministry of Education, China (2012B091100444) and the Fundamental Research Funds for the Central Universities of China (2013ZM0091).
Abstract: To cope with the task scheduling problem under multi-task and transportation considerations in large-scale service-oriented manufacturing systems (SOMS), a service allocation optimization mathematical model was established, and a hybrid discrete particle swarm optimization-genetic algorithm (HDPSOGA) was proposed. In SOMS, each resource involved in the whole life cycle of a product, whether provided by a piece of software or a hardware device, is encapsulated into a service, so transportation during the production of a task should be taken into account because the hard services selected may be provided by various providers in different areas. In the service allocation optimization model, multi-task and transportation are considered simultaneously. In the proposed HDPSOGA algorithm, an integer coding method is applied to establish the mapping between the particle location matrix and the service allocation scheme. The position update is performed according to the cognitive part, the social part, and the previous velocity and position, while introducing the crossover and mutation ideas of genetic algorithms to fit the discrete space. Finally, simulation experiments were carried out to compare with two previous algorithms. The results indicate the effectiveness and efficiency of the proposed hybrid algorithm.