Abstract: Live Virtual Machine (VM) migration is one of the foremost techniques for improving the efficiency of Cloud Data Centers (CDCs), as it leads to better resource usage. Because the workload of a CDC is often dynamic, it is better to forecast the upcoming workload so that overload and underload conditions are detected early and migration is triggered at a point when enough resources are available. Although various statistical and machine learning approaches are widely applied to resource usage prediction, they often fail to handle the growing non-linearity of CDC data. To overcome this issue, a novel Hypergraph-based Convolutional Deep Bi-Directional Long Short-Term Memory (CDB-LSTM) model is proposed. The CDB-LSTM adopts the Helly property of hypergraphs and the Savitzky–Golay (SG) filter to select informative samples and exclude noise interference and outliers. The proposed approach optimizes resource usage prediction and reduces the number of migrations with minimal computational complexity during live VM migration. Further, the proposed prediction approach applies a correlation coefficient measure to select the appropriate destination server for VM migration. The Hypergraph-based CDB-LSTM was validated on the Google cluster dataset and compared with state-of-the-art approaches on various evaluation metrics.
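The abstract mentions a correlation coefficient measure for choosing the destination server, but not the exact series being compared or the selection rule. The sketch below is therefore an assumption: it computes the Pearson correlation between the migrating VM's usage trace and each candidate host's recent load trace, and picks the least-correlated host so that demand peaks do not coincide after migration. The function names are illustrative, not the authors'.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def select_destination(vm_usage, host_loads):
    """Pick the host whose load profile is least correlated with the VM's
    usage (assumed rule), so aggregate load stays smooth after migration."""
    return min(host_loads, key=lambda h: pearson(vm_usage, host_loads[h]))
```

For example, given `vm_usage = [1, 2, 3, 4]` and candidates `{"hostA": [4, 3, 2, 1], "hostB": [1, 2, 3, 4]}`, the rule prefers `hostA`, whose load moves opposite to the VM's demand.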
Abstract: In big data platforms, the sheer volume of data makes load imbalance a prominent problem. Most current load-balancing methods suffer from a high data-flow loss rate and long response times; therefore, a more effective load-balancing method is urgently needed. Taking HBase as the research subject, this study analyzed a dynamic load-balancing method for data flow. First, the HBase platform was introduced briefly, and then the dynamic load-balancing algorithm was designed. The data flow was divided into blocks, and the load of each node was predicted with the grey prediction GM(1,1) model. Finally, the load was migrated through a dynamically adjustable method to achieve load balancing. The experimental results showed that the method predicted load accurately, with an average error of 0.93%, and had a short average response time; under 3000 tasks, the response time of the proposed method was 14.17% shorter than that of the method combining TV white space (TVWS) and long-term evolution (LTE); the average flow of the most heavily loaded node was also smaller, and the data-flow loss rate was essentially 0%. These results demonstrate the effectiveness of the proposed method, which can be further promoted and applied in practice.
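The GM(1,1) model named above is a standard grey prediction technique, so a minimal sketch of it can be given even though the paper's exact data pipeline is not reproduced here: accumulate the series (1-AGO), fit the whitening equation by least squares, evaluate the exponential time-response function, and difference back to the original scale. Function names are illustrative.

```python
from math import exp

def gm11_predict(x0):
    """Fit GM(1,1) to the series x0 and return the one-step-ahead forecast."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [sum(x0[:k + 1]) for k in range(n)]
    # background values z1(k) = 0.5 * (x1(k) + x1(k-1))
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # least squares for x0(k) + a*z1(k) = b via 2x2 normal equations
    m = n - 1
    szz = sum(v * v for v in z)
    sz = sum(z)
    szy = sum(v * w for v, w in zip(z, y))
    sy = sum(y)
    det = m * szz - sz * sz
    a = -(m * szy - sz * sy) / det   # development coefficient
    b = (szz * sy - sz * szy) / det  # grey input
    # time-response: x1_hat(k+1) = (x0(1) - b/a) * exp(-a*k) + b/a
    f = lambda k: (x0[0] - b / a) * exp(-a * k) + b / a
    # restore the original series by first-order differencing (1-IAGO)
    return f(n) - f(n - 1)
```

On a near-exponential load trace such as `[10, 12, 14.4, 17.28]` (ratio 1.2), the forecast comes out close to the true continuation 20.736, which is the regime where GM(1,1) works best; for oscillating loads it degrades, which is presumably why the paper pairs it with dynamic re-migration.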
Funding: supported by the National Natural Science Foundation of China (No. 51877076).
Abstract: With the large-scale connection of 5G base stations (BSs) to distribution networks (DNs), 5G BSs are utilized as flexible loads to participate in peak load regulation, where the BSs can be divided into base station groups (BSGs) to realize inter-district energy transfer. A Stackelberg game-based optimization framework is proposed, in which the distribution network operator (DNO) acts as the leader with dynamic pricing for the multiple BSGs, while the BSGs serve as followers with demand-response capability, adjusting their charging and discharging strategies in the temporal dimension and their load-migration strategy in the spatial dimension. Subsequently, the existence and uniqueness of the Stackelberg equilibrium (SE) are established. Moreover, differential evolution is adopted to reach the SE, and the optimization problem over the multiple BSGs is decomposed to resolve the time-space coupling. Finally, simulation of a practical system shows that the DNO's operating profit increases by cutting the peak load while the operating costs of the BSGs are reduced, achieving a win-win effect.
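Differential evolution, the search method the abstract names for reaching the SE, can be sketched in its common rand/1/bin form. The leader's actual objective (DNO pricing against the BSGs' best responses) is not reproduced here; the minimal sketch below minimizes a stand-in objective, and all parameter values are illustrative defaults, not the paper's settings.

```python
import random

def differential_evolution(obj, bounds, pop_size=20, F=0.6, CR=0.9,
                           gens=150, seed=0):
    """Minimize obj over box constraints with DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [obj(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct donors, none equal to the target i
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            ft = obj(trial)
            if ft <= fit[i]:  # greedy one-to-one survivor selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

In the paper's setting the decision vector would be the DNO's price schedule and `obj` would evaluate negative profit after each BSG solves its follower problem; here a simple sphere function, e.g. `differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)`, suffices to show convergence toward the optimum at the origin.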