Abstract: The 6th generation mobile network (6G) is a multi-network-interconnection, multi-scenario-coexistence network in which multiple network domains break their original fixed boundaries to form connections and converge. With the optimization objective of maximizing network utility while ensuring performance-centric weighted fairness among flows, this paper designs a reinforcement-learning-based cloud-edge autonomous multi-domain data center network architecture that achieves single-domain autonomy and multi-domain collaboration. Because the utilities of different flows conflict, the fair bandwidth allocation problem for various types of flows is formulated by considering differently defined reward functions. Regarding the tradeoff between fairness and utility, this paper derives the corresponding reward functions for the cases where flows undergo abrupt changes and where they change smoothly. In addition, to accommodate the Quality of Service (QoS) requirements of multiple types of flows, this paper proposes a multi-domain autonomous routing algorithm called LSTM+MADDPG. By introducing a Long Short-Term Memory (LSTM) layer into the actor and critic networks, more information about temporal continuity is captured, further enhancing adaptability to changes in the dynamic network environment. LSTM+MADDPG is compared with recent reinforcement learning algorithms in experiments on real network topologies and traffic traces; the results show that it improves delay convergence speed by 14.6% and postpones the onset of packet loss by 18.2% compared with the other algorithms.
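As an illustration of the architectural change this abstract describes, the sketch below shows a PyTorch actor network for MADDPG with an LSTM layer ahead of the fully connected head. It is a minimal sketch, not the authors' implementation; the hidden sizes, observation window, and observation/action dimensions are assumptions.

```python
import torch
import torch.nn as nn

class LSTMActor(nn.Module):
    """Minimal MADDPG actor with an LSTM layer for temporal context.

    Hidden sizes and dimensions are illustrative assumptions, not the
    paper's hyperparameters; the critic would gain an LSTM the same way.
    """
    def __init__(self, obs_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim); the LSTM summarizes the history
        out, state = self.lstm(obs_seq, state)
        return self.head(out[:, -1]), state  # act on the latest step

# usage: one agent observing a length-8 window of 10-dim link states
actor = LSTMActor(obs_dim=10, action_dim=4)
action, _ = actor(torch.randn(1, 8, 10))
```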
Abstract: With the continuous expansion of data center network scale, changing network requirements, and increasing pressure on network bandwidth, traditional network architectures can no longer meet people's needs. The development of software-defined networking (SDN) has brought new opportunities and challenges to future networks. SDN's separation of the data and control planes improves the performance of the entire network, and researchers have integrated the SDN architecture into data centers to improve network resource utilization and performance. This paper first introduces the basic concepts of SDN and data center networks, then discusses SDN-based load balancing mechanisms for data centers from different perspectives, and finally summarizes the research on SDN-based load balancing mechanisms and looks ahead to their development trends.
Abstract: Objective: To observe the clinical effects of detail-oriented nursing in health checkups at a health management center. Methods: A total of 240 individuals undergoing health checkups at the hospital's health management center from June 2023 to June 2024 were enrolled and randomly divided into two groups of 120 cases each using a random number table. The control group received routine nursing care, while the observation group received detail-oriented nursing care. Differences in checkup quality and the incidence of nursing risks were compared. Results: The form submission rate, project completion rate, and one-time checkup completion rate in the observation group were higher than those in the control group, while the checkup time was shorter (P<0.05). The incidence of nursing risks such as item loss, falls, and patient-nurse disputes was lower in the observation group than in the control group (P<0.05). Conclusion: Applying detail-oriented nursing in health checkups at a health management center can effectively improve checkup quality and reduce the occurrence of nursing risks.
Funding: This work was supported by Universiti Sains Malaysia under external grant (Grant Number 304/PNAV/650958/U154).
Abstract: Interest in selecting an appropriate cloud data center is increasing rapidly due to the popularity and continuous growth of the cloud computing sector. Cloud data center selection is made harder by ever-increasing user requests and the growing number of data centers required to execute them. The cloud service broker policy that governs data center selection is an instance of an NP-hard problem and requires an efficient, high-quality solution. The differential evolution algorithm is a metaheuristic characterized by its speed and robustness, and it is well suited to selecting an appropriate cloud data center. This paper presents a cloud service broker policy based on a modified differential evolution algorithm for selecting the most appropriate data center in a cloud computing environment. The differential evolution algorithm is modified with a newly proposed mutation technique that enhances performance and yields an appropriate selection of data centers. The proposed policy's superiority in selecting the most suitable data center is evaluated using the CloudAnalyst simulator, and the results are compared with state-of-the-art cloud service broker policies.
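For readers unfamiliar with differential evolution, the sketch below shows the classic DE/rand/1 mutation that such a broker policy would modify. The paper's new mutation technique is not reproduced here, and encoding a candidate data center selection as a real-valued vector is an assumption for illustration.

```python
import numpy as np

def de_rand_1_mutation(pop: np.ndarray, i: int, F: float = 0.5) -> np.ndarray:
    """Classic DE/rand/1 mutant vector for individual i.

    pop holds real-valued candidate solutions (here: assumed scores over
    data centers); three distinct random peers form the mutant. The
    paper's modified mutation replaces this baseline step.
    """
    n = len(pop)
    r1, r2, r3 = np.random.choice(
        [k for k in range(n) if k != i], size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])  # base + scaled difference

# usage: a population of 20 candidates over 5 data centers
pop = np.random.rand(20, 5)
mutant = de_rand_1_mutation(pop, i=0)
```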
Abstract: The rapid development of urbanization requires that the land management business move away from its former single-system pattern and advance toward functional integration and data sharing. To meet this requirement, this paper presents new thinking on the land management pattern, together with data center management tools for the integration of urban and rural areas. The tools are based on MapGIS and make the management of multi-subject, multi-area, multi-source, and multi-measurement data possible. The system's techniques are designed in accordance with the relevant national standards. Experimental results show that the tools have a clear technical advantage in the integrated management of the land resource business.
Abstract: The increase in computing capacity caused a rapid and sudden increase in the Operational Expenses (OPEX) of data centers. OPEX reduction is a major concern and a key target in modern data centers. In this study, the scalability of the Dynamic Voltage and Frequency Scaling (DVFS) power management technique is studied under multiple workloads in a 3-tier data center. We conducted multiple experiments to find the impact of DVFS on energy reduction under two scheduling techniques, namely Round Robin and Green. We observed that the amount of energy reduction varies with data center load: as the load increases, the energy reduction decreases. Experiments using the Green scheduler showed around an 83% decrease in power consumption when DVFS is enabled and the data center is lightly loaded. When the data center is fully loaded, so that the servers' CPUs are constantly busy with no idle time, the effect of DVFS decreases and stabilizes at less than 10%. Experiments using the Round Robin scheduler showed less energy saving from DVFS: around 25% under light load and less than 5% under heavy load. To find the effect of task weight on energy consumption, a set of experiments was conducted with thin and fat tasks, where a thin task has far fewer instructions than a fat task. We observed through simulation that the difference in power reduction between the two task types under DVFS is less than 1%.
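The load dependence reported above follows from the physics of DVFS: dynamic CPU power scales roughly with V²f, and voltage can only be lowered when the load leaves headroom. A toy model illustrating this, with entirely assumed constants and frequency levels, might look as follows.

```python
def cpu_power(util: float, f_max_ghz: float = 3.0,
              levels=(1.0, 0.8, 0.6, 0.4)) -> float:
    """Toy DVFS power model: dynamic power ~ C * V^2 * f, with V tracking f.

    Picks the lowest frequency scaling level that still covers the offered
    load, so a lightly loaded server runs at a low V/f point and saves a
    lot; a fully loaded one must run at full speed and saves nothing.
    All constants are illustrative, not from the paper.
    """
    C = 10.0  # effective switched capacitance (arbitrary units)
    for s in sorted(levels):                       # try the slowest level first
        if s >= util:                              # enough capacity for the load?
            return C * (s ** 2) * (s * f_max_ghz)  # V^2 * f, V proportional to f
    return C * f_max_ghz                           # overload: pin to full speed

# lightly loaded vs fully loaded: the DVFS saving shrinks as load grows
print(cpu_power(0.2), cpu_power(1.0))
```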
Abstract: To provide a scientific management basis for garden planning, project construction, maintenance, and social service, this paper proposes that urban gardening administration sectors construct a gardening information management system. On the basis of a full requirements analysis of the gardening sectors, the paper discusses the key technologies for system construction. It also proposes building the system flexibly using the secondary development design and runtime environments of a data-center-based integration development platform. The system greatly assists daily management and plays a very important role in improving the urban ecological environment and the investment environment.
Funding: Supported in part by the HUT Distributed and Mobile Cloud Systems research project and by Tekes within ITEA2 project 10014 EASI-CLOUDS.
Abstract: In recent years, dual-homed topologies have appeared in data centers to offer higher aggregate bandwidth by using multiple paths simultaneously. Multipath TCP (MPTCP) has been proposed as a replacement for TCP in these topologies, as it efficiently offers improved throughput and better fairness. However, we have found that MPTCP suffers from incast collapse, where the receiver sees a drastic goodput drop when it simultaneously requests data from multiple servers. In this paper, we investigate why goodput collapses even though MPTCP can actively relieve hot spots. To address the problem, we propose an equally-weighted congestion control algorithm for MPTCP, namely EW-MPTCP, which requires no centralized control, additional infrastructure, or hardware upgrade. In our scheme, in addition to the coupled congestion control performed on each subflow of an MPTCP connection, each subflow performs an additional congestion control operation by weighting its congestion window in reverse proportion to the number of servers. The goal is to mitigate incast collapse by allowing multiple MPTCP subflows to compete fairly with a single TCP flow at the shared bottleneck. Simulation results show that our solution mitigates the incast problem and noticeably improves goodput in data centers.
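The core idea, weighting each subflow's window growth by the inverse of the number of servers, can be sketched as a per-ACK update. This is a simplified sketch: the coupled-increase term follows standard MPTCP linked-increase practice, and the exact weighting in the paper may differ.

```python
def ewmptcp_increase(cwnd: float, alpha: float,
                     total_cwnd: float, n_servers: int) -> float:
    """Per-ACK congestion window growth for one EW-MPTCP subflow (sketch).

    Coupled MPTCP grows a subflow by min(alpha/total_cwnd, 1/cwnd) per ACK;
    EW-MPTCP additionally scales that growth by 1/n_servers so the subflows
    of n_servers incast senders jointly behave like a single TCP flow at
    the shared bottleneck. Simplified from the paper's description.
    """
    coupled = min(alpha / total_cwnd, 1.0 / cwnd)  # linked-increase term
    return cwnd + coupled / n_servers              # reverse-proportional weight

# usage: a subflow with cwnd 10 segments in a 16-server incast
print(ewmptcp_increase(cwnd=10.0, alpha=0.5, total_cwnd=30.0, n_servers=16))
```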
Funding: Supported by the National Natural Science Foundation of China (61472042) and the Corporation Science and Technology Program of Global Energy Interconnection Group Ltd. (GEIGC-D-[2018]024).
Abstract: With the rapid development of technologies such as big data and cloud computing, exponentially growing data communication and data computing have led to enormous energy consumption in data centers. Globally, data centers are set to become the world's largest energy consumers, with their share rising from 3% in 2017 to 4.5% in 2025. Thanks to its unique climate and energy-saving advantages, the high-latitude Pan-Arctic region has in recent years become a hotspot for data center site selection. To predict and analyze the future energy consumption and carbon emissions of global data centers, this paper presents a new prediction method based on global data center traffic and power usage effectiveness (PUE). First, global data center traffic growth is predicted based on Cisco's research. Second, the dynamic global average PUE and the high-latitude PUE are obtained from a Romonet simulation model, and global data center energy consumption is then analyzed quantitatively via polynomial fitting under two scenarios, decentralized and centralized. The simulation results show that, in 2030, global data center energy consumption and carbon emissions in the centralized scenario are about 301 billion kWh and 720 million tons of CO2 lower than in the decentralized scenario, confirming that establishing data centers in the Pan-Arctic region can effectively relieve future climate-change and energy problems. This study provides support for global energy consumption prediction and guidance for the layout of future global data centers from an energy consumption perspective. Moreover, it supports the feasibility of integrating energy and information networks under the Global Energy Interconnection conception.
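A minimal sketch of the prediction pipeline (trend-fit traffic, then multiply by energy per unit traffic and by the scenario PUE) is given below. The traffic figures, energy intensity, and PUE values are placeholder assumptions, not the paper's data.

```python
import numpy as np

# Illustrative only: synthetic traffic history (zettabytes/year) and PUEs.
years = np.array([2017, 2019, 2021, 2023])
traffic_zb = np.array([7.7, 12.0, 19.5, 30.0])   # assumed traffic values
kwh_per_zb = 9.0e9                               # assumed IT energy per ZB

coef = np.polyfit(years, traffic_zb, deg=2)      # polynomial trend fit
traffic_2030 = np.polyval(coef, 2030)            # extrapolate to 2030

for label, pue in [("decentralized (global avg PUE)", 1.6),
                   ("centralized (Pan-Arctic PUE)", 1.2)]:
    energy_kwh = traffic_2030 * kwh_per_zb * pue  # facility energy = IT * PUE
    print(f"{label}: {energy_kwh:.3e} kWh in 2030")
```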
Funding: Supported by the National Natural Science Foundation of China (61202004, 61272084), the National Key Basic Research Program of China (973 Program) (2011CB302903), the Specialized Research Fund for the Doctoral Program of Higher Education (20093223120001, 20113223110003), the China Postdoctoral Science Foundation (2011M500095, 2012T50514), the Natural Science Foundation of Jiangsu Province (BK2011754, BK2009426), the Jiangsu Postdoctoral Science Foundation (1102103C), the Natural Science Fund of Higher Education of Jiangsu Province (12KJB520007), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (yx002001).
Abstract: How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. A winner tree is introduced with the data nodes as its leaf nodes, and the final winner is selected so as to reduce energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
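A winner tree selects its "winner" by replaying pairwise matches from the leaves up to the root. The sketch below picks the data node with the lowest score; the scoring function and the paper's task comparison coefficient are assumptions not reproduced here.

```python
class WinnerTree:
    """Minimal winner tree: leaves are data nodes, internal nodes hold the
    index of the match winner (here: the node with the lowest power score).

    The per-node score is an assumed stand-in for L3SA's energy criterion.
    """
    def __init__(self, scores):
        self.n = len(scores)
        self.scores = list(scores)
        self.tree = [0] * (2 * self.n)          # segment-tree layout
        for i in range(self.n):
            self.tree[self.n + i] = i           # leaves hold node indices
        for i in range(self.n - 1, 0, -1):      # replay matches bottom-up
            l, r = self.tree[2 * i], self.tree[2 * i + 1]
            self.tree[i] = l if self.scores[l] <= self.scores[r] else r

    def winner(self) -> int:
        return self.tree[1]                     # root holds the final winner

# usage: pick the least-loaded node for the next task
tree = WinnerTree([0.7, 0.2, 0.9, 0.4])
print(tree.winner())  # -> 1
```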
Funding: Supported in part by the National Natural Science Foundation of China (61601252, 61801254), the Public Technology Projects of Zhejiang Province (LG-G18F020007), the Zhejiang Provincial Natural Science Foundation of China (LY20F020008, LY18F020011, LY20F010004), and the K.C. Wong Magna Fund in Ningbo University.
Abstract: With the emergence of diverse applications in data centers, the demands on quality of service (QoS) have also become diverse, such as high throughput for elephant flows and low latency for deadline-sensitive flows. Traditional TCPs are ill-suited to such situations and often make data transfers inefficient (e.g., missing flow deadlines, throughput collapse), further degrading the user-perceived QoS in data centers. To reduce the flow completion time of mice and deadline-sensitive flows while promoting the throughput of elephant flows, this paper proposes an efficient and deadline-aware priority-driven congestion control (PCC) protocol that grants mice and deadline-sensitive flows the highest priority. Specifically, PCC computes the priority of each flow from the size of the transmitted data, the remaining data volume, and the flow's deadline, and then adjusts the congestion window according to the flow priority and the degree of network congestion. Furthermore, switches in the data center control the input/output of packets based on flow priority and queue length. Unlike existing TCPs, PCC speeds up the data transfers of mice and deadline-sensitive flows by providing an effective method to compute and encode flow priority explicitly. Based on flow priority, switches can manage packets efficiently and ensure the transfers of high-priority flows through weighted priority scheduling with only minor modification. Experimental results show that PCC improves the data transfer performance of mice and deadline-sensitive flows while guaranteeing the throughput of elephant flows.
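A hypothetical rendering of the two steps named above, priority computation and priority-dependent window adjustment, is sketched below. The weights and functional forms are assumptions, not the paper's explicit encoding scheme.

```python
def flow_priority(sent_bytes: float, remaining_bytes: float,
                  slack_s: float) -> float:
    """PCC-style priority sketch: a smaller score means higher priority.

    Combines data already sent, remaining volume, and deadline slack, so
    mice and urgent deadline flows score low. Weights are assumed.
    """
    w_sent, w_rem, w_slack = 1.0, 1.0, 1e4      # assumed weights (bytes vs s)
    return w_sent * sent_bytes + w_rem * remaining_bytes + w_slack * max(slack_s, 0.0)

def adjust_cwnd(cwnd: float, score: float, congested: bool,
                scale: float = 1e6) -> float:
    """Window update sketch: low-score (high-priority) flows back off less
    under congestion and ramp up faster otherwise."""
    urgency = 1.0 / (1.0 + score / scale)       # in (0, 1]; 1 = most urgent
    if congested:
        return max(cwnd * (0.5 + 0.45 * urgency), 1.0)
    return cwnd + urgency

# a 10 KB mouse with 50 ms of slack vs a 100 MB elephant with no deadline
mouse = flow_priority(2e3, 8e3, 0.05)
elephant = flow_priority(5e7, 5e7, 1e3)
print(mouse < elephant)  # -> True: the mouse gets higher priority
```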
Funding: Supported by the National Key Research and Development Program of China under Grant No. 2016YFB0402302 and the National Natural Science Foundation of China under Grant No. 91433206.
Abstract: Global data traffic is growing rapidly, and the demand for optoelectronic transceivers in data centers (DCs) is increasing correspondingly. In this review, we first briefly introduce the development of optoelectronic transceivers in DCs and the advantages of silicon photonic chips fabricated in a complementary metal oxide semiconductor process. We also summarize research on the main components of silicon photonic transceivers. In particular, quantum dot lasers have shown great potential as light sources for silicon photonic integration, whether through bonding or monolithic integration, thanks to their unique advantages over conventional quantum-well counterparts. Some solutions for high-speed optical interconnection in DCs are then discussed; among them, wavelength division multiplexing and four-level pulse-amplitude modulation have been widely studied and applied. At present, the application of coherent optical communication technology has moved from the backbone network to the metro network, and now to DCs.
Funding: This research was partially supported by the National Grand Fundamental Research 973 Program of China under Grant No. 2013CB329103, the Natural Science Foundation of China under Grant No. 61271171, the Fundamental Research Funds for the Central Universities (ZYGX2013J002, ZYGX2012J004, ZYGX2010J002, ZYGX2010J009), and Guangdong Science and Technology Projects (2012B090500003, 2012B091000163, 2012556031).
Abstract: Virtualization is a common technology for resource sharing in data centers. To make efficient use of data center resources, the key challenge is to map customer demands (modeled as virtual data centers, VDCs) onto the physical data center effectively. In this paper, we focus on this problem. Distinct from previous works, our study of the VDC embedding problem assumes that switch resources are the bottleneck of data center networks (DCNs). To this end, we propose a relative cost metric to evaluate embedding strategies and, exploiting the properties of the fat-tree, decouple the embedding problem into VM placement with marginal resource assignment and virtual link mapping with decided source-destination pairs. We then design a traffic-aware embedding algorithm (TAE) and first-fit virtual link mapping (FFLM) to map virtual data center requests to a physical data center. Simulation results show that TAE+FFLM increases the acceptance rate and reduces network cost (by about 49% in our case) at the same time. The traffic-aware embedding algorithm also reduces core-link traffic load and creates optimization opportunities for data center network energy conservation.
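The second stage, first-fit virtual link mapping, admits a compact sketch: once VMs are placed, each virtual link takes the first candidate physical path with enough spare capacity on every hop. The data structures below are illustrative assumptions, not the paper's implementation.

```python
def first_fit_link_mapping(paths: dict, link_capacity: dict, demands: list):
    """First-fit virtual link mapping in the spirit of FFLM (sketch).

    paths[(s, d)] lists candidate physical paths (each a list of link
    names); the first path with enough spare capacity on every link is
    taken and its bandwidth reserved. Returns None if any virtual link
    cannot be mapped, i.e. the VDC request is rejected.
    """
    mapping = {}
    for (s, d, bw) in demands:
        for path in paths[(s, d)]:
            if all(link_capacity[l] >= bw for l in path):  # fits everywhere?
                for l in path:
                    link_capacity[l] -= bw                 # reserve bandwidth
                mapping[(s, d)] = path
                break
        else:
            return None  # reject: no candidate path can carry this link
    return mapping

# usage: two candidate paths between placed VMs a and b
caps = {"e1": 10, "e2": 3, "e3": 10}
paths = {("a", "b"): [["e2"], ["e1", "e3"]]}
print(first_fit_link_mapping(paths, caps, [("a", "b", 5)]))
# -> {('a', 'b'): ['e1', 'e3']}: the short path lacks capacity, so the
#    first fitting alternative is chosen
```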
Funding: Supported by the Major State Basic Research Program of China (973 Project Nos. 2013CB329301 and 2010CB327806), the Natural Science Fund of China (NSFC Project Nos. 61372085, 61032003, 61271165, and 61202379), and the Research Fund for the Doctoral Program of Higher Education of China (RFDP Project Nos. 20120185110025, 20120185110030, and 20120032120041); also supported by the Tianjin Key Laboratory of Cognitive Computing and Application, School of Computer Science and Technology, Tianjin University, Tianjin, P. R. China.
Abstract: We consider differentiated time-critical task scheduling in an N×N input-queued optical packet switch to ensure 100% throughput and meet the different delay requirements of various data center modules. Existing schemes either consider slot-by-slot scheduling with queue depth serving as the delay metric, or assume in batch scheduling mode that each input-output connection has the same delay bound. The former neglects the effect of reconfiguration overhead, which may cripple system performance, while the latter cannot satisfy users' differentiated Quality of Service (QoS) requirements. To make up for these deficiencies, we propose a new batch scheduling scheme that meets various port-to-port delay requirements in a best-effort manner. Moreover, a speedup is introduced to compensate for both the reconfiguration overhead and the unavoidable slot wastage in the switch fabric. Given the traffic matrix and the delay constraint matrix, this paper proposes two heuristic algorithms, Stringent Delay First (SDF) and m-order SDF (m-SDF), to realize 100% packet switching while maximizing the delay-constraint satisfaction ratio. The performance of our scheme is verified by extensive numerical simulations.
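The intuition behind Stringent Delay First can be conveyed with a greedy sketch that serves the tightest delay bound first. This is only a flavor of the heuristic: a real batch scheduler must also enforce the crossbar constraint slot by slot and account for reconfiguration overhead and speedup, which this simplification ignores.

```python
def stringent_delay_first(demands):
    """SDF-style greedy sketch: schedule the tightest delay bound first.

    demands: list of (input_port, output_port, cells, delay_bound_slots).
    Returns naive per-connection slot windows plus whether each deadline
    is met; port availability is tracked per input and per output.
    """
    schedule, free_in, free_out = [], {}, {}
    for inp, out, cells, bound in sorted(demands, key=lambda d: d[3]):
        start = max(free_in.get(inp, 0), free_out.get(out, 0))
        finish = start + cells                      # serve the burst contiguously
        schedule.append((inp, out, start, finish, finish <= bound))
        free_in[inp] = free_out[out] = finish       # both ports busy until then
    return schedule  # (in, out, start, finish, deadline_met)

# usage: three connections with different delay bounds
print(stringent_delay_first([(0, 1, 4, 12), (0, 2, 2, 3), (1, 2, 3, 20)]))
```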
Funding: Supported in part by the National Key Research and Development Program of China (2019YFB2103200), NSFC (61672108), the Open Subject Funds of the Science and Technology on Information Transmission and Dissemination in Communication Networks Laboratory (SKX182010049), the Fundamental Research Funds for the Central Universities (500419319, 2019PTB-019), and the Industrial Internet Innovation and Development Project 2018 of China.
Abstract: The development of cloud computing and virtualization technology has brought great challenges to the reliability of data center services. Data centers typically contain a large number of compute and storage nodes that may fail and affect the quality of service, so failure prediction is an important means of ensuring service availability. Predicting node failure in cloud-based data centers is challenging because the failure symptoms have complex characteristics, and the distribution imbalance between failure samples and normal samples is widespread, resulting in inaccurate predictions. Targeting these challenges, this paper proposes a novel failure prediction method, FP-STE (Failure Prediction based on Spatio-temporal Feature Extraction). First, an improved recurrent neural network, HW-GRU (improved GRU based on a highway network), and a convolutional neural network (CNN) are used to extract the temporal and spatial features of multivariate data, respectively, to increase the discrimination of different types of failure symptoms and thereby improve prediction accuracy. The intermediate results of the two models are then added as features to SCS-XGBoost to predict the possibility and the precise type of future node failure. SCS-XGBoost is an ensemble learning model improved by an integrated strategy of oversampling and cost-sensitive learning. Experimental results based on real data sets confirm the effectiveness and superiority of FP-STE.
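The two-branch feature extraction can be sketched in PyTorch as below, with a plain GRU standing in for the paper's HW-GRU and all dimensions assumed. The concatenated features would then be used to train the SCS-XGBoost classifier, which is not shown.

```python
import torch
import torch.nn as nn

class SpatioTemporalFeatures(nn.Module):
    """Two-branch feature extraction in the spirit of FP-STE (sketch).

    A plain GRU (stand-in for HW-GRU) summarizes each node's metric time
    series; a 1-D CNN captures cross-metric (spatial) structure. The
    concatenated output would feed the boosted classifier. Dimensions are
    illustrative assumptions.
    """
    def __init__(self, n_metrics: int, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(n_metrics, hidden, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv1d(n_metrics, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):                        # x: (batch, time, metrics)
        _, h = self.gru(x)                       # temporal summary
        spatial = self.cnn(x.transpose(1, 2))    # metrics as conv channels
        return torch.cat([h[-1], spatial.squeeze(-1)], dim=1)

# 4 nodes, 60 time steps, 12 monitored metrics each
feats = SpatioTemporalFeatures(n_metrics=12)(torch.randn(4, 60, 12))
print(feats.shape)  # -> torch.Size([4, 64]); train the classifier on these
```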
Abstract: New and emerging use cases, such as the interconnection of geographically distributed data centers (DCs), are drawing attention to the requirement for dynamic end-to-end service provisioning spanning multiple, heterogeneous optical network domains. This heterogeneity is due not only to diverse data transmission and switching technologies, but also to different options for control plane techniques. In light of this, the problem of heterogeneous control plane interworking needs to be solved, and in particular the solution must address the specific issues of multi-domain networks, such as limited domain topology visibility under scalability and confidentiality constraints. This article reviews recent activities on Software-Defined Networking (SDN) orchestration that address this multi-domain control plane interworking problem. Specifically, three models, a single SDN controller, multiple SDN controllers in a mesh, and multiple SDN controllers in a hierarchical setting, are presented for DC interconnection networks with multiple SDN/OpenFlow domains or multiple OpenFlow/Generalized Multi-Protocol Label Switching (GMPLS) heterogeneous domains. In addition, two concrete implementations of the orchestration architectures are detailed, showing the overall feasibility of SDN orchestration and the procedures for end-to-end service provisioning in multi-domain data center optical networks.
Funding: Supported by the National High Technology Research and Development Program of China under Grant No. 2015AA016902, the National Natural Science Foundation of China under Grant Nos. 61435013 and 61405188, and the K.C. Wong Education Foundation.
Abstract: An 8×10 GHz receiver optical sub-assembly (ROSA), consisting of an 8-channel arrayed waveguide grating (AWG) and an 8-channel PIN photodetector (PD) array, is designed and fabricated based on silica hybrid integration technology. Multimode output waveguides in the silica AWG with a 2% refractive index difference are used to obtain flat-top spectra. The output waveguide facet is polished to a 45° bevel to redirect the light into the mesa-type PIN PD, which simplifies the packaging process. The experimental results show that the single-channel 1 dB bandwidth of the AWG ranges from 2.12 nm to 3.06 nm, the ROSA responsivity ranges from 0.097 A/W to 0.158 A/W, and the 3 dB bandwidth reaches 11 GHz. The device is promising for eight-lane WDM transmission systems in data center interconnection.
Funding: This work is financially supported by the Ministry of Research and Technology of Indonesia (BRIN) through the project “Penggunaan Immersion Cooling untuk Meningkatkan Efisiensi Energi Data Center”.
Abstract: Data centers are recognized as one of the most important aspects of the fourth industrial revolution, yet conventional data centers are inefficient and depend on high energy consumption, with cooling responsible for 40% of the usage. This research therefore proposes an immersion cooling method to address the high energy consumption of data centers by cooling their components in two types of dielectric fluid. Four experimental stages are used, covering fluid types, cooling effectiveness, optimization, and durability. Benchmark software is used to drive the CPU at maximum load while temperature data are recorded for 24 h. The results show that immersion cooling keeps temperatures 13 ℃ lower than the conventional cooling method, which translates into energy savings in the data center. The optimal operating point for reducing temperature is a flow rate of 1.5 lpm and a fan speed of 800 rpm. Among the dielectric fluids, mineral oil (MO) performs better than virgin coconut oil (VCO). In the durability experiment, no components were damaged after five months immersed in the fluid.
Funding: Supported in part by the National Key Basic Research Program of China (973 Program) under Grant No. 2011CB302506, the Important National Science & Technology Specific Project “Next-Generation Broadband Wireless Mobile Communications Network” under Grant No. 2011ZX03002-001-01, and the Innovative Research Groups of the National Natural Science Foundation of China under Grant No. 60821001.
Abstract: Resource scheduling is crucial to data centers. However, most previous works focus only on one-dimensional resource models, ignoring the fact that multiple resources are utilized simultaneously, including CPU, memory, and network bandwidth. As cloud computing allows uncoordinated and heterogeneous users to share a data center, competition for multiple resources has become increasingly severe. Motivated by the differences in integrated utilization obtained from different packing schemes, this paper treats the scheduling problem as a multi-dimensional combinatorial optimization problem with constraint satisfaction. Since the problem is NP-hard, we present Multiple-attribute-decision-based Integrated Resource Scheduling (MIRS), a novel heuristic algorithm that obtains an approximately optimal solution. Simulation results show that, across various workload sets, our algorithm has significant advantages in efficiency and performance over previous methods.
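To give a minimal flavor of multi-dimensional, multi-attribute placement: score each feasible server on how tightly and how evenly a task's (CPU, memory, bandwidth) demand fits, then pick the best. The scoring below is an illustrative stand-in for MIRS's attribute weighting, not the paper's algorithm.

```python
def place_task(demand, servers):
    """Multi-dimensional placement sketch in the spirit of MIRS.

    demand and each server's free capacity are (cpu, mem, bw) tuples
    normalized to [0, 1]. The score rewards tight, balanced residual
    capacity across dimensions, since a skewed leftover (e.g. CPU free
    but memory exhausted) is hard to use for future tasks.
    """
    best, best_score = None, float("inf")
    for name, free in servers.items():
        if any(f < d for f, d in zip(free, demand)):
            continue                                  # infeasible on some dimension
        residual = [f - d for f, d in zip(free, demand)]
        imbalance = max(residual) - min(residual)     # penalize skewed leftovers
        score = sum(residual) + imbalance             # prefer tight, balanced fits
        if score < best_score:
            best, best_score = name, score
    return best

# usage: a task needing (0.2 CPU, 0.4 mem, 0.1 bw)
servers = {"s1": (0.9, 0.5, 0.2), "s2": (0.3, 0.5, 0.3)}
print(place_task((0.2, 0.4, 0.1), servers))  # -> 's2': tighter, more balanced
```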
Funding: Supported by the ZTE-BJTU Collaborative Research Program under Grant No. K11L00190 and the Fundamental Research Funds for the Central Universities under Grant No. K12JB00060.
Abstract: 1 Introduction. The history of data centers can be traced back to the 1960s. Early data centers were deployed on mainframes that were time-shared by users via remote terminals. The boom in data centers came during the internet era, when many companies started building large internet-connected facilities.