Journal Articles
6,405 articles found
1. Dynamic Routing of Multiple QoS-Required Flows in Cloud-Edge Autonomous Multi-Domain Data Center Networks
Authors: Shiyan Zhang, Ruohan Xu, Zhangbo Xu, Cenhua Yu, Yuyang Jiang, Yuting Zhao. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 2, pp. 2287–2308 (22 pages)
The 6th generation mobile network (6G) is a multi-network interconnection, multi-scenario coexistence network in which multiple network domains break their original fixed boundaries to form connections and converge. With the optimization objective of maximizing network utility while ensuring performance-centric weighted fairness among flows, this paper designs a reinforcement learning-based cloud-edge autonomous multi-domain data center network architecture that achieves single-domain autonomy and multi-domain collaboration. Because the utilities of different flows conflict, the bandwidth fairness allocation problem for the various types of flows is formulated with differently defined reward functions. To handle the tradeoff between fairness and utility, the paper derives the corresponding reward functions for flows that undergo abrupt changes and for flows that change smoothly. In addition, to accommodate the Quality of Service (QoS) requirements of multiple types of flows, the paper proposes a multi-domain autonomous routing algorithm called LSTM+MADDPG. Introducing a Long Short-Term Memory (LSTM) layer into the actor and critic networks adds information about temporal continuity and further enhances the algorithm's ability to adapt to changes in the dynamic network environment. LSTM+MADDPG is compared with recent reinforcement learning algorithms in experiments on real network topologies and traffic traces; the results show that it improves delay convergence speed by 14.6% and delays the onset of packet loss by 18.2% compared with the other algorithms.
Keywords: multi-domain, data center networks, autonomous routing
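The abstract above describes adding an LSTM layer to the actor and critic networks of MADDPG so the policy can exploit temporal continuity in traffic observations. The paper's own layer sizes and hyperparameters are not given here; the following PyTorch sketch only illustrates the general idea of an LSTM-augmented actor, with all dimensions and names chosen as hypothetical placeholders.

```python
import torch
import torch.nn as nn

class LSTMActor(nn.Module):
    """Actor that encodes a short observation history with an LSTM
    before producing per-path routing weights (illustrative only)."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Softmax(dim=-1),
        )

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim) - a sliding window of link-state observations
        out, _ = self.lstm(obs_seq)
        last = out[:, -1, :]          # keep the most recent hidden state
        return self.head(last)        # split ratios over candidate paths

# toy usage: 4 agents (domains), 10-step history, 16 link features, 3 candidate paths
actor = LSTMActor(obs_dim=16, act_dim=3)
history = torch.randn(4, 10, 16)
print(actor(history).shape)  # torch.Size([4, 3])
```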
2. Review of Load Balancing Mechanisms in SDN-Based Data Centers
Authors: Qin Du, Xin Cui, Haoyao Tang, Xiangxiao Chen. 《Journal of Computer and Communications》, 2024, Issue 1, pp. 49–66 (18 pages)
With the continuous expansion of data center networks, changing network requirements, and increasing pressure on network bandwidth, traditional network architectures can no longer meet users' needs. The development of software-defined networking (SDN) has brought new opportunities and challenges to future networks. SDN's separation of the data and control planes improves the performance of the entire network, and researchers have integrated the SDN architecture into data centers to improve network resource utilization and performance. This paper first introduces the basic concepts of SDN and data center networks, then discusses SDN-based load-balancing mechanisms for data centers from different perspectives, and finally summarizes the research on SDN-based load balancing and looks ahead to its development trends.
Keywords: software-defined network, data center, load balancing, traffic conflicts, traffic scheduling
3. Fast and scalable routing protocols for data center networks
Authors: Mihailo Vesovic, Aleksandra Smiljanic, Dusan Kostic. 《Digital Communications and Networks》 (SCIE, CSCD), 2023, Issue 6, pp. 1340–1350 (11 pages)
Data center networks may comprise tens or hundreds of thousands of nodes and naturally suffer from frequent software and hardware failures as well as link congestion. Packets are routed along the shortest paths with sufficient resources to facilitate efficient network utilization and minimize delays. In such dynamic networks, links frequently fail or become congested, making the recalculation of shortest paths a computationally intensive problem. Various routing protocols have been proposed to overcome this problem by focusing on network utilization rather than speed. Surprisingly, the design of fast shortest-path algorithms for data centers has been largely neglected, even though such algorithms are universal components of routing protocols. Moreover, parallelization techniques have mostly been deployed for random network topologies rather than the regular topologies often found in data centers. The aim of this paper is to improve scalability and reduce the time required for shortest-path calculation in data center networks through parallelization on general-purpose hardware. We propose a novel algorithm that parallelizes edge relaxations as a faster and more scalable solution for popular data center topologies.
Keywords: routing protocols, data center networks, parallel algorithms, distributed algorithms, algorithm design and analysis, shortest-path problem, scalability
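The paper's core idea, as summarized above, is to parallelize the edge-relaxation step of shortest-path computation on general-purpose hardware. The exact algorithm is not reproduced here; the snippet below is only a minimal illustration of round-based parallel edge relaxation (a Bellman-Ford-style sweep over edge chunks using a thread pool), with the example graph and worker count invented for the demonstration.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def parallel_relax(n, edges, source, workers=4):
    """Round-based shortest paths: relax all edges in chunks handled by a
    thread pool until no distance improves (Bellman-Ford style, illustrative)."""
    dist = [math.inf] * n
    dist[source] = 0.0
    chunk = max(1, len(edges) // workers)
    chunks = [edges[i:i + chunk] for i in range(0, len(edges), chunk)]

    def relax(part):
        updates = []
        for u, v, w in part:              # propose improvements over one chunk
            if dist[u] + w < dist[v]:
                updates.append((v, dist[u] + w))
        return updates

    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(n - 1):            # at most n-1 relaxation rounds
            proposals = [p for part in pool.map(relax, chunks) for p in part]
            changed = False
            for v, d in proposals:        # apply the best proposal per node
                if d < dist[v]:
                    dist[v], changed = d, True
            if not changed:
                break
    return dist

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0), (2, 3, 1.0)]
print(parallel_relax(4, edges, source=0))  # [0.0, 1.0, 3.0, 4.0]
```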
4. Data Center Traffic Scheduling Strategy for Minimization Congestion and Quality of Service Guaranteeing
Authors: Chunzhi Wang, Weidong Cao, Yalin Hu, Jinhang Liu. 《Computers, Materials & Continua》 (SCIE, EI), 2023, Issue 5, pp. 4377–4393 (17 pages)
According to Cisco's Internet Report 2020 white paper, there will be 29.3 billion connected devices worldwide by 2023, up from 18.4 billion in 2018, and 5G connections will generate nearly three times more traffic than 4G connections. While bringing a boom to the network, this also presents unprecedented challenges for flow-forwarding decisions. The path assignment mechanism used in traditional traffic scheduling methods tends to cause local network congestion through the concentration of elephant flows, resulting in unbalanced network load and degraded quality of service. Using the centralized control of software-defined networks, this study proposes a data center traffic scheduling strategy for minimizing congestion and guaranteeing quality of service (MCQG). The ideal transmission path is selected for data flows while considering the network congestion rate and quality of service, and different scheduling strategies are applied according to the characteristics of the different service types in the data center. Elephant flows, which tend to cause local congestion, are rescheduled through rerouting: a path evaluation function is formed from the maximum link utilization on the path, the number of elephant flows, and the delay, and the fast search capability of the sparrow search algorithm is used to find the path with the lowest actual link overhead as the rerouting path, reducing the likelihood of local network congestion. Mouse flows, which have shorter durations, are scheduled with the equal-cost multi-path (ECMP) protocol, whose faster response time guarantees their quality of service, so that the different types of data streams are transmitted in isolation. The experimental results show that the proposed strategy achieves higher throughput, better network load balancing, and better robustness than ECMP under different traffic models. In addition, because it can fully utilize the resources in the network, MCQG also outperforms another traffic scheduling strategy that reroutes elephant flows (Hedera). Compared with ECMP and Hedera, MCQG improves average throughput by 11.73% and 4.29%, and normalized total throughput by 6.74% and 2.64%, respectively; it improves link utilization by 23.25% and 15.07%; in addition, its average round-trip delay and packet loss rate fluctuate significantly less than those of the two compared strategies.
Keywords: software-defined network, data center network, OpenFlow, network congestion, quality of service
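The abstract sketches a path evaluation function built from the maximum link utilization on a path, the number of elephant flows it carries, and its delay, with the lowest-cost path chosen as the rerouting path for an elephant flow. The weights and scoring form below are hypothetical, and the sparrow search heuristic from the paper is replaced by a plain minimum for brevity; the sketch only shows how such a composite cost could be compared across candidate paths.

```python
from dataclasses import dataclass

@dataclass
class PathStats:
    name: str
    max_link_util: float   # 0..1, highest utilization among links on the path
    elephant_flows: int    # elephant flows already routed on the path
    delay_ms: float        # end-to-end delay

def path_cost(p: PathStats, w_util=0.5, w_eleph=0.3, w_delay=0.2) -> float:
    """Composite link-overhead estimate; weights and scaling are illustrative."""
    return (w_util * p.max_link_util
            + w_eleph * p.elephant_flows / 10.0
            + w_delay * p.delay_ms / 100.0)

candidates = [
    PathStats("core-1", max_link_util=0.82, elephant_flows=3, delay_ms=40),
    PathStats("core-2", max_link_util=0.35, elephant_flows=1, delay_ms=55),
    PathStats("core-3", max_link_util=0.50, elephant_flows=4, delay_ms=30),
]

best = min(candidates, key=path_cost)
print(best.name)  # "core-2" under these example weights
```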
5. Energy Cost Minimization Using String Matching Algorithm in Geo-Distributed Data Centers
Authors: Muhammad Imran Khan Khalil, Syed Adeel Ali Shah, Izaz Ahmad Khan, Mohammad Hijji, Muhammad Shiraz, Qaisar Shaheen. 《Computers, Materials & Continua》 (SCIE, EI), 2023, Issue 6, pp. 6305–6322 (18 pages)
Data centers are being distributed worldwide by cloud service providers (CSPs) to save energy costs through efficient workload allocation strategies. Many CSPs are challenged by the significant rise in user demands because of their extensive energy consumption during workload processing. Numerous research studies have examined distinct operating cost mitigation techniques for geo-distributed data centers (DCs). However, operating cost savings during workload processing that also consider string-matching techniques in geo-distributed DCs remain unexplored. In this research, we propose a novel string matching-based geographical load balancing (SMGLB) technique to mitigate the operating cost of geo-distributed DCs. The primary goal of this study is to use a string-matching algorithm (Boyer-Moore) to compare the contents of incoming workloads with those of documents that have already been processed in a data center. A successful match prevents the global load balancer from sending the user's request to a data center for processing and instead returns the results of the previously processed workload to the user, saving energy. On the contrary, if no match can be discovered, the global load balancer allocates the incoming workload to a specific DC for processing, considering variable energy prices, the number of active servers, on-site green energy, and traces of the incoming workload. The results of numerical evaluations show that SMGLB can minimize the operating expenses of geo-distributed data centers more than the existing workload distribution techniques.
Keywords: string matching, optimization, geo-distributed data centers, geographical load balancing, green energy
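SMGLB, as described above, uses Boyer-Moore string matching to check whether an incoming workload's content already appears in previously processed documents, and only dispatches the request to a data center when no match is found. Python's standard library does not expose Boyer-Moore by name, so the sketch below implements a simplified variant using only the bad-character rule; the workload strings and the dispatch messages are invented for illustration.

```python
def bm_search(text: str, pattern: str) -> int:
    """Boyer-Moore search using only the bad-character rule.
    Returns the first match index, or -1 if the pattern is absent."""
    if not pattern:
        return 0
    last = {c: i for i, c in enumerate(pattern)}   # rightmost position of each char
    m, n = len(pattern), len(text)
    i = m - 1
    while i < n:
        j, k = m - 1, i
        while j >= 0 and text[k] == pattern[j]:
            j -= 1
            k -= 1
        if j < 0:
            return k + 1                           # full match found
        i += max(1, j - last.get(text[k], -1))     # bad-character shift
    return -1

processed_documents = "weather-report-2023 sales-summary-q4 model-training-log"
incoming_workload = "sales-summary-q4"

if bm_search(processed_documents, incoming_workload) >= 0:
    print("cache hit: return the stored result, skip data center dispatch")
else:
    print("no match: forward the workload to the cheapest geo-distributed DC")
```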
6. Replication Strategy with Comprehensive Data Center Selection Method in Cloud Environments
Authors: M. A. Fazlina, Rohaya Latip, Hamidah Ibrahim, Azizol Abdullah. 《Computers, Materials & Continua》 (SCIE, EI), 2023, Issue 1, pp. 415–433 (19 pages)
As the amount of data continues to grow rapidly, the variety of data produced by applications is becoming more abundant than ever. Cloud computing is the best technology evolving today to provide multi-services for the mass and variety of data, since its features are capable of processing, managing, and storing all sorts of data. Although data is stored in many high-end nodes, either within the same data center or across many data centers in the cloud, performance issues are still inevitable. The cloud replication strategy is one of the best solutions to address the risk of performance degradation in the cloud environment. The real challenge is developing the right data replication strategy with minimal data movement that guarantees efficient network usage, low fault tolerance, and minimal replication frequency. The key problem discussed in this research is inefficient network usage when selecting a suitable data center to store replica copies, induced by inadequate data center selection criteria. To mitigate the issue, we propose a Replication Strategy with a comprehensive Data Center Selection Method (RS-DCSM), which determines the appropriate data center to place replicas by considering three key factors: popularity, space availability, and centrality. The proposed RS-DCSM was simulated using CloudSim, and the results prove that data movement between data centers is significantly reduced, with a 14% reduction in overall replication frequency and a 20% decrease in network usage, outperforming the current replication strategy known as the Dynamic Popularity aware Replication Strategy (DPRS) algorithm.
Keywords: cloud computing, data replication, replica placement, data center merits, replication algorithm
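RS-DCSM selects the data center for a replica by weighing the three factors named in the abstract: popularity, space availability, and centrality. The actual scoring formula is not quoted here, so the snippet below only shows one plausible way to combine the three criteria into a ranking; every weight and data-center record is a made-up example.

```python
def select_data_center(candidates, weights=(0.4, 0.3, 0.3)):
    """Rank candidate data centers by a weighted sum of popularity,
    free-space ratio, and centrality (all assumed normalized to 0..1)."""
    w_pop, w_space, w_cent = weights

    def score(dc):
        return (w_pop * dc["popularity"]
                + w_space * dc["free_space"]
                + w_cent * dc["centrality"])

    return max(candidates, key=score)

candidates = [
    {"name": "dc-eu-1", "popularity": 0.9, "free_space": 0.2, "centrality": 0.6},
    {"name": "dc-us-2", "popularity": 0.5, "free_space": 0.8, "centrality": 0.7},
    {"name": "dc-ap-3", "popularity": 0.4, "free_space": 0.9, "centrality": 0.3},
]
print(select_data_center(candidates)["name"])  # "dc-us-2" with these example weights
```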
7. Exploring High-Performance Architecture for Data Center Networks
Authors: Deshun Li, Shaorong Sun, Qisen Wu, Shuhua Weng, Yuyin Tan, Jiangyuan Yao, Xiangdang Huang, Xingcan Cao. 《Computer Systems Science & Engineering》 (SCIE, EI), 2023, Issue 7, pp. 433–443 (11 pages)
As a critical infrastructure of cloud computing, data center networks (DCNs) directly determine the service performance of data centers, which provide computing services for applications such as big data processing and artificial intelligence. However, current data center network architectures suffer from long routing paths and low fault tolerance between source and destination servers, which makes it hard to satisfy the requirements of high-performance data center networks. Based on dual-port servers and the Clos network structure, this paper proposes a novel architecture, RClos, for constructing high-performance data center networks. Logically, the proposed architecture is built by inserting a dual-port server into each pair of adjacent switches in the switch fabric, where the switches are connected in the form of a ring Clos structure. We describe the structural properties of RClos in terms of network scale, bisection bandwidth, and network diameter. RClos inherits the characteristics of its embedded Clos network and can accommodate a large number of servers with a small average path length. The proposed architecture also offers high fault tolerance, which suits the construction of various data center networks. For example, the average path length between servers is 3.44, and the standardized bisection bandwidth is 0.8 in RClos(32,5). The results of numerical experiments show that RClos enjoys a small average path length and high network fault tolerance, which is essential for constructing high-performance data center networks.
Keywords: data center networks, dual-port server, Clos structure, high performance
8. A Brief Introduction to Infrastructure Planning for Next-Generation Smart Computing Data Centers
Authors: Yun Zhou. 《Journal of World Architecture》, 2023, Issue 6, pp. 12–18 (7 pages)
Globally, digital technology and the digital economy have propelled technological revolution and industrial change, and they have become one of the main arenas of international industrial competition. It was estimated that the scale of China's digital economy would reach 50 trillion yuan in 2022, accounting for more than 40% of GDP and presenting great market potential and room for growth. With the rapid development of the digital economy, the state attaches great importance to the construction of digital infrastructure and has introduced a series of policies to promote its systematic development and large-scale deployment. In 2022 the Chinese government planned to build 8 national computing power hubs and 10 national data center clusters. To proactively address the future demand for AI across various scenarios, a well-structured computing power infrastructure is needed. The data center, serving as the pivotal hub of computing power, has evolved from the conventional cloud center to a more intelligent computing center, allowing for a diversified convergence of the computing power supply. In addition, data centers accommodate a diverse array of computing business forms from customers, reflecting a multi-industry development trend. The computing service platform is consistently broadening its scope, with ongoing optimization and innovation in machine-room process design. The widespread application of submerged phase-change liquid cooling and cold-plate cooling technologies introduces a series of new challenges to the construction of digital infrastructure. This paper delves into the design objectives, industry considerations, layout, and other dimensions of a smart computing center and proposes a new-generation data center solution that is "flexible, resilient, green, and low-carbon."
Keywords: smart computing, data centers, AI, dual carbon goals
9. Cloud Data Center Selection Using a Modified Differential Evolution (Cited: 1)
Authors: Yousef Sanjalawe, Mohammed Anbar, Salam Al-E’mari, Rosni Abdullah, Iznan Hasbullah, Mohammed Aladaileh. 《Computers, Materials & Continua》 (SCIE, EI), 2021, Issue 12, pp. 3179–3204 (26 pages)
The interest in selecting an appropriate cloud data center is increasing exponentially due to the popularity and continuous growth of the cloud computing sector. Cloud data center selection challenges are compounded by ever-increasing user requests and the number of data centers required to execute these requests. The cloud service broker policy defines cloud data center selection, which is an NP-hard problem that needs a precise method to obtain an efficient, superior solution. The differential evolution algorithm is a metaheuristic characterized by its speed and robustness, and it is well suited to selecting an appropriate cloud data center. This paper presents a modified differential evolution-based cloud service broker policy for selecting the most appropriate data center in the cloud computing environment. The differential evolution algorithm is modified using a proposed new mutation technique that ensures enhanced performance and an appropriate selection of data centers. The proposed policy's superiority in selecting the most suitable data center is evaluated using the CloudAnalyst simulator, and the results are compared with state-of-the-art cloud service broker policies.
Keywords: cloud computing, data center, data center selection, cloud service broker, differential evolution, user request
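The policy above builds on a differential evolution (DE) algorithm with a modified mutation operator, but the specific modification is not described in the abstract. The sketch below shows only the classic DE/rand/1/bin mutation-and-crossover step, so the place where the paper's new mutation technique would plug in is visible; the population size, bounds, and fitness function are illustrative placeholders.

```python
import random

def de_step(population, fitness, f=0.5, cr=0.9):
    """One generation of classic DE/rand/1/bin. The paper modifies the
    mutation line below; here the standard operator is kept."""
    dim = len(population[0])
    new_pop = []
    for i, target in enumerate(population):
        a, b, c = random.sample([p for j, p in enumerate(population) if j != i], 3)
        mutant = [a[k] + f * (b[k] - c[k]) for k in range(dim)]    # mutation
        j_rand = random.randrange(dim)
        trial = [mutant[k] if (random.random() < cr or k == j_rand) else target[k]
                 for k in range(dim)]                               # crossover
        new_pop.append(trial if fitness(trial) < fitness(target) else target)
    return new_pop

# toy problem: tune two selection weights to minimize a made-up cost surface
def cost(x):
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

pop = [[random.random(), random.random()] for _ in range(10)]
for _ in range(50):
    pop = de_step(pop, cost)
print(min(pop, key=cost))  # converges near [0.3, 0.7]
```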
10. Energy consumption and emission mitigation prediction based on data center traffic and PUE for global data centers (Cited: 9)
Authors: Yanan Liu, Xiaoxia Wei, Jinyu Xiao, Zhijie Liu, Yang Xu, Yun Tian. 《Global Energy Interconnection》, 2020, Issue 3, pp. 272–282 (11 pages)
With the rapid development of technologies such as big data and cloud computing, the exponential growth of data communication and data computing has led to large energy consumption in data centers. Globally, data centers will become among the world's largest users of energy, with their share rising from 3% in 2017 to 4.5% in 2025. Due to its unique climate and energy-saving advantages, the high-latitude Pan-Arctic region has gradually become a hotspot for data center site selection in recent years. In order to predict and analyze the future energy consumption and carbon emissions of global data centers, this paper presents a new prediction method based on global data center traffic and power usage effectiveness (PUE). First, global data center traffic growth is predicted based on Cisco's research. Second, the dynamic global average PUE and the high-latitude PUE based on the Romonet simulation model are obtained, and then global data center energy consumption under two scenarios, a decentralized scenario and a centralized scenario, is analyzed quantitatively via polynomial fitting. The simulation results show that, in 2030, global data center energy consumption and carbon emissions are reduced by about 301 billion kWh and 720 million tons of CO2 in the centralized scenario compared with the decentralized scenario, which confirms that establishing data centers in the Pan-Arctic region can effectively relieve future climate and energy problems. This study provides support for global energy consumption prediction, guidance for the layout of future global data centers from the perspective of energy consumption, and support for the feasibility of integrating energy and information networks under the Global Energy Interconnection conception.
Keywords: data center, Pan-Arctic, energy consumption, carbon emission, data traffic, PUE, Global Energy Interconnection
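The prediction method above combines forecast data center traffic with dynamic PUE values and extrapolates energy consumption via polynomial fitting. The sketch below reproduces only that last step with numpy's polyfit on invented traffic and PUE figures; the real study derives its inputs from Cisco traffic forecasts and Romonet PUE simulations.

```python
import numpy as np

# hypothetical historical figures: year, traffic-driven IT energy, average PUE
years = np.array([2016, 2017, 2018, 2019, 2020])
it_energy_twh = np.array([120.0, 150.0, 190.0, 240.0, 300.0])  # IT load only
avg_pue = np.array([1.70, 1.65, 1.60, 1.58, 1.55])

# total facility energy = IT energy * PUE
total_energy = it_energy_twh * avg_pue

# fit a 2nd-order polynomial on centered years and extrapolate to 2030
x = years - 2016
coeffs = np.polyfit(x, total_energy, deg=2)
forecast_2030 = np.polyval(coeffs, 2030 - 2016)
print(f"projected total energy in 2030: {forecast_2030:.0f} TWh (toy numbers)")
```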
11. Low-power task scheduling algorithm for large-scale cloud data centers (Cited: 2)
Authors: Xiaolong Xu, Jiaxing Wu, Geng Yang, Ruchuan Wang. 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 2013, Issue 5, pp. 870–878 (9 pages)
How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. A winner tree is introduced with the data nodes as its leaf nodes, and the final winner is selected with the goal of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
Keywords: cloud computing, data center, task scheduling, energy consumption
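L3SA builds a winner tree whose leaves are the candidate data nodes and repeatedly selects the "winner" that minimizes energy cost for the next task. The abstract does not give the node model or the comparison rule, so the following sketch uses Python's heapq as a stand-in for the winner tree and an invented per-node power estimate; it only illustrates picking the lowest-energy node per task.

```python
import heapq

def schedule(tasks, nodes):
    """Assign each task to the node with the lowest projected energy cost.
    A min-heap plays the role of the winner tree (illustration only)."""
    # heap entries: (projected_energy, node_name, current_load)
    heap = [(n["idle_power"], n["name"], 0.0) for n in nodes]
    heapq.heapify(heap)
    plan = []
    for task in tasks:
        energy, name, load = heapq.heappop(heap)      # current "winner"
        new_load = load + task["cpu"]
        new_energy = energy + 0.5 * task["cpu"]       # toy load-proportional model
        plan.append((task["id"], name))
        heapq.heappush(heap, (new_energy, name, new_load))
    return plan

nodes = [{"name": "n1", "idle_power": 80.0}, {"name": "n2", "idle_power": 95.0}]
tasks = [{"id": "t1", "cpu": 10}, {"id": "t2", "cpu": 40}, {"id": "t3", "cpu": 5}]
print(schedule(tasks, nodes))  # e.g. [('t1', 'n1'), ('t2', 'n1'), ('t3', 'n2')]
```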
12. The 8×10 GHz Receiver Optical Subassembly Based on Silica Hybrid Integration Technology for Data Center Interconnection (Cited: 3)
Authors: 李超懿, 安俊明, 王九琦, 王亮亮, 张家顺, 李建光, 吴远大, 王玥, 尹小杰, 李勇, 钟飞. 《Chinese Physics Letters》 (SCIE, CAS, CSCD), 2017, Issue 10, pp. 39–43 (5 pages)
An 8×10 GHz receiver optical sub-assembly (ROSA) consisting of an 8-channel arrayed waveguide grating (AWG) and an 8-channel PIN photodetector (PD) array is designed and fabricated based on silica hybrid integration technology. Multimode output waveguides in the silica AWG with 2% refractive index difference are used to obtain flat-top spectra. The output waveguide facet is polished to a 45° bevel to redirect the light into the mesa-type PIN PD, which simplifies the packaging process. The experimental results show that the single-channel 1 dB bandwidth of the AWG ranges from 2.12 nm to 3.06 nm, the ROSA responsivity ranges from 0.097 A/W to 0.158 A/W, and the 3 dB bandwidth is up to 11 GHz. The device is promising for application in eight-lane WDM transmission systems for data center interconnection.
Keywords: AWG, PIN PD, receiver optical subassembly, silica hybrid integration technology, data center interconnection
13. An Efficient Priority-Driven Congestion Control Algorithm for Data Center Networks (Cited: 2)
Authors: Jiahua Zhu, Xianliang Jiang, Yan Yu, Guang Jin, Haiming Chen, Xiaohui Li, Long Qu. 《China Communications》 (SCIE, CSCD), 2020, Issue 6, pp. 37–50 (14 pages)
With the emergence of diverse applications in data centers, the demands on quality of service in data centers have also become diverse, such as the high throughput of elephant flows and the low latency of deadline-sensitive flows. However, traditional TCPs are ill-suited to such situations and always result in inefficient data transfers (e.g., missed flow deadlines and throughput collapse), which further degrades the user-perceived quality of service (QoS) in data centers. To reduce the flow completion time of mice and deadline-sensitive flows while promoting the throughput of elephant flows, this paper proposes an efficient and deadline-aware priority-driven congestion control (PCC) protocol that grants mice and deadline-sensitive flows the highest priority. Specifically, PCC computes the priority of different flows according to the size of the transmitted data, the remaining data volume, and the flow's deadline, and then adjusts the congestion window according to the flow priority and the degree of network congestion. Furthermore, switches in the data center control the input/output of packets based on the flow priority and the queue length. Different from existing TCPs, to speed up the data transfers of mice and deadline-sensitive flows, PCC provides an effective method to compute and encode the flow priority explicitly. According to the flow priority, switches can manage packets efficiently and ensure the data transfers of high-priority flows through weighted priority scheduling with only minor modification. The experimental results prove that PCC can improve the data transfer performance of mice and deadline-sensitive flows while guaranteeing the throughput of elephant flows.
Keywords: data center network, low latency, priority, switch scheduling, transmission control protocol
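PCC, as summarized above, explicitly computes a flow priority from the amount of data already sent, the remaining data volume, and the deadline, then lets senders and switches act on that priority. The concrete formula and encoding are not given in the abstract, so the function below is a hypothetical stand-in that merely captures the stated inputs: flows with little data remaining and tight deadlines score higher.

```python
def flow_priority(sent_bytes, remaining_bytes, deadline_ms=None):
    """Higher value = higher priority (hypothetical scoring, not the paper's).
    Mice flows (little total data) and urgent deadline flows rank first."""
    size_term = 1.0 / (1.0 + (sent_bytes + remaining_bytes) / 1e6)   # small flows win
    progress_term = 1.0 / (1.0 + remaining_bytes / 1e5)              # nearly-done flows win
    urgency_term = 0.0
    if deadline_ms is not None:
        urgency_term = 1.0 / (1.0 + deadline_ms / 10.0)              # tight deadlines win
    return size_term + progress_term + 2.0 * urgency_term

flows = {
    "mouse query":        flow_priority(2_000, 8_000),
    "deadline-sensitive": flow_priority(50_000, 200_000, deadline_ms=15),
    "elephant backup":    flow_priority(5_000_000, 80_000_000),
}
for name, p in sorted(flows.items(), key=lambda kv: -kv[1]):
    print(f"{name:>18}: {p:.3f}")
```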
14. The Use of Single-Phase Immersion Cooling by Using Two Types of Dielectric Fluid for Data Center Energy Savings (Cited: 2)
Authors: Nugroho Agung Pambudi, Awibi Muhamad Yusuf, Alfan Sarifudin. 《Energy Engineering》 (EI), 2022, Issue 1, pp. 275–286 (12 pages)
Data centers are recognized as one of the most important aspects of the fourth industrial revolution, since conventional data centers are inefficient and depend on high energy consumption, of which cooling is responsible for 40%. Therefore, this research proposes an immersion cooling method to address the high energy consumption of data centers by cooling their components with two types of dielectric fluid. Four experimental stages are used: fluid type, cooling effectiveness, optimization, and durability. Benchmark software is used to drive the CPU to maximum load while temperature data is recorded for 24 h. The results show that immersion cooling achieves a temperature 13℃ lower than the conventional cooling method, which translates into energy savings in the data center. The optimal settings for decreasing the temperature are a flow rate of 1.5 lpm and a fan speed of 800 rpm. Furthermore, the cooling performance of the dielectric fluids shows that mineral oil (MO) performs better than virgin coconut oil (VCO). In the durability experiment, no component damage was observed after five months of immersion in the fluid.
Keywords: single-phase immersion cooling, data center, dielectric fluid, mineral oil, virgin coconut oil
15. Data Center Network Architecture (Cited: 2)
Authors: Yantao Sun, Jing Cheng, Konggui Shi, Qiang Liu. 《ZTE Communications》, 2013, Issue 1, pp. 54–61 (8 pages)
1 Introduction: The history of data centers can be traced back to the 1960s. Early data centers were deployed on mainframes that were time-shared by users via remote terminals. The boom in data centers came during the internet era, when many companies started building large internet-connected facilities …
Keywords: data center network, network architecture, network topology, virtual machine migration
16. FP-STE: A Novel Node Failure Prediction Method Based on Spatio-Temporal Feature Extraction in Data Centers (Cited: 1)
Authors: Yang Yang, Jing Dong, Chao Fang, Ping Xie, Na An. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2020, Issue 6, pp. 1015–1031 (17 pages)
The development of cloud computing and virtualization technology has brought great challenges to the reliability of data center services. Data centers typically contain a large number of compute and storage nodes which may fail and affect the quality of service. Failure prediction is an important means of ensuring service availability. Predicting node failure in cloud-based data centers is challenging because the observed failure symptoms have complex characteristics, and the distribution imbalance between failure samples and normal samples is widespread, resulting in inaccurate failure prediction. Targeting these challenges, this paper proposes a novel failure prediction method, FP-STE (Failure Prediction based on Spatio-temporal Feature Extraction). First, an improved recurrent neural network, HW-GRU (improved GRU based on HighWay network), and a convolutional neural network (CNN) are used to extract the temporal and spatial features of multivariate data, respectively, increasing the discrimination of different types of failure symptoms and improving prediction accuracy. Then the intermediate results of the two models are added as features to SCS-XGBoost to predict the possibility and the precise type of node failure in the future. SCS-XGBoost is an ensemble learning model improved by an integrated strategy of oversampling and cost-sensitive learning. Experimental results based on real data sets confirm the effectiveness and superiority of FP-STE.
Keywords: failure prediction, data center, feature extraction, XGBoost, service availability
17. Silicon photonic transceivers for application in data centers (Cited: 1)
Authors: Haomiao Wang, Hongyu Chai, Zunren Lv, Zhongkai Zhang, Lei Meng, Xiaoguang Yang, Tao Yang. 《Journal of Semiconductors》 (EI, CAS, CSCD), 2020, Issue 10, pp. 1–16 (16 pages)
Global data traffic is growing rapidly, and the demand for optoelectronic transceivers applied in data centers (DCs) is increasing correspondingly. In this review, we first briefly introduce the development of optoelectronic transceivers in DCs, as well as the advantages of silicon photonic chips fabricated by the complementary metal oxide semiconductor process. We also summarize the research on the main components of silicon photonic transceivers. In particular, quantum dot lasers have shown great potential as light sources for silicon photonic integration, whether through bonding or monolithic integration, thanks to their unique advantages over conventional quantum-well counterparts. Some of the solutions for high-speed optical interconnection in DCs are then discussed. Among them, wavelength division multiplexing and four-level pulse-amplitude modulation have been widely studied and applied. At present, the application of coherent optical communication technology has moved from the backbone network to the metro network, and then to DCs.
Keywords: data center, silicon-based optoelectronic transceiver, high-speed optical interconnection, quantum dot lasers
18. Workload-aware request routing in cloud data center using software-defined networking
Authors: Haitao Yuan, Jing Bi, Bohu Li. 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 2015, Issue 1, pp. 151–160 (10 pages)
Large application latency brings revenue loss to cloud infrastructure providers in the cloud data center. Existing controllers in the software-defined networking architecture can fetch and process traffic information in the network; therefore, they can only optimize the network latency of applications. However, the serving latency of applications is also an important factor in the user experience delivered for arriving requests. Unintelligent request routing causes large serving latency if arriving requests are allocated to overloaded virtual machines. To deal with the request routing problem, this paper proposes a workload-aware software-defined networking controller architecture. Request routing algorithms are then proposed to minimize the total round-trip time for every type of request by considering the congestion in the network and the workload in virtual machines (VMs). The paper finally evaluates the proposed algorithms in a simulated prototype. The simulation results show that the proposed methodology is efficient compared with the existing approaches.
Keywords: cloud data center (CDC), software-defined networking, request routing, resource allocation, network latency optimization
19. Research on the Trusted Energy-Saving Transmission of Data Center Network
Authors: Yubo Wang, Bei Gong, Mowei Gong. 《China Communications》 (SCIE, CSCD), 2016, Issue 12, pp. 139–149 (11 pages)
Motivated by the high operating costs and the large amount of energy wasted in current data center network architectures, we propose a trusted flow preemption scheduling mechanism combined with an energy-saving routing mechanism, based on a typical data center network architecture. The mechanism enables a network flow to be transmitted over its own dedicated link bandwidth and transmission path, which improves link utilization and the energy efficiency of the network. Meanwhile, we apply trusted computing to guarantee highly secure, high-performance, and highly fault-tolerant routing and forwarding services, which helps improve the average completion time of network flows.
Keywords: data center network architecture, energy-saving routing mechanism, trusted computing, network energy consumption, flow average completion time
20. Modeling TCP Incast Issue in Data Center Networks and an Adaptive Application-Layer Solution
Authors: Jin-Tang Luo, Jie Xu, Jian Sun. 《Journal of Electronic Science and Technology》 (CAS, CSCD), 2018, Issue 1, pp. 84–91 (8 pages)
In data centers, transmission control protocol (TCP) incast causes catastrophic goodput degradation for applications with a many-to-one traffic pattern. In this paper, we intend to tame incast at the receiver-side application. Towards this goal, we first develop an analytical model that formulates the incast probability as a function of connection variables and network environment settings. We combine the model with optimization theory and derive some insights into minimizing the incast probability by tuning connection variables related to applications. Then, enlightened by the analytical results, we propose an adaptive application-layer solution to TCP incast. The solution equally allocates advertised windows to concurrent connections and dynamically adapts the number of concurrent connections to varying conditions. Simulation results show that our solution consistently eludes incast and achieves high goodput in various scenarios, including those with multiple bottleneck links and background TCP traffic.
Keywords: application-layer solution, data center networks, modeling, transmission control protocol (TCP), incast
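The adaptive application-layer solution described above divides the receiver's advertised window equally among concurrent connections and adjusts the number of simultaneously served senders as conditions change. The receiver-side logic below is a simplified, hypothetical sketch of that idea (the buffer size, MSS, and concurrency cap are chosen for the example), not the paper's implementation.

```python
def plan_incast_round(total_senders, rcv_buffer_bytes, mss=1460, max_concurrent=8):
    """Split senders into serial rounds and give each connection in a round
    an equal share of the advertised window (illustrative application logic)."""
    # keep concurrency low enough that every connection gets at least one MSS
    concurrency = min(total_senders, max_concurrent, rcv_buffer_bytes // mss)
    concurrency = max(1, concurrency)
    per_conn_window = (rcv_buffer_bytes // concurrency // mss) * mss
    rounds = -(-total_senders // concurrency)   # ceiling division
    return concurrency, per_conn_window, rounds

# example: 48 storage servers answer one aggregation query; 64 KB receive buffer
conc, win, rounds = plan_incast_round(48, rcv_buffer_bytes=64 * 1024)
print(f"serve {conc} senders at a time, {win} B advertised window each, {rounds} rounds")
```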