Journal Articles
6,822 articles found
1. Large-scale spatial data visualization method based on augmented reality
Authors: Xiaoning QIAO, Wenming XIE, Xiaodong PENG, Guangyun LI, Dalin LI, Yingyi GUO, Jingyi REN. 《虚拟现实与智能硬件(中英文)》, EI, 2024, No. 2, pp. 132-147 (16 pages)
Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization were achieved for detection data of complex types, long duration spans, and uneven spatial distributions. Real-time visualization of large-scale spatial structures on augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes of the spatial environment in augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules. (An illustrative sketch of the tone-mapping step follows this entry.)
Keywords: large-scale spatial data analysis; visual analysis technology; augmented reality; 3D reconstruction; space environment
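As a rough illustration of the histogram-equalization tone mapping described in the abstract above, the following Python sketch maps a skewed scalar attribute field onto a near-uniform colour scale. The array name, the 256-level scale, and the synthetic data are assumptions for illustration, not the paper's data or implementation.

```python
# Hedged sketch: histogram-equalization tone mapping for a scalar attribute field.
import numpy as np

def equalize_tone(attribute: np.ndarray, levels: int = 256) -> np.ndarray:
    """Map raw attribute values to [0, levels-1] so the output histogram is near-uniform."""
    flat = attribute.ravel()
    hist, bin_edges = np.histogram(flat, bins=levels)
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                                # normalised cumulative distribution
    # Each value is replaced by its CDF rank, stretching crowded value ranges apart.
    bins = np.digitize(flat, bin_edges[1:-1])
    return (cdf[bins] * (levels - 1)).reshape(attribute.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    field = rng.exponential(scale=2.0, size=(64, 64))   # skewed synthetic attribute field
    mapped = equalize_tone(field)
    print(mapped.min(), mapped.max())
```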
2. Low-power task scheduling algorithm for large-scale cloud data centers (Citations: 3)
Authors: Xiaolong Xu, Jiaxing Wu, Geng Yang, Ruchuan Wang. 《Journal of Systems Engineering and Electronics》, SCIE EI CSCD, 2013, No. 5, pp. 870-878 (9 pages)
How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. A winner tree is introduced with the data nodes as its leaf nodes, and the final winner is selected with the aim of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center. (A hedged winner-tree selection sketch follows this entry.)
Keywords: cloud computing; data center; task scheduling; energy consumption
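The abstract above describes selecting a target data node through a winner tree whose leaves are the data nodes. The sketch below is a minimal, hedged rendering of that tournament-style selection; the energy scoring function and node attributes are assumptions, not the L3SA definition.

```python
# Hedged sketch: tournament (winner-tree) selection over data nodes with an
# energy-oriented score. Node fields and the scoring rule are illustrative only.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    power_draw: float      # watts added if the task lands here (assumed metric)
    utilization: float     # current CPU utilization in [0, 1] (assumed metric)

def score(n: Node) -> float:
    # Lower score wins: prefer nodes that add little power and are already well utilised.
    return n.power_draw * (1.0 - n.utilization)

def winner_tree_select(leaves: list[Node]) -> Node:
    """Run pairwise 'matches' level by level until a single winner remains."""
    level = leaves[:]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            a, b = level[i], level[i + 1]
            nxt.append(a if score(a) <= score(b) else b)
        if len(level) % 2:            # an odd node gets a bye to the next round
            nxt.append(level[-1])
        level = nxt
    return level[0]

if __name__ == "__main__":
    nodes = [Node("n1", 120, 0.2), Node("n2", 90, 0.7), Node("n3", 150, 0.9), Node("n4", 80, 0.1)]
    print(winner_tree_select(nodes).name)
```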
3. Dynamic Routing of Multiple QoS-Required Flows in Cloud-Edge Autonomous Multi-Domain Data Center Networks
Authors: Shiyan Zhang, Ruohan Xu, Zhangbo Xu, Cenhua Yu, Yuyang Jiang, Yuting Zhao. 《Computers, Materials & Continua》, SCIE EI, 2024, No. 2, pp. 2287-2308 (22 pages)
The 6th-generation mobile network (6G) is a multi-network-interconnection, multi-scenario-coexistence network in which multiple network domains break their original fixed boundaries to form connections and convergence. With the optimization objective of maximizing network utility while ensuring performance-centric weighted fairness among flows, this paper designs a reinforcement-learning-based cloud-edge autonomous multi-domain data center network architecture that achieves single-domain autonomy and multi-domain collaboration. Because the utilities of different flows conflict, the bandwidth fairness allocation problem for various types of flows is formulated by considering differently defined reward functions. Regarding the tradeoff between fairness and utility, corresponding reward functions are designed for the cases where flows change abruptly and where they change smoothly. In addition, to accommodate the Quality of Service (QoS) requirements of multiple types of flows, this paper proposes a multi-domain autonomous routing algorithm called LSTM+MADDPG. By introducing a Long Short-Term Memory (LSTM) layer into the actor and critic networks, more information about temporal continuity is captured, further enhancing adaptability to changes in the dynamic network environment. LSTM+MADDPG is compared with recent reinforcement learning algorithms in experiments on real network topologies and traffic traces; the results show that it improves the delay convergence speed by 14.6% and delays the onset of packet loss by 18.2% compared with the other algorithms. (A hedged sketch of an LSTM-fronted actor network follows this entry.)
Keywords: multi-domain; data center networks; autonomous routing
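To illustrate the "LSTM layer in the actor and critic networks" idea, here is a minimal PyTorch actor whose observation window passes through an LSTM before the policy head. Layer sizes, the softmax action head, and the framing of outputs as per-path weights are assumptions, not the paper's LSTM+MADDPG implementation.

```python
# Hedged sketch: an actor network with an LSTM in front of the policy head.
import torch
import torch.nn as nn

class LSTMActor(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=obs_dim, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, act_dim))

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim); the LSTM summarises temporal continuity,
        # and only the last hidden state feeds the action head (e.g. per-path weights).
        out, _ = self.lstm(obs_seq)
        return torch.softmax(self.head(out[:, -1, :]), dim=-1)

if __name__ == "__main__":
    actor = LSTMActor(obs_dim=10, act_dim=4)
    sample = torch.randn(2, 8, 10)        # 2 agents' observation windows of length 8
    print(actor(sample).shape)            # torch.Size([2, 4])
```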
4. An Adaptive Congestion Control Optimization Strategy in SDN-Based Data Centers
Authors: Jinlin Xu, Wansu Pan, Haibo Tan, Longle Cheng, Xiaofeng Li. 《Computers, Materials & Continua》, SCIE EI, 2024, No. 11, pp. 2709-2726 (18 pages)
The traffic within data centers exhibits bursty and unpredictable patterns. This rapid growth in network traffic has two consequences: it surpasses the inherent capacity of the network's link bandwidth and creates an imbalanced network load. Consequently, persistent overload situations eventually result in network congestion. Software Defined Network (SDN) technology is employed in data centers as a network architecture to enhance performance. This paper introduces an adaptive congestion control strategy for SDN-based data centers, named DA-DCTCP. It incorporates Explicit Congestion Notification (ECN) and Round-Trip Time (RTT) to establish congestion awareness and an ECN marking model. To mitigate spurious congestion signals caused by abrupt flows, an appropriate ECN marking is selected based on the queue length and its growth slope, and the congestion window (CWND) is adjusted by calculating RTT. Simultaneously, the marking threshold for queue length is continuously adapted using the current queue length of the switch as a parameter, to accommodate changes in data centers. Evaluation through Mininet simulations demonstrates that DA-DCTCP yields advantages in throughput, flow completion time (FCT), latency, and resistance to packet loss. These benefits contribute to reducing data center congestion, enhancing the stability of data transmission, and improving throughput. (A hedged DCTCP-style marking and window-update sketch follows this entry.)
Keywords: data centers; SDN; TCP; congestion control; RTT; ECN
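The sketch below illustrates, under stated assumptions, a DCTCP-style window update combined with a marking test that considers both queue length and its growth slope, as the abstract outlines. Thresholds, the smoothing gain g, and function names are illustrative; they are not the DA-DCTCP specification.

```python
# Hedged sketch: slope-aware ECN marking plus a DCTCP-like congestion-window update.

def should_mark(queue_len: float, prev_queue_len: float, k_threshold: float, slope_gate: float = 0.0) -> bool:
    """Mark ECN when the queue passes the threshold, or when it is still below it but growing fast."""
    slope = queue_len - prev_queue_len
    return queue_len > k_threshold or (slope > slope_gate and queue_len > 0.5 * k_threshold)

def dctcp_cwnd_update(cwnd: float, alpha: float, marked: int, acked: int, g: float = 1.0 / 16) -> tuple[float, float]:
    """One RTT of DCTCP-like control: update the marked-fraction estimate, then adjust cwnd."""
    frac = marked / max(acked, 1)
    alpha = (1 - g) * alpha + g * frac          # EWMA of the fraction of marked packets
    if marked:
        cwnd = max(cwnd * (1 - alpha / 2), 1.0) # gentle, proportional back-off
    else:
        cwnd += 1.0                             # additive increase when uncongested
    return cwnd, alpha

if __name__ == "__main__":
    cwnd, alpha = 10.0, 0.0
    for rtt in range(5):
        marked = 3 if rtt >= 2 else 0
        cwnd, alpha = dctcp_cwnd_update(cwnd, alpha, marked, acked=10)
        print(f"RTT {rtt}: cwnd={cwnd:.2f}, alpha={alpha:.3f}")
```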
5. AMAD: Adaptive Mapping Approach for Datacenter Networks, an Energy-Friend Resource Allocation Framework via Repeated Leader Follower Game
Authors: Ahmad Nahar Quttoum, Muteb Alshammari. 《Computers, Materials & Continua》, SCIE EI, 2024, No. 9, pp. 4577-4601 (25 pages)
Cloud Datacenter Network (CDN) providers usually have the option to scale their network structures to allow for far greater resource capacities, though such scaling may come with exponential costs that contradict their utility objectives. Besides the cost of physical assets and network resources, scaling also imposes more load on the electricity grid to feed the added nodes with the energy required to run and cool them, which brings extra costs too. Thus, CDN providers who utilize their resources better can afford to offer their services at lower price units than those who simply choose to scale. Resource utilization is a challenging process; indeed, CDN clients tend to exaggerate their true resource requirements when leasing resources. Service providers are committed to their clients through Service Level Agreements (SLAs), so any amendment to the resource allocations needs to be approved by the clients first. In this work, we propose deploying a Stackelberg leadership framework to formulate a negotiation game between cloud service providers and their client tenants, through which providers seek to retrieve leased but unused resources from their clients. Cooperation is not expected from the clients, who may ask high price units to return their extra resources to the provider's premises. Hence, to motivate cooperation in such a non-cooperative game, we developed an incentive-compatible pricing model for the returned resources as an extension of Vickrey auctions. Moreover, we also propose building a behavior belief function that shapes the negotiation and compensation for each client. Compared with other benchmark models, the assessment results show that our proposed models provide timely negotiation schemes, better resource utilization rates, higher utilities, and grid-friendly CDNs. (A hedged second-price compensation sketch follows this entry.)
Keywords: data center networks; energy-aware resource management; resource utilization; game-theory mechanisms
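As a loose illustration of the Vickrey-flavoured, incentive-compatible compensation mentioned in the abstract, the sketch below selects the cheapest asks for returned resources and pays a second-price-style clearing price. The tenant asks and the payment rule are assumptions, not the paper's mechanism or its Stackelberg formulation.

```python
# Hedged sketch: second-price-style compensation for returned resource units.

def select_and_price(asks: dict[str, float], units_needed: int) -> list[tuple[str, float]]:
    """Pick the cheapest asks and pay each winner the first losing ask (second-price flavour)."""
    ordered = sorted(asks.items(), key=lambda kv: kv[1])
    winners = ordered[:units_needed]
    # Payment is the lowest excluded ask, so under-asking cannot hurt and over-asking cannot help.
    clearing = ordered[units_needed][1] if len(ordered) > units_needed else winners[-1][1]
    return [(tenant, clearing) for tenant, _ in winners]

if __name__ == "__main__":
    asks = {"tenantA": 2.0, "tenantB": 5.0, "tenantC": 3.5}   # price asked per returned unit
    print(select_and_price(asks, units_needed=2))
```

Paying the lowest excluded ask is the standard device that makes truthful asking a safe strategy in such auctions, which is presumably why the abstract invokes a Vickrey-style extension.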
6. Review of Load Balancing Mechanisms in SDN-Based Data Centers
Authors: Qin Du, Xin Cui, Haoyao Tang, Xiangxiao Chen. 《Journal of Computer and Communications》, 2024, No. 1, pp. 49-66 (18 pages)
With the continuous expansion of data center networks, changing network requirements, and increasing pressure on network bandwidth, traditional network architectures can no longer meet demand. The development of software defined networking has brought new opportunities and challenges to future networks. The separation of data and control in SDN improves the performance of the entire network, and researchers have integrated the SDN architecture into data centers to improve network resource utilization and performance. This paper first introduces the basic concepts of SDN and data center networks, then discusses SDN-based load balancing mechanisms for data centers from different perspectives, and finally summarizes the research on SDN-based load balancing mechanisms and its development trends.
Keywords: software defined network; data center; load balancing; traffic conflicts; traffic scheduling
7. Regularized focusing inversion for large-scale gravity data based on GPU parallel computing
Authors: WANG Haoran, DING Yidan, LI Feida, LI Jing. 《Global Geology》, 2019, No. 3, pp. 179-187 (9 pages)
Processing large-scale 3-D gravity data is an important topic in geophysics. Many existing inversion methods lack the capacity to process massive data and to be applied in practice. This study applies GPU parallel processing technology to the focusing inversion method, aiming to improve inversion accuracy while speeding up computation and reducing memory consumption, thus obtaining fast and reliable inversion results for large, complex models. Equivalent storage of the geometric trellis is used to calculate the sensitivity matrix, and the inversion is based on GPU parallel computing technology. The parallel computing program, optimized by reducing data transfers, access restrictions, and instruction restrictions as well as by latency hiding, greatly reduces memory usage, speeds up computation, and makes fast inversion of large models possible. Comparing the computing speed of the traditional single-threaded CPU method and the CUDA-based GPU parallel approach verifies the excellent acceleration of GPU parallel computing, which opens a path to practical application for theoretical inversion methods otherwise restricted by computing speed and computer memory. Model tests verify that the focusing inversion method can overcome the problems of a severe skin effect and ambiguity of geological body boundaries; moreover, increasing the number of model cells and inversion data can more clearly depict the boundary position of the anomalous body and delineate its specific shape. (A hedged minimum-support re-weighting sketch follows this entry.)
Keywords: large-scale gravity data; GPU parallel computing; CUDA; equivalent geometric trellis; focusing inversion
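The abstract describes focusing (minimum-support) regularized inversion accelerated on a GPU. The NumPy sketch below shows one iteratively re-weighted step of such an inversion on a synthetic sensitivity matrix; the CPU linear solve stands in for the paper's CUDA implementation, and the matrix, focusing parameter, and regularization weight are assumptions.

```python
# Hedged sketch: one re-weighting step of a minimum-support ("focusing") regularised
# least-squares inversion, solved densely with NumPy.
import numpy as np

def focusing_step(A: np.ndarray, d: np.ndarray, m: np.ndarray, lam: float = 1e-2, eps: float = 1e-3) -> np.ndarray:
    """Solve (A^T A + lam * W) m = A^T d with minimum-support weights W = diag(1/(m_i^2 + eps^2))."""
    w = 1.0 / (m ** 2 + eps ** 2)
    lhs = A.T @ A + lam * np.diag(w)
    return np.linalg.solve(lhs, A.T @ d)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.normal(size=(50, 200))               # synthetic sensitivity matrix (data x cells)
    m_true = np.zeros(200); m_true[90:95] = 1.0  # compact synthetic "anomalous body"
    d = A @ m_true
    m = np.full(200, 0.1)
    for _ in range(10):                           # iteratively re-weighted focusing iterations
        m = focusing_step(A, d, m)
    print(np.argmax(np.abs(m)), float(np.max(np.abs(m))))
```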
8. Trend Analysis of Large-Scale Twitter Data Based on Witnesses during a Hazardous Event: A Case Study on California Wildfire Evacuation
Authors: Syed A. Morshed, Khandakar Mamun Ahmed, Kamar Amine, Kazi Ashraf Moinuddin. 《World Journal of Engineering and Technology》, 2021, No. 2, pp. 229-239 (11 pages)
Social media data have created a paradigm shift in assessing situational awareness during natural disasters and emergencies such as wildfires, hurricanes, and tropical storms. Twitter, as an emerging data source, is an effective and innovative digital platform for observing trends from the perspective of social media users who are direct or indirect witnesses of a calamitous event. This paper collects and analyzes Twitter data related to the recent wildfire in California to perform a trend analysis by classifying firsthand and credible information from Twitter users. Tweets on the wildfire are classified by witness type into two groups: 1) direct witnesses and 2) indirect witnesses. The collected and analyzed information can be useful for law enforcement agencies and humanitarian organizations for communication and verification of situational awareness during wildfire hazards. Trend analysis is an aggregated approach that includes sentiment analysis and topic modeling performed through domain-expert manual annotation and machine learning, and it ultimately builds a fine-grained analysis to assess evacuation routes and provide valuable information to firsthand emergency responders. (A hedged rule-based witness-tagging sketch follows this entry.)
Keywords: wildfire; evacuation; Twitter; large-scale data; topic model; sentiment analysis; trend analysis
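As a toy stand-in for the witness classification step described above, the sketch below tags tweets as direct or indirect witnesses using first-person evacuation cues. The cue lists are assumptions for illustration; the paper relies on domain-expert annotation and machine learning rather than fixed rules.

```python
# Hedged sketch: rule-of-thumb tagging of tweets as direct vs. indirect witness accounts.
import re

DIRECT_CUES = [r"\bI\s+(see|saw|smell|am evacuating|evacuated)\b",
               r"\bmy (house|street|neighborhood)\b", r"\bnear me\b"]
INDIRECT_CUES = [r"\breported\b", r"\baccording to\b", r"\bnews\b", r"https?://"]

def witness_type(tweet: str) -> str:
    if any(re.search(p, tweet, re.IGNORECASE) for p in DIRECT_CUES):
        return "direct"
    if any(re.search(p, tweet, re.IGNORECASE) for p in INDIRECT_CUES):
        return "indirect"
    return "unknown"

if __name__ == "__main__":
    print(witness_type("I am evacuating now, the fire is near me on Hwy 29"))
    print(witness_type("According to the news, evacuation orders were issued"))
```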
9. Semi-supervised Affinity Propagation Clustering Based on Subtractive Clustering for Large-Scale Data Sets
Authors: Qi Zhu, Huifu Zhang, Quanqin Yang. 《国际计算机前沿大会会议论文集》, 2015, No. 1, pp. 76-77 (2 pages)
Faced with a growing number of large-scale data sets, the affinity propagation clustering algorithm must build a similarity matrix during computation, which brings huge storage and computation costs. This paper therefore proposes an improved affinity propagation clustering algorithm. First, subtractive clustering is added, using the density values of the data points to obtain initial cluster points. Then, the similarity distances between the initial cluster points are calculated and, borrowing the idea of semi-supervised clustering, pairwise constraint information is added to construct a sparse similarity matrix. Finally, AP clustering is conducted on the cluster representative points until a suitable cluster division is reached. Experimental results show that the algorithm greatly reduces computation and the storage required for the similarity matrix, and outperforms the original algorithm in clustering quality and processing speed. (A hedged subtractive-clustering sketch follows this entry.)
Keywords: subtractive clustering; initial cluster; affinity propagation clustering; semi-supervised clustering; large-scale data sets
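The sketch below illustrates the subtractive-clustering step that the abstract uses to obtain initial cluster points before affinity propagation: points are ranked by a kernel-density "potential" and centres are picked greedily while suppressing their neighbourhoods. The radius parameter and the suppression rule are simplified assumptions, not the paper's exact procedure.

```python
# Hedged sketch: subtractive clustering to pick density-based initial centres.
import numpy as np

def subtractive_centres(X: np.ndarray, ra: float = 1.0, n_centres: int = 5) -> np.ndarray:
    """Rank points by a kernel-density potential and greedily pick centres, suppressing neighbours."""
    alpha = 4.0 / ra ** 2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    potential = np.exp(-alpha * d2).sum(axis=1)
    centres = []
    for _ in range(n_centres):
        best = int(np.argmax(potential))
        centres.append(X[best])
        # Subtract the chosen centre's influence so nearby points are not picked again.
        potential -= potential[best] * np.exp(-alpha * d2[best])
    return np.array(centres)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2)) for c in ([0, 0], [4, 4], [0, 4])])
    print(subtractive_centres(X, ra=1.5, n_centres=3).round(2))
```

The resulting centres (rather than all points) would then feed the sparse similarity matrix for AP clustering, which is where the storage saving claimed in the abstract comes from.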
10. MPTCP Incast in Data Center Networks (Citations: 6)
Authors: LI Ming, Andrey Lukyanenko, Sasu Tarkoma, Antti Ylä-Jääski. 《China Communications》, SCIE CSCD, 2014, No. 4, pp. 25-37 (13 pages)
In recent years, dual-homed topologies have appeared in data centers in order to offer higher aggregate bandwidth by using multiple paths simultaneously. Multipath TCP (MPTCP) has been proposed as a replacement for TCP in those topologies, as it can efficiently offer improved throughput and better fairness. However, we have found that MPTCP has a problem with incast collapse, where the receiver suffers a drastic goodput drop when it simultaneously requests data from multiple servers. In this paper, we investigate why goodput collapses even though MPTCP is able to actively relieve hot spots. To address the problem, we propose an equally-weighted congestion control algorithm for MPTCP, namely EW-MPTCP, which needs no centralized control, additional infrastructure, or hardware upgrade. In our scheme, in addition to the coupled congestion control performed on each subflow of an MPTCP connection, each subflow performs an additional congestion control operation by weighting the congestion window in inverse proportion to the number of servers. The goal is to mitigate incast collapse by allowing multiple MPTCP subflows to compete fairly with a single TCP flow at the shared bottleneck. Simulation results show that our solution mitigates the incast problem and noticeably improves goodput in data centers. (A hedged weighted coupled-increase sketch follows this entry.)
Keywords: TCP; MPTCP; incast collapse; congestion control; data centers
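To illustrate the EW-MPTCP idea of weighting each subflow's congestion window in inverse proportion to the number of servers, the sketch below scales a standard MPTCP-style coupled increase by 1/num_servers. The coupled-increase form and the constants are assumptions, not the paper's exact equations.

```python
# Hedged sketch: per-ACK coupled window increase, additionally weighted by 1/num_servers
# so that all subflows together compete roughly like one TCP flow at the incast bottleneck.

def coupled_increase(cwnd_subflow: float, cwnd_total: float, alpha: float, num_servers: int) -> float:
    """Return the window increment for one ACK on one subflow."""
    base = min(alpha / cwnd_total, 1.0 / cwnd_subflow)   # classic LIA-style coupled increase
    return base / max(num_servers, 1)                    # equal weighting across servers

if __name__ == "__main__":
    print(coupled_increase(cwnd_subflow=10, cwnd_total=40, alpha=1.0, num_servers=8))
```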
11. Energy consumption and emission mitigation prediction based on data center traffic and PUE for global data centers (Citations: 13)
Authors: Yanan Liu, Xiaoxia Wei, Jinyu Xiao, Zhijie Liu, Yang Xu, Yun Tian. 《Global Energy Interconnection》, 2020, No. 3, pp. 272-282 (11 pages)
With the rapid development of technologies such as big data and cloud computing, exponential growth in data communication and data computing has led to large energy consumption in data centers. Globally, data centers are set to become the world's largest users of energy, with their share rising from 3% in 2017 to 4.5% in 2025. Owing to its unique climate and energy-saving advantages, the high-latitude Pan-Arctic region has gradually become a hotspot for data center site selection in recent years. To predict and analyze the future energy consumption and carbon emissions of global data centers, this paper presents a new prediction method based on global data center traffic and power usage effectiveness (PUE). First, global data center traffic growth is predicted based on Cisco's research. Second, the dynamic global average PUE and the high-latitude PUE based on the Romonet simulation model are obtained, and global data center energy consumption under two scenarios, decentralized and centralized, is analyzed quantitatively via polynomial fitting. The simulation results show that, in 2030, global data center energy consumption and carbon emissions are reduced by about 301 billion kWh and 720 million tons of CO2 in the centralized scenario compared with the decentralized scenario, confirming that establishing data centers in the Pan-Arctic region could effectively relieve climate-change and energy problems. This study provides support for global energy consumption prediction, guidance for the layout of future global data centers from the perspective of energy consumption, and support for the feasibility of integrating energy and information networks under the Global Energy Interconnection concept. (A hedged polynomial-extrapolation sketch follows this entry.)
Keywords: data center; Pan-Arctic; energy consumption; carbon emission; data traffic; PUE; Global Energy Interconnection
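The abstract's energy projection rests on polynomial fitting of traffic- and PUE-derived series. The sketch below shows the bare mechanics of such an extrapolation with NumPy; the yearly figures are made-up placeholders, not the paper's traffic, PUE, or energy data.

```python
# Hedged sketch: quadratic trend fit and extrapolation of an energy series.
import numpy as np

years = np.array([2016, 2017, 2018, 2019, 2020])
energy_twh = np.array([280.0, 305.0, 335.0, 370.0, 410.0])   # illustrative series only

coeffs = np.polyfit(years - years[0], energy_twh, deg=2)      # quadratic trend in offset years
trend = np.poly1d(coeffs)
forecast_2030 = trend(2030 - years[0])
print(f"Illustrative 2030 extrapolation: {forecast_2030:.0f} TWh")
```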
12. An Efficient Priority-Driven Congestion Control Algorithm for Data Center Networks (Citations: 3)
Authors: Jiahua Zhu, Xianliang Jiang, Yan Yu, Guang Jin, Haiming Chen, Xiaohui Li, Long Qu. 《China Communications》, SCIE CSCD, 2020, No. 6, pp. 37-50 (14 pages)
With the emergence of diverse applications in data centers, the demands on quality of service have also become diverse, such as high throughput for elephant flows and low latency for deadline-sensitive flows. However, traditional TCPs are ill-suited to such situations and often result in inefficient data transfers (e.g., missed flow deadlines, throughput collapse), which further degrades the user-perceived quality of service (QoS) in data centers. To reduce the flow completion time of mice and deadline-sensitive flows while promoting the throughput of elephant flows, this paper proposes an efficient, deadline-aware, priority-driven congestion control (PCC) protocol that grants mice and deadline-sensitive flows the highest priority. Specifically, PCC computes the priority of different flows according to the size of the transmitted data, the remaining data volume, and the flow's deadline. PCC then adjusts the congestion window according to the flow priority and the degree of network congestion. Furthermore, switches in the data center control the input/output of packets based on the flow priority and the queue length. Unlike existing TCPs, PCC provides an effective method to compute and encode flow priority explicitly, so that switches can manage packets efficiently and ensure the transfers of high-priority flows through weighted priority scheduling with minor modification. Experimental results show that PCC improves the data transfer performance of mice and deadline-sensitive flows while guaranteeing the throughput of elephant flows. (A hedged priority-scoring sketch follows this entry.)
Keywords: data center network; low latency; priority; switch scheduling; transmission control protocol
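As a hedged illustration of PCC's priority computation, the sketch below derives a flow priority from bytes already sent, bytes remaining, and time to deadline, and scales the congestion back-off by that priority. The weighting formula and constants are assumptions, not the protocol's actual priority encoding.

```python
# Hedged sketch: flow priority from size, remaining volume, and deadline, plus a
# priority-aware window adjustment.
from typing import Optional

def flow_priority(sent_bytes: int, remaining_bytes: int, time_to_deadline: Optional[float]) -> float:
    """Higher value = higher priority: small/short flows and tight deadlines come first."""
    size_term = 1.0 / (1.0 + sent_bytes / 1e6)            # mice flows rank above elephants
    remaining_term = 1.0 / (1.0 + remaining_bytes / 1e6)  # nearly finished flows rank higher
    deadline_term = 1.0 if time_to_deadline is None else min(1.0 / max(time_to_deadline, 1e-3), 10.0)
    return size_term + remaining_term + deadline_term

def adjust_cwnd(cwnd: float, priority: float, congestion_level: float) -> float:
    """Grow when uncongested; back off more gently for high-priority flows when congested."""
    if congestion_level <= 0:
        return cwnd + 1.0
    backoff = min(congestion_level / (1.0 + priority), 0.5)
    return max(cwnd * (1.0 - backoff), 1.0)

if __name__ == "__main__":
    p_mouse = flow_priority(sent_bytes=50_000, remaining_bytes=20_000, time_to_deadline=0.5)
    p_elephant = flow_priority(sent_bytes=50_000_000, remaining_bytes=200_000_000, time_to_deadline=None)
    print(p_mouse > p_elephant, adjust_cwnd(20.0, p_mouse, congestion_level=0.4))
```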
13. Cost-Aware Multi-Domain Virtual Data Center Embedding (Citations: 1)
Authors: Xiao Ma, Zhongbao Zhang, Sen Su. 《China Communications》, SCIE CSCD, 2018, No. 12, pp. 190-207 (18 pages)
A virtual data center is a new form of cloud computing applied to data centers, and the virtual data center embedding problem has attracted much attention as one of its most important challenges. Energy is a critical issue: data center energy consumption has increased by dozens of times in the last decade. In this paper, we study the cost-aware multi-domain virtual data center embedding problem. We first build an energy consumption model, covering both virtual machine nodes and virtual switch nodes, to quantify the energy consumed during the embedding process. Based on this model, we present a heuristic algorithm for cost-aware multi-domain virtual data center embedding that consists of two steps: inter-domain embedding, which divides virtual data center requests into slices and selects an appropriate single data center for each, and intra-domain embedding, which embeds the requests within each data center. We propose an inter-domain embedding algorithm based on label propagation to select the appropriate data center, and a cost-aware embedding algorithm for the intra-domain step. Extensive simulation results show that the proposed algorithm effectively reduces energy consumption while ensuring a high embedding success ratio. (A hedged label-propagation sketch follows this entry.)
Keywords: virtual data center; embedding; multi-domain; cost-aware; label propagation
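The inter-domain step above partitions a virtual data center request across domains via label propagation. The sketch below shows a plain label-propagation pass on a toy request graph with two seeded domains; the graph, the seeds, and the fixed round count are assumptions, not the paper's embedding algorithm.

```python
# Hedged sketch: majority-vote label propagation to split a VDC request graph into slices.
from collections import Counter

def label_propagation(adj: dict[str, list[str]], seeds: dict[str, int], rounds: int = 5) -> dict[str, int]:
    """Seeded nodes keep their domain label; others repeatedly adopt their neighbours' majority label."""
    labels = {v: seeds.get(v) for v in adj}            # unseeded nodes start unlabelled (None)
    for _ in range(rounds):
        for v in adj:
            if v in seeds:
                continue
            votes = Counter(labels[u] for u in adj[v] if labels[u] is not None)
            if votes:
                labels[v] = votes.most_common(1)[0][0]
    return labels

if __name__ == "__main__":
    adj = {"vm1": ["vm2", "vm3"], "vm2": ["vm1", "vm3"], "vm3": ["vm1", "vm2", "vm4"],
           "vm4": ["vm3", "vm5", "vm6"], "vm5": ["vm4", "vm6"], "vm6": ["vm4", "vm5"]}
    print(label_propagation(adj, seeds={"vm1": 0, "vm4": 1}))   # two slices emerge around the seeds
```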
14. Traffic-Aware VDC Embedding in Data Center: A Case Study of FatTree (Citations: 2)
Authors: LUO Shouxi, YU Hongfang, LI Lemin, LIAO Dan, SUN Gang. 《China Communications》, SCIE CSCD, 2014, No. 7, pp. 142-152 (11 pages)
Virtualization is a common technology for resource sharing in data centers. To make efficient use of data center resources, the key challenge is to map customer demands, modeled as virtual data centers (VDCs), onto the physical data center effectively. In this paper, we focus on this problem. Unlike previous works, our study of the VDC embedding problem assumes that switch resources are the bottleneck of data center networks (DCNs). To this end, we propose a relative cost metric to evaluate embedding strategies and, exploiting the properties of the fat-tree, decouple the embedding problem into VM placement with marginal resource assignment and virtual link mapping with decided source-destination pairs. We design a traffic-aware embedding algorithm (TAE) and first-fit virtual link mapping (FFLM) to map virtual data center requests onto the physical data center. Simulation results show that TAE+FFLM increases the acceptance rate and reduces network cost (by about 49% in the studied case) at the same time. The traffic-aware embedding algorithm reduces the load of core-link traffic and creates optimization opportunities for data center network energy conservation. (A hedged first-fit link-mapping sketch follows this entry.)
Keywords: virtual data center; embedding; switch capacity; fat-tree
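To illustrate the first-fit virtual link mapping (FFLM) step named above, the sketch below assigns each virtual link to the first candidate physical path with enough spare bandwidth. The path table and capacities are toy values; the actual algorithm operates on a fat-tree with switch capacity as the bottleneck resource.

```python
# Hedged sketch: first-fit mapping of virtual links onto candidate physical paths.

def first_fit_link_mapping(vlinks: list[tuple[str, float]],
                           paths: dict[str, list[tuple[str, float]]]) -> dict[str, str]:
    """vlinks: (virtual link id, bandwidth demand); paths[vl] lists (physical path id, spare bw)."""
    residual = {pid: bw for cands in paths.values() for pid, bw in cands}
    mapping = {}
    for vl, demand in vlinks:
        for pid, _ in paths[vl]:
            if residual[pid] >= demand:          # the first candidate that still fits wins
                residual[pid] -= demand
                mapping[vl] = pid
                break
    return mapping

if __name__ == "__main__":
    vlinks = [("vl1", 4.0), ("vl2", 3.0), ("vl3", 5.0)]
    paths = {"vl1": [("p_edge", 5.0), ("p_core", 10.0)],
             "vl2": [("p_edge", 5.0), ("p_core", 10.0)],
             "vl3": [("p_core", 10.0)]}
    print(first_fit_link_mapping(vlinks, paths))
```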
15. Silicon photonic transceivers for application in data centers (Citations: 3)
Authors: Haomiao Wang, Hongyu Chai, Zunren Lv, Zhongkai Zhang, Lei Meng, Xiaoguang Yang, Tao Yang. 《Journal of Semiconductors》, EI CAS CSCD, 2020, No. 10, pp. 1-16 (16 pages)
Global data traffic is growing rapidly, and the demand for optoelectronic transceivers in data centers (DCs) is increasing correspondingly. In this review, we first briefly introduce the development of optoelectronic transceivers in DCs and the advantages of silicon photonic chips fabricated with the complementary metal oxide semiconductor process. We also summarize research on the main components of silicon photonic transceivers. In particular, quantum dot lasers have shown great potential as light sources for silicon photonic integration, whether through bonding or monolithic integration, thanks to their unique advantages over conventional quantum-well counterparts. Several solutions for high-speed optical interconnection in DCs are then discussed; among them, wavelength division multiplexing and four-level pulse-amplitude modulation have been widely studied and applied. At present, the application of coherent optical communication technology has moved from the backbone network to the metro network, and now to DCs.
Keywords: data center; silicon-based optoelectronic transceiver; high-speed optical interconnection; quantum dot lasers
16. Delay-Differentiated Scheduling in Optical Packet Switches for Cloud Data Centers (Citations: 2)
Authors: LI Yaofang, XIAO Jie, WU Bin, WEN Hong, YU Hongfang, YANG Shu, XIN Shanshan, GUO Jianing. 《China Communications》, SCIE CSCD, 2015, No. 8, pp. 22-32 (11 pages)
We consider differentiated time-critical task scheduling in an N×N input-queued optical packet switch to ensure 100% throughput and meet the different delay requirements of various data center modules. Existing schemes either consider slot-by-slot scheduling with queue depth serving as the delay metric, or assume that every input-output connection has the same delay bound in batch scheduling mode. The former neglects the effect of reconfiguration overhead, which can cripple system performance, while the latter cannot satisfy users' differentiated Quality of Service (QoS) requirements. To make up for these deficiencies, we propose a new batch scheduling scheme that meets varied port-to-port delay requirements in a best-effort manner. Moreover, a speedup is considered to compensate for both the reconfiguration overhead and the unavoidable slot wastage in the switch fabric. Given the traffic matrix and delay-constraint matrix, this paper proposes two heuristic algorithms, Stringent Delay First (SDF) and m-order SDF (m-SDF), to realize 100% packet switching while maximizing the ratio of satisfied delay constraints. The performance of our scheme is verified by extensive numerical simulations. (A hedged stringent-delay-first ordering sketch follows this entry.)
Keywords: delay-differentiated packet scheduling; optical switch; data center; cloud service
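As a deliberately simplified illustration of the Stringent Delay First idea, the sketch below schedules port-to-port demands with the tightest delay bounds into the earliest batch slots on a single timeline. The one-connection-per-slot simplification ignores the parallelism of an N×N switch fabric and is an assumption for readability, not the paper's SDF/m-SDF algorithm.

```python
# Hedged sketch: greedy "stringent delay first" ordering of port-to-port demands.

def stringent_delay_first(demands: list[tuple[str, int, int]]) -> list[tuple[str, int]]:
    """demands: (connection id, required slots, delay bound in slots) -> list of (id, start slot)."""
    schedule, next_free = [], 0
    for conn, slots, bound in sorted(demands, key=lambda d: d[2]):   # tightest bound first
        start = next_free
        if start + slots > bound:
            continue                      # best-effort: skip demands whose bound cannot be met
        schedule.append((conn, start))
        next_free = start + slots
    return schedule

if __name__ == "__main__":
    demands = [("a->b", 2, 3), ("c->d", 1, 2), ("e->f", 3, 10)]
    print(stringent_delay_first(demands))
```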
17. FP-STE: A Novel Node Failure Prediction Method Based on Spatio-Temporal Feature Extraction in Data Centers (Citations: 2)
Authors: Yang Yang, Jing Dong, Chao Fang, Ping Xie, Na An. 《Computer Modeling in Engineering & Sciences》, SCIE EI, 2020, No. 6, pp. 1015-1031 (17 pages)
The development of cloud computing and virtualization technology has brought great challenges to the reliability of data center services. Data centers typically contain a large number of compute and storage nodes that may fail and affect the quality of service, so failure prediction is an important means of ensuring service availability. Predicting node failure in cloud-based data centers is challenging because the failure symptoms have complex characteristics and the distribution imbalance between failure samples and normal samples is widespread, which leads to inaccurate predictions. Targeting these challenges, this paper proposes a novel failure prediction method, FP-STE (Failure Prediction based on Spatio-Temporal feature Extraction). First, an improved recurrent neural network, HW-GRU (improved GRU based on a HighWay network), and a convolutional neural network (CNN) are used to extract the temporal and spatial features of multivariate data respectively, increasing the discrimination between different types of failure symptoms and improving prediction accuracy. The intermediate results of the two models are then added as features to SCS-XGBoost, an ensemble learning model improved by an integrated strategy of oversampling and cost-sensitive learning, to predict the probability and precise type of future node failures. Experimental results on real data sets confirm the effectiveness and superiority of FP-STE. (A hedged GRU+CNN feature-extraction sketch follows this entry.)
Keywords: failure prediction; data center; feature extraction; XGBoost; service availability
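The sketch below mirrors the two-branch feature extraction described above: a GRU summarises the temporal dimension, a 1-D CNN summarises the cross-metric dimension, and the concatenated features feed a boosted-tree classifier. A plain nn.GRU replaces the paper's HW-GRU, scikit-learn's GradientBoostingClassifier stands in for SCS-XGBoost, and the sizes and synthetic labels are assumptions.

```python
# Hedged sketch: GRU (temporal) + Conv1d (cross-metric) features feeding a boosted classifier.
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingClassifier

class FeatureExtractor(nn.Module):
    def __init__(self, n_metrics: int, hidden: int = 16):
        super().__init__()
        self.gru = nn.GRU(input_size=n_metrics, hidden_size=hidden, batch_first=True)
        self.cnn = nn.Sequential(nn.Conv1d(n_metrics, hidden, kernel_size=3, padding=1),
                                 nn.ReLU(), nn.AdaptiveAvgPool1d(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_metrics) of node monitoring metrics
        _, h = self.gru(x)                            # temporal summary: (1, batch, hidden)
        c = self.cnn(x.transpose(1, 2)).squeeze(-1)   # cross-metric summary: (batch, hidden)
        return torch.cat([h.squeeze(0), c], dim=1)

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(64, 20, 8)                   # 64 windows, 20 time steps, 8 metrics
    y = (x.mean(dim=(1, 2)) > 0).long().numpy()  # synthetic "failure" labels for the demo
    feats = FeatureExtractor(8)(x).detach().numpy()
    clf = GradientBoostingClassifier().fit(feats, y)
    print(clf.score(feats, y))
```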
18. Multi-Dimensional Aware Scheduling for Co-optimizing Utilization in Data Center (Citations: 1)
Authors: 孙鑫, 徐鹏, 双锴, 苏森. 《China Communications》, SCIE CSCD, 2011, No. 6, pp. 19-27 (9 pages)
Resource scheduling is crucial to data centers. However, most previous works focus only on one-dimensional resource models, ignoring the fact that multiple resources are utilized simultaneously, including CPU, memory, and network bandwidth. As cloud computing allows uncoordinated and heterogeneous users to share a data center, competition for multiple resources has become increasingly severe. Motivated by the differences in integrated utilization obtained from different packing schemes, this paper treats the scheduling problem as a multi-dimensional combinatorial optimization problem with constraint satisfaction. Given its NP-hardness, we present Multiple-attribute-decision-based Integrated Resource Scheduling (MIRS), a novel heuristic algorithm that obtains an approximately optimal solution. Simulation results show that, for various workload sets, our algorithm has significant advantages in efficiency and performance compared with previous methods. (A hedged weighted-scoring sketch follows this entry.)
Keywords: virtual data center; resource scheduling; multiple attribute decision making; efficiency; performance
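As a loose illustration of ranking candidate hosts by a multi-attribute score over CPU, memory, and bandwidth, the sketch below uses simple additive weighting of per-dimension packing tightness. The weights and the candidate set are assumptions, not the MIRS algorithm.

```python
# Hedged sketch: simple-additive-weighting score for multi-dimensional placement decisions.

WEIGHTS = {"cpu": 0.4, "mem": 0.3, "net": 0.3}   # assumed importance of each dimension

def integrated_score(free: dict[str, float], demand: dict[str, float]) -> float:
    """Higher is better: prefer hosts whose residual capacities fit the demand snugly but safely."""
    score = 0.0
    for dim, w in WEIGHTS.items():
        if free[dim] < demand[dim]:
            return float("-inf")                 # infeasible in at least one dimension
        score += w * (demand[dim] / free[dim])   # reward tight packing to raise utilisation
    return score

if __name__ == "__main__":
    demand = {"cpu": 2.0, "mem": 4.0, "net": 1.0}
    hosts = {"h1": {"cpu": 8.0, "mem": 16.0, "net": 10.0},
             "h2": {"cpu": 3.0, "mem": 6.0, "net": 2.0},
             "h3": {"cpu": 1.0, "mem": 8.0, "net": 4.0}}
    best = max(hosts, key=lambda h: integrated_score(hosts[h], demand))
    print(best)   # h2 packs the demand most tightly among the feasible hosts
```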
19. Data Center Network Architecture (Citations: 2)
Authors: Yantao Sun, Jing Cheng, Konggui Shi, Qiang Liu. 《ZTE Communications》, 2013, No. 1, pp. 54-61 (8 pages)
1 Introduction: The history of data centers can be traced back to the 1960s. Early data centers were deployed on mainframes that were time-shared by users via remote terminals. The boom in data centers came during the internet era. Many companies started building large internet-connected facilities …
Keywords: data center network; network architecture; network topology; virtual machine migration
20. The Use of Single-Phase Immersion Cooling by Using Two Types of Dielectric Fluid for Data Center Energy Savings (Citations: 3)
Authors: Nugroho Agung Pambudi, Awibi Muhamad Yusuf, Alfan Sarifudin. 《Energy Engineering》, EI, 2022, No. 1, pp. 275-286 (12 pages)
Data centers are recognized as one of the most important aspects of the fourth industrial revolution, since conventional data centers are inefficient and depend on high energy consumption, of which cooling is responsible for 40%. This research therefore proposes the immersion cooling method to address the high energy consumption of data centers by cooling their components with two types of dielectric fluid. Four experimental stages are used, covering fluid types, cooling effectiveness, optimization, and durability. Benchmark software is used to drive the CPU to maximum load, with temperature data recorded over 24 h. The results show that immersion cooling keeps temperatures 13 °C lower than the conventional cooling method, which translates into energy savings in the data center. The most effective operating point for lowering temperature is a flow rate of 1.5 lpm and a fan speed of 800 rpm. Furthermore, of the two dielectric fluids, mineral oil (MO) performs better than virgin coconut oil (VCO). In the durability experiment, no component damage was found after five months of immersion in the fluid.
Keywords: single-phase immersion cooling; data center; dielectric fluid; mineral oil; virgin coconut oil