Cloud computing technology is applied to traveling wave fault location, establishing a new technology platform for multi-terminal traveling wave fault location in complicated power systems. In this paper, a multi-terminal traveling wave fault location network is developed, and massive data storage, management, and algorithm realization are implemented on the cloud computing platform. Based on the network topology structure, the section connecting points for any line and the corresponding detection placement in the loop are determined first. The loop is divided into different sections, in which the shortest transmission path for any fault point is directly and uniquely obtained. In order to minimize the number of traveling wave acquisition units (TWUs), a multi-objective optimal configuration model for TWUs is then set up based on full network observability. Finally, according to the TWU distribution, the fault section can be located using temporal correlation, and the final fault location point can be precisely calculated by fusing all the times recorded by the TWUs. PSCAD/EMTDC simulation results show that the proposed method can quickly, accurately, and reliably locate the fault point with a limited number of optimally placed TWUs.
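The abstract above builds on the time-difference principle that multi-terminal methods generalize. As a minimal sketch (not the paper's multi-terminal fusion algorithm): a fault at distance x from terminal A on a line of length L launches waves that reach terminals A and B at times tA and tB, from which x = (L + v·(tA − tB)) / 2. The wave speed and the example times below are illustrative assumptions, not values from the paper.

```python
def two_terminal_fault_distance(line_length_km, t_a, t_b, wave_speed_km_per_ms=295.0):
    """Distance from terminal A to the fault, given wavefront arrival times (ms).

    From x = v*t_a and (L - x) = v*t_b, subtracting gives
    x = (L + v*(t_a - t_b)) / 2.
    """
    return (line_length_km + wave_speed_km_per_ms * (t_a - t_b)) / 2.0

# Fault 120 km from A on a 300 km line: the wave travels 120 km to A, 180 km to B.
v = 295.0  # km/ms, an assumed propagation speed near the speed of light
x = two_terminal_fault_distance(300.0, 120.0 / v, 180.0 / v, v)
print(round(x, 3))  # 120.0
```

Multi-terminal schemes such as the one described improve on this by fusing many such pairwise time differences over the network's shortest transmission paths, which tolerates missing or noisy records at individual TWUs.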
The authors of this paper have previously proposed the global virtual data space system (GVDS) to aggregate the scattered and autonomous storage resources in China's national supercomputer grid (the National Supercomputing Centers in Guangzhou, Jinan, and Changsha, the Shanghai Supercomputing Center, and the Computer Network Information Center of the Chinese Academy of Sciences) into a storage system that spans the wide area network (WAN), realizing unified management of global storage resources in China. At present, the GVDS has been successfully deployed in the China National Grid environment. However, when accessing and sharing remote data over the WAN, the GVDS causes redundant data transmission and wastes substantial network bandwidth. In this paper, we propose an edge cache system as a supplementary system to the GVDS to improve the performance of upper-level applications accessing and sharing remote data. Specifically, we first design the architecture of the edge cache system and then study the key technologies of this architecture: an edge cache index mechanism based on double-layer hashing, an edge cache replacement strategy based on the GDSF algorithm, request routing based on consistent hashing, and cluster membership maintenance based on the SWIM protocol. The experimental results show that the edge cache system successfully implements the relevant operations (read, write, deletion, modification, etc.) and is functionally compatible with the POSIX interface. Further, in terms of performance, it greatly reduces the amount of data transmission and increases the data access bandwidth when the accessed file is located in the edge cache system, i.e., its performance approaches that of a network file system in a local area network (LAN).
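Of the techniques listed above, the GDSF (Greedy-Dual-Size-Frequency) replacement strategy is compact enough to sketch. Each cached object carries a priority clock + frequency × cost / size; the lowest-priority object is evicted, and the cache clock is advanced to the evicted priority so long-idle entries age out. This is a generic GDSF illustration, not the paper's implementation; the field layout and unit cost are assumptions.

```python
class GDSFCache:
    """Toy GDSF cache: priority = clock + freq * cost / size."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.clock = 0.0
        self.entries = {}  # key -> [priority, freq, size]

    def access(self, key, size, cost=1.0):
        """Record an access; return True on hit, False on miss."""
        if key in self.entries:
            e = self.entries[key]
            e[1] += 1                                  # bump frequency
            e[0] = self.clock + e[1] * cost / e[2]     # recompute priority
            return True
        # Miss: evict lowest-priority entries until the object fits.
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries, key=lambda k: self.entries[k][0])
            self.clock = self.entries[victim][0]       # aging: advance the clock
            self.used -= self.entries[victim][2]
            del self.entries[victim]
        self.entries[key] = [self.clock + cost / size, 1, size]
        self.used += size
        return False

cache = GDSFCache(capacity_bytes=100)
cache.access("small.txt", 10)
cache.access("small.txt", 10)   # repeat access raises its priority
cache.access("big.bin", 95)     # forces eviction to make room
```

The freq/size trade-off is what makes GDSF attractive for an edge cache: small, frequently re-read files (e.g., metadata-heavy workloads) are retained in preference to large, rarely touched ones.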
Data deduplication for file communication across a wide area network (WAN), in applications such as file synchronization and mirroring in cloud environments, usually achieves significant bandwidth saving at the cost of significant deduplication time overheads. These overheads include the time required for data deduplication at the two geographically distributed nodes (e.g., the disk access bottleneck) and the duplication query/answer operations between the sender and the receiver, since each query or answer introduces at least one round-trip time (RTT) of latency. In this paper, we present a data deduplication system across the WAN with metadata feedback and metadata utilization (MFMU), in order to harness these deduplication-related time overheads. In the proposed MFMU system, selective metadata feedback from the receiver to the sender is introduced to reduce the number of duplication query/answer operations. In addition, to harness the metadata-related disk I/O operations at the receiver, as well as the bandwidth overhead introduced by the metadata feedback, a metadata utilization component based on a hysteresis hash re-chunking mechanism is introduced. Our experimental results demonstrate that MFMU achieves an average of 20%-40% deduplication acceleration without the bandwidth saving ratio being reduced by the metadata feedback, as compared with the "baseline" content-defined chunking (CDC) used in LBFS (Low-bandwidth Network File System) and existing state-of-the-art Bimodal chunking based deduplication solutions.
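The "baseline" CDC scheme the abstract compares against can be illustrated in a few lines: a hash over a sliding window declares a chunk boundary whenever it hits a target value modulo a divisor, so boundaries follow content rather than fixed offsets and survive insertions. This toy version recomputes a CRC per position for simplicity (real CDC, as in LBFS, uses an incrementally updated Rabin fingerprint); the window size, divisor, and size bounds are illustrative assumptions.

```python
import zlib

def cdc_chunks(data, window=16, divisor=64, target=0, min_size=32, max_size=1024):
    """Split bytes into content-defined chunks; concatenating them restores data."""
    chunks, start = [], 0
    for i in range(len(data)):
        if i - start < min_size:
            continue  # enforce a minimum chunk size
        h = zlib.crc32(data[max(start, i - window):i])
        if h % divisor == target or i - start >= max_size:
            chunks.append(data[start:i])  # boundary: content match or max size
            start = i
    if start < len(data):
        chunks.append(data[start:])       # final tail chunk
    return chunks

blob = bytes(range(256)) * 8              # 2 KiB of sample data
chunks = cdc_chunks(blob)
assert b"".join(chunks) == blob           # chunking is lossless
```

Because boundaries depend only on local content, inserting bytes into a file shifts at most the chunks around the edit, so unchanged chunks still hash to known fingerprints and need not be re-sent; MFMU's contribution is reducing how often the sender must ask the receiver whether a fingerprint is already known.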
Funding: the Key Project of Smart Grid Technology and Equipment of the National Key Research and Development Plan of China (2016YFB0900600), the National Natural Science Foundation Fund for Distinguished Young Scholars (51425701), the National Natural Science Foundation of China (51207013), the Hunan Province Natural Science Fund for Distinguished Young Scholars (2015JJ1001), and the Education Department of Hunan Province Project (15C0032).
Funding: supported by the National Key Research and Development Program of China (2018YFB0203901), the National Natural Science Foundation of China (Grant No. 61772053), the Hebei Youth Talents Support Project (BJ2019008), and the Natural Science Foundation of Hebei Province (F2020204003).
Funding: This work was supported by the National Science Fund for Distinguished Young Scholars of China under Grant No. 61125102 and the State Key Program of the National Natural Science Foundation of China under Grant No. 61133008.