The Internet of Things (IoT) and cloud technologies have encouraged massive data storage at central repositories. Software-defined networks (SDN) support the processing of data and restrict the transmission of duplicate values. It is necessary to use a data de-duplication mechanism to reduce communication costs and storage overhead. Existing state-of-the-art schemes suffer from computational overhead due to deterministic or random tree-based tag generation, which further increases as the file size grows. This paper presents an efficient file-level de-duplication scheme (EFDS) where the cost of creating tags is reduced by employing a hash table with a key-value pair for each block of the file. Further, an algorithm for hash table-based duplicate block identification and storage (HDBIS) is presented based on fingerprints, maintaining a linked list of similar duplicate blocks at the same index. Hash tables normally have constant time complexity for looking up, inserting, and deleting stored data regardless of the input size. The experimental results show that the proposed EFDS scheme performs better than its counterparts.
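As a concrete illustration of this kind of block-level fingerprint indexing, the following Python sketch hashes fixed-size blocks and chains block IDs that share a fingerprint, so a duplicate block is stored only once. The block size, the SHA-256 fingerprint, and the class and method names are assumptions made for illustration, not the paper's exact EFDS/HDBIS design.

```python
import hashlib
from collections import defaultdict

BLOCK_SIZE = 4096  # assumed fixed block size; the paper's choice may differ

class BlockStore:
    """Hash-table-keyed dedup store: fingerprint -> chained list of block ids."""
    def __init__(self):
        self.index = defaultdict(list)   # fingerprint -> block ids sharing that index slot
        self.blocks = {}                 # block id -> raw bytes
        self.next_id = 0

    def put_block(self, data: bytes) -> int:
        fp = hashlib.sha256(data).hexdigest()      # per-block fingerprint (tag)
        for bid in self.index[fp]:                 # walk the chain of candidates at this index
            if self.blocks[bid] == data:           # byte-wise check guards against collisions
                return bid                         # duplicate: reuse the existing block
        bid = self.next_id
        self.next_id += 1
        self.blocks[bid] = data
        self.index[fp].append(bid)
        return bid

    def put_file(self, payload: bytes) -> list[int]:
        """Split a file into blocks and return its block-id recipe; duplicates are stored once."""
        return [self.put_block(payload[i:i + BLOCK_SIZE])
                for i in range(0, len(payload), BLOCK_SIZE)]
```

A file is then represented by its list of block IDs, and two files containing identical blocks share the underlying storage.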
As a solution for data storage and information sharing in peer-to-peer (P2P) networks, a novel distributed hash table (DHT) structure called PChord is presented in this paper. PChord adopts a bi-directional searching mechanism superior to Chord and enhances the structure of the finger table. Based on the Hilbert space-filling curve, PChord realizes a mapping mechanism for multi-keyword approximate searching. Theoretical proofs and simulation results show that, compared with the Chord and Kademlia protocols, PChord evidently increases the speed of resource searching and message spreading while maintaining satisfactory load balance.
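One ingredient of bi-directional searching is that a node may route a lookup either clockwise or counter-clockwise around the identifier ring, whichever distance is shorter. The sketch below shows only that direction choice, under an assumed 16-bit identifier space; PChord's actual finger-table structure and Hilbert-curve mapping are not reproduced here.

```python
RING_BITS = 16          # assumed identifier-space size, for illustration only
RING = 1 << RING_BITS

def ring_distance(a: int, b: int, clockwise: bool) -> int:
    """Distance from a to b travelling clockwise or counter-clockwise on the ring."""
    return (b - a) % RING if clockwise else (a - b) % RING

def search_direction(node_id: int, key: int) -> str:
    """Pick the direction with the shorter ring distance before consulting the finger table."""
    cw = ring_distance(node_id, key, True)
    ccw = ring_distance(node_id, key, False)
    return "clockwise" if cw <= ccw else "counter-clockwise"
```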
In recent years, reconstructing a sparse map with a simultaneous localization and mapping (SLAM) system on a conventional CPU has undergone remarkable progress. However, obtaining a dense map from the system often requires a high-performance GPU to accelerate computation. This paper proposes a dense mapping approach which can remove outliers and obtain a clean 3D model using a CPU in real time. The dense mapping approach processes keyframes and establishes data association by using multi-threading technology. The outliers are removed by change detection of associated vertices between keyframes. The implicit surface data of inliers is represented by a truncated signed distance function and fused with an adaptive weight. A global hash table and a local hash table are used to store and retrieve surface data for data reuse. Experimental results show that the proposed approach can precisely remove the outliers in the scene and obtain a dense 3D map with a better visual effect in real time.
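For reference, the standard truncated signed distance function (TSDF) fusion step that such systems build on is a per-voxel weighted running average, with the sparse voxels kept in a hash table. The sketch below uses a plain Python dict as the hash table and a constant fusion weight; the truncation distance, voxel size, and the paper's adaptive weighting and global/local table split are assumptions or simplifications.

```python
import math

TRUNC = 0.05    # truncation distance in metres (assumed)
VOXEL = 0.01    # voxel edge length in metres (assumed)

def voxel_key(point):
    """Quantize a 3D point to integer voxel coordinates used as the hash key."""
    return tuple(math.floor(c / VOXEL) for c in point)

class TSDFVolume:
    """Sparse TSDF volume: voxels kept in a dict, standing in for the paper's hash tables."""
    def __init__(self):
        self.tsdf = {}      # voxel key -> truncated signed distance in [-1, 1]
        self.weight = {}    # voxel key -> accumulated fusion weight

    def integrate(self, point, sdf, w_new=1.0):
        """Fuse one signed-distance observation into its voxel by a weighted running average.
        A constant w_new is used here; an adaptive weight would replace it."""
        key = voxel_key(point)
        d = max(-TRUNC, min(TRUNC, sdf)) / TRUNC    # truncate and normalise to [-1, 1]
        w_old = self.weight.get(key, 0.0)
        t_old = self.tsdf.get(key, 0.0)
        self.tsdf[key] = (w_old * t_old + w_new * d) / (w_old + w_new)
        self.weight[key] = w_old + w_new
```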
In the traditional Internet Protocol (IP) architecture, there is an overload of IP semantic problems. Existing solutions focused mainly on the infrastructure of the fixed network, and there is a lack of support for Mobile Ad Hoc Networks (MANETs). To improve scalability, a routing protocol for MANETs is presented based on a locator, named Tree-structure Locator Distance Vector (TLDV). The core of this routing method is the identifier/locator split realized by the Distributed Hash Table (DHT) method, which provides a scalable routing service. The node locator indicates the node's relative location in the network and should be updated whenever the topology changes. Locator space is organized as a tree structure, and the basic routing operation of the TLDV protocol is presented. The TLDV protocol is compared with some classical routing protocols for MANETs on the NS2 platform. Results show that TLDV has better scalability.
Video data location plays a key role in Peer-to-Peer (P2P) live streaming applications. In this paper, we propose a new one-hop Distributed Hash Table (DHT) lookup framework called Streaming-DHT (SDHT) to provide an efficient video data location service. By adopting an enhanced event dissemination mechanism, EDRA+, the accuracy of the routing table at peers can be guaranteed. More importantly, in order to enhance the performance of the video data lookup operation without incurring extra overhead, we design a Distributed Index Mapping and Management Mechanism (DIMM) for SDHT. Both analytical modeling and intensive simulation experiments are conducted to demonstrate the effectiveness of the SDHT framework. Numerical results show that almost 90% of requested video data can be retrieved within one second in SDHT-based systems, and SDHT needs only 26% of the average bandwidth consumption of similar one-hop DHT solutions such as D1HT. This indicates that the SDHT framework is an appropriate data lookup solution for time-sensitive network applications such as P2P live streaming.
For name-based routing/switching in NDN, the key challenges are to manage large-scale forwarding tables, to look up long names of variable lengths, and to deal with frequent updates. Hashing combined with proper length detection is a straightforward yet efficient solution. A binary search strategy can reduce the number of required hash probes in the worst case. However, to keep the search path correct in such a scheme, either backtracking or redundantly storing some prefixes is required, leading to performance or memory issues. In this paper, we make a deep study of the binary search and propose a novel mechanism that ensures a correct search path with neither additional backtracking costs nor redundant memory consumption. Along any binary search path, a Bloom filter is employed at each branching point to verify whether a given prefix is present, instead of storing that prefix there. By this means, we gain a significant optimization in memory efficiency, at the cost of a Bloom check before each probe. Our evaluation experiments on both real-world and randomly synthesized data sets clearly demonstrate these advantages.
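The sketch below is a simplified illustration of Bloom-assisted binary search over per-length hash tables: only real entries are stored in the tables, while the Bloom filter additionally covers every ancestor prefix and is consulted before each probe to steer the search direction. It is not the paper's exact mechanism; in particular, the bookkeeping that guarantees the longest match in every corner case without backtracking is omitted, and the Bloom parameters are arbitrary.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter; k index positions are derived from slices of a SHA-256 digest."""
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)

    def _positions(self, item: str):
        digest = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m for i in range(self.k)]

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def build_fib(names):
    """Per-length hash tables hold only real entries; the Bloom filter also covers every
    ancestor prefix, replacing the redundantly stored marker prefixes."""
    tables, bloom = {}, BloomFilter()
    for name in names:
        parts = name.strip("/").split("/")
        tables.setdefault(len(parts), {})["/".join(parts)] = name   # real entry
        for i in range(1, len(parts) + 1):
            bloom.add("/".join(parts[:i]))                          # ancestor hint only
    return tables, bloom

def lookup(name, tables, bloom):
    """Binary-search the component count; the Bloom filter steers the direction.
    NOTE: corner cases where the best match lies below a branch taken upward are not
    handled here; the paper's per-branch information resolves them without backtracking."""
    parts = name.strip("/").split("/")
    lo, hi, best = 1, len(parts), None
    while lo <= hi:
        mid = (lo + hi) // 2
        prefix = "/".join(parts[:mid])
        if prefix in bloom:                          # cheap check before the hash-table probe
            entry = tables.get(mid, {}).get(prefix)
            if entry is not None:
                best = entry
            lo = mid + 1                             # a longer match may still exist
        else:
            hi = mid - 1                             # no stored name extends this prefix
    return best
```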
Efficient lookup is essential for peer-to-peer networks, and Chord is a representative peer-to-peer lookup scheme based on a distributed hash table (DHT). In peer-to-peer networks, each node maintains several unidirectional application-layer links to other nodes and forwards lookup messages through such links. This paper proposes the use of bidirectional links to improve the lookup performance in Chord. Every original unidirectional link is replaced by a bidirectional link, and accordingly every node becomes an anti-finger of all its finger nodes. Both theoretical analyses and experimental results indicate that these anti-fingers can greatly improve the lookup performance with very low overhead.
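The bookkeeping behind anti-fingers is simply the reverse link: when a node installs a finger to another node, that node records the first one as an anti-finger, and both directions then become candidates when forwarding a lookup. A toy sketch follows; the class and method names are illustrative, not the paper's.

```python
class ChordNode:
    """Toy Chord node keeping both its fingers and the reverse 'anti-finger' links."""
    def __init__(self, node_id: int):
        self.id = node_id
        self.fingers = set()        # nodes this node points to
        self.anti_fingers = set()   # nodes that point to this node

    def add_finger(self, other: "ChordNode"):
        self.fingers.add(other)
        other.anti_fingers.add(self)    # the reverse link makes the edge bidirectional

    def candidates(self):
        """Both link directions are usable when choosing the next hop for a lookup."""
        return self.fingers | self.anti_fingers
```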
Flow-based measurement is a popular method for various network monitoring usages. However, much flow-exporting software still has too low performance to collect all flows. In this paper, we propose an IPFIX-based flow export engine with an enhanced and extensible data structure, called XFix, on the basis of a GPL tool, nProbe. In the engine, we use an extensible two-dimensional hash table for flow aggregation, which is able to improve the performance of the metering process as well as support bidirectional flows. Experimental results have shown its efficiency in multi-threaded processing.
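One way to realize a two-dimensional hash table that also aggregates both directions of a flow is to key the outer level by a canonically ordered host pair and the inner level by ports and protocol, so packets of either direction land in the same record. The layout below is an illustrative assumption, not XFix's actual structure.

```python
from collections import defaultdict

def flow_keys(src_ip, dst_ip, src_port, dst_port, proto):
    """Order the endpoints canonically so both directions of a flow hash to the same cell."""
    a, b = (src_ip, src_port), (dst_ip, dst_port)
    forward = a <= b
    first, second = (a, b) if forward else (b, a)
    return (first[0], second[0]), (first[1], second[1], proto), forward

class FlowTable:
    """Two-level (host pair -> ports/protocol) hash table aggregating bidirectional flows."""
    def __init__(self):
        self.table = defaultdict(dict)

    def add_packet(self, src_ip, dst_ip, src_port, dst_port, proto, nbytes):
        outer, inner, forward = flow_keys(src_ip, dst_ip, src_port, dst_port, proto)
        rec = self.table[outer].setdefault(inner, {"bytes_fwd": 0, "bytes_rev": 0, "packets": 0})
        rec["bytes_fwd" if forward else "bytes_rev"] += nbytes
        rec["packets"] += 1
        return rec
```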
The Collaborative Filtering (CF) technique has proved to be one of the most successful techniques in recommendation systems in recent years. However, traditional centralized CF systems suffer from limited scalability, as calculation complexity increases rapidly in both time and space when the number of records in the user database increases. Peer-to-peer (P2P) networks have attracted much attention as an alternative architecture for CF systems because of their advantage in scalability. In this paper, the authors propose a decentralized CF algorithm, called PipeCF, based on the distributed hash table (DHT) method, which is the most popular P2P routing approach because of its efficiency, scalability, and robustness. The authors also propose two novel approaches, significance refinement (SR) and unanimous amplification (UA), to improve the scalability and prediction accuracy of the DHT-based CF algorithm. The experimental data show that the DHT-based CF system has better prediction accuracy, efficiency, and scalability than traditional CF systems.
The maintenance overheads of Distributed Hash Table (DHT) topology have recently received considerable attention. This paper presents a novel SHT (Session Heterogeneity Topology) model, in which the DHT is reconstructed with session heterogeneity. SHT clusters nodes by means of session heterogeneity among nodes and selects the stable nodes as the participants of the DHT. Through an evolving process, this model gradually makes the DHT stable and reliable, so the high maintenance overheads of the DHT are effectively controlled. Simulation with real traces of session distribution showed that the maintenance overheads are reduced dramatically and that data availability is greatly improved.
The capacities of the nodes in a peer-to-peer system are strongly heterogeneous, hence one can benefit from distributing the load based on the capacity of the nodes. First, a model is discussed to evaluate the load balancing of the heterogeneous system; then a novel load balancing scheme is proposed based on the concepts of logical servers and the randomized binary tree, and theoretical guarantees are given. Finally, the feasibility of the scheme is demonstrated using extensive simulations.
Load balance is a critical issue of distributed hash tables (DHTs), and previous work shows that there exists an O(log n) imbalance of load in Chord. This paper analyzes the load distribution of Chord, Pastry, and the virtual servers (VS) balancing scheme, and deduces closed-form expressions of the probability density function (PDF) and cumulative distribution function (CDF) of the load in these DHTs. The analysis and simulation show that the load of all these DHTs obeys the gamma distribution with similarly formed parameters.
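For reference, the gamma distribution in generic shape/scale notation (the paper's parameters, expressed in terms of the system size n, may differ) has density and distribution function

```latex
f(x;k,\theta) = \frac{x^{k-1} e^{-x/\theta}}{\Gamma(k)\,\theta^{k}}, \qquad
F(x;k,\theta) = \frac{\gamma\!\left(k,\, x/\theta\right)}{\Gamma(k)}, \qquad x > 0,
```

where Γ is the gamma function and γ is the lower incomplete gamma function.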
Anomaly detection has practical significance for finding unusual patterns in time series. However, most existing algorithms may lose some important information in the time series representation and have high time complexity. Another problem is that privacy preservation was not taken into account in these algorithms. In this paper, we propose a new data structure named Interval Hash Table (IHTable) to capture more of the original information of a time series and design a fast anomaly detection algorithm based on the Interval Hash Table (ADIHT). The key insight of ADIHT is that the distributions of normal subsequences are always similar, while the distributions of anomalous subsequences are, by contrast, distinct and random. Furthermore, to make the proposed algorithm fit for anomaly detection with multiple participants, we propose a privacy-preserving anomaly detection scheme named OP-ADIHT based on ADIHT and homomorphic encryption. Compared with existing privacy-preserving anomaly detection schemes, OP-ADIHT needs less communication and computation cost. Security analysis under different circumstances also shows that OP-ADIHT will not leak the private information of participants. Extensive experimental results show that ADIHT outperforms most anomaly detection algorithms, performs close to the best results in terms of AUC-ROC, and needs the least time.
Distributed key-value storage systems are among the most important types of distributed storage systems currently deployed in data centers. Nowadays, enterprise data centers are facing growing pressure to reduce their power consumption. In this paper, we propose GreenCHT, a reliable power management scheme for consistent-hashing-based distributed key-value storage systems. It consists of a multi-tier replication scheme, a reliable distributed log store, and a predictive power mode scheduler (PMS). Instead of randomly placing the replicas of each object on a number of nodes in the consistent hash ring, we arrange the replicas of objects on non-overlapping tiers of nodes in the ring. This allows the system to fall into various power modes by powering down subsets of servers while not violating data availability. The predictive PMS predicts workloads and adapts to load fluctuation. It cooperates with the multi-tier replication strategy to provide power proportionality for the system. To ensure that the reliability of the system is maintained when replicas are powered down, we redirect writes destined for standby replicas to active servers, which ensures the failure tolerance of the system. GreenCHT is implemented on Sheepdog, a distributed key-value storage system that uses consistent hashing as an underlying distributed hash table. By replaying 12 typical real workload traces collected from Microsoft, the evaluation results show that GreenCHT can provide significant power savings while maintaining the desired performance. We observe that GreenCHT can reduce power consumption by up to 35%-61%.
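A minimal sketch of tiered replica placement on a consistent hash ring: replica i of every object lands on a node of tier i, so powering down an entire tier removes at most one replica per object. The hash function, ring layout, and class names are assumptions for illustration, not GreenCHT's implementation.

```python
import hashlib
from bisect import bisect_right

def _h(key: str) -> int:
    """Map a key or node name to a point on the ring (64-bit slice of an MD5 digest)."""
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

class TieredRing:
    """Consistent-hash ring in which replica i of every object is placed on a tier-i node,
    so a whole tier can be powered down without losing the remaining replicas."""
    def __init__(self, nodes_by_tier):          # e.g. {0: ["a", "b"], 1: ["c", "d"], 2: ["e", "f"]}
        self.rings = {tier: sorted((_h(n), n) for n in nodes)
                      for tier, nodes in nodes_by_tier.items()}

    def replicas(self, key: str):
        """One replica per tier: the clockwise successor of the key's hash on each tier's ring."""
        out = []
        for tier in sorted(self.rings):
            ring = self.rings[tier]
            points = [p for p, _ in ring]
            idx = bisect_right(points, _h(key)) % len(ring)
            out.append(ring[idx][1])
        return out
```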
In the digital information age, distributed file storage technologies like the InterPlanetary File System (IPFS) have gained considerable traction as a means of storing and disseminating media content. Despite the advantages of decentralized storage, the proliferation of decentralized technologies has highlighted the need to address the issue of file ownership. The aim of this paper is to address the critical issues of source verification and digital copyright protection for IPFS image files. To this end, an innovative approach is proposed that integrates blockchain, digital signature, and blind watermarking. Blockchain technology functions as a decentralized and tamper-resistant ledger, recording and verifying the source information of files, thereby establishing credible evidence of file origin. A digital signature serves to authenticate the identity and integrity of the individual responsible for uploading the file, ensuring data security. Furthermore, blind watermarking is employed to embed invisible information within images, thereby safeguarding digital copyrights and enabling file traceability. To further optimize the efficiency of file retrieval within IPFS, a dual-layer Distributed Hash Table (DHT) indexing structure is proposed. This structure divides file index information into a global index layer and a local index layer, significantly reducing retrieval time and network overhead. The feasibility of the proposed approach is demonstrated through practical examples, providing an effective solution to the copyright protection issues associated with IPFS image files.
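A hedged sketch of a dual-layer index of the kind described: a global layer maps a content hash to the cluster holding its detailed records, and a local layer inside that cluster maps the hash to provider peers, so a lookup costs one global step plus one local step. The structure and names are illustrative assumptions, not the paper's exact design.

```python
class DualLayerIndex:
    """Two-level index: global layer (content hash -> cluster) plus per-cluster local layer
    (content hash -> provider peers)."""
    def __init__(self):
        self.global_index = {}      # content hash -> cluster id
        self.local_index = {}       # cluster id -> {content hash -> [peer ids]}

    def publish(self, content_hash: str, cluster: str, peer: str):
        self.global_index[content_hash] = cluster
        self.local_index.setdefault(cluster, {}).setdefault(content_hash, []).append(peer)

    def lookup(self, content_hash: str):
        cluster = self.global_index.get(content_hash)           # one global-layer step
        if cluster is None:
            return []
        return self.local_index.get(cluster, {}).get(content_hash, [])  # one local-layer step
```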
Recently, the peer-to-peer (P2P) search technique has become popular on the Web as an alternative to centralized search due to its high scalability and low deployment cost. However, P2P search systems are known to suffer from the problem of peer dynamics, such as frequent node joins/leaves and document changes, which cause serious performance degradation. This paper presents the architecture of a P2P search system that supports full-text search in an overlay network with peer dynamics. This architecture, namely HAPS, consists of two layers of peers. The upper layer is a DHT (distributed hash table) network interconnected by some super peers (which we refer to as hubs). Each hub maintains distributed data structures called search directories, which can be used to guide the query and to control the search cost. The bottom layer consists of clusters of ordinary peers (called providers), which can receive queries and return relevant results. Extensive experimental results indicate that HAPS can perform searches effectively and efficiently. In addition, the performance comparison illustrates that HAPS outperforms a flat structured system and a hierarchical unstructured system in environments with peer dynamics.
The distributed network architecture and the dynamic change of nodes make the operation of structured peer-to-peer networks unpredictable. This article presents research on the running rules of structured peer-to-peer networks through a mathematical model. The proposed model provides a low-complexity means to estimate the performance of a structured peer-to-peer network from two aspects: the average existence time of a node and the probability of the network returning to a temporarily steady state. On the basis of the results, it can be concluded that the proposed structured peer-to-peer network is suitable for conditions where the frequency of node change is under a limited value, and this value mainly depends on the initialization time of a node. Otherwise, the structured peer-to-peer network can be abstracted as a network queuing system composed of many node queuing systems connected in a mesh, and the relation between the throughput of the node system and that of the network system is analyzed.
Load balancing is a critical issue in peer-to-peer networks. DHTs (distributed hash tables) do not evenly partition the hash-function range, and some nodes get a larger portion of it. The loads of some nodes are as much as O(log n) times the average. In this paper, a low-cost, decentralized algorithm for ID allocation with complete knowledge in a DHT-based system is proposed. It can adjust the system load on nodes' departure. It is proved that the ratio of the longest arc to the shortest arc is no more than 4 with high probability when the network scale increases non-strictly. When the network scale decreases from one stable state to another, the algorithm can repair the unevenness of node distribution. The performance is analyzed in simulation. Simulation results show that update messages occupy only a small amount of network bandwidth.
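The balance metric cited here, the ratio of the longest arc to the shortest arc between consecutive node IDs on the ring, can be computed directly. A small sketch with an assumed 32-bit identifier space:

```python
def arc_ratio(node_ids, ring_size=1 << 32):
    """Ratio of the longest to the shortest arc between consecutive node IDs on the ring
    (assumes all IDs are distinct); values close to 1 indicate an even partition."""
    ids = sorted(node_ids)
    arcs = [(ids[(i + 1) % len(ids)] - ids[i]) % ring_size for i in range(len(ids))]
    return max(arcs) / min(arcs)
```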
A vehicular ad-hoc network (VANET) can be visualized as a network of moving vehicles communicating in an asynchronous and autonomous fashion. Efficient and scalable information dissemination in VANET applications is a major challenge due to the movement of vehicles, which causes unpredictable changes in network topology. The publish/subscribe communication paradigm provides decoupling in time, space, and synchronization between communicating entities, and presents itself as an elegant solution for information dissemination in VANET-like environments. In this paper, we propose an approach for information dissemination which utilizes publish/subscribe and distributed hash table (DHT) based overlay networks. In our approach, we assume a hybrid VANET consisting of stationary info-stations and moving vehicles. These info-stations are installed at every major intersection of the city, and vehicles can take the role of publisher, subscriber, or broker depending upon the context. The info-stations form a DHT-based broker overlay among themselves and act as rendezvous points for related publications and subscriptions. Further, info-stations also assist in locating vehicles that have subscribed to information items. We consider different possible deployments of this hybrid VANET with respect to the number of info-stations and their physical connectivity with each other. We perform simulations to assess the performance of our approach in these different deployment scenarios and discuss their applicability in urban and semi-urban areas.
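The rendezvous idea maps each topic deterministically onto one broker in the DHT overlay, so a publisher and its subscribers independently reach the same info-station. A minimal sketch follows; the hash function and successor rule are assumptions, and the paper's overlay details are not reproduced.

```python
import hashlib

def rendezvous_broker(topic: str, brokers: list[str]) -> str:
    """Map a topic to the broker (info-station) responsible for it on the DHT overlay:
    hash the topic and pick the clockwise successor among the brokers' hash points."""
    ring = sorted((int(hashlib.sha1(b.encode()).hexdigest(), 16), b) for b in brokers)
    t = int(hashlib.sha1(topic.encode()).hexdigest(), 16)
    for point, broker in ring:
        if point >= t:
            return broker
    return ring[0][1]   # wrap around the ring
```

Both the publish and the subscribe operations call the same mapping, so related publications and subscriptions meet at one rendezvous broker.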