For name-based routing/switching in NDN, the key challenges are managing large-scale forwarding tables, looking up long names of variable length, and handling frequent updates. Hashing combined with prefix-length detection is a straightforward yet efficient solution, and a binary search over prefix lengths reduces the number of hash probes required in the worst case. However, guaranteeing a correct search path in such a scheme requires either backtracking or redundantly storing some prefixes, which introduces performance or memory overhead. In this paper, we study binary search over prefix lengths in depth and propose a novel mechanism that ensures a correct search path with neither additional backtracking cost nor redundant memory consumption. Along any binary search path, a Bloom filter is employed at each branching point to verify whether the corresponding prefix is present, instead of storing that prefix there. In this way, we obtain a significant improvement in memory efficiency, at the cost of a Bloom-filter check before each hash probe. Evaluation experiments on both real-world and randomly synthesized data sets clearly demonstrate the advantages of our approach.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 61472130 and 61702174) and a project funded by the China Postdoctoral Science Foundation.
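To make the idea concrete, below is a minimal Python sketch of hash-based longest-prefix matching via binary search over name-prefix lengths, with a Bloom filter consulted at each branching point before the hash probe. It is an illustrative baseline under assumed details, not the paper's implementation: the class names (BloomFilter, NameFib), the policy of inserting every shorter prefix of a FIB entry into the Bloom filter, and the final fallback to shorter lengths (i.e., the backtracking that the proposed mechanism is designed to eliminate) are all assumptions made for the sake of a self-contained, correct example.

```python
from hashlib import blake2b


class BloomFilter:
    """A small Bloom filter over byte strings (illustrative parameters)."""

    def __init__(self, size_bits=1 << 16, num_hashes=4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive k bit positions by hashing the item together with an index.
        for i in range(self.k):
            digest = blake2b(item + i.to_bytes(2, "big"), digest_size=8).digest()
            yield int.from_bytes(digest, "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))


class NameFib:
    """Hash-table FIB searched by binary search over prefix lengths.

    A Bloom filter holds every FIB prefix and all of its shorter prefixes,
    so each branching point can be checked cheaply before the hash probe.
    """

    def __init__(self, max_components=8):
        self.max_len = max_components
        self.fib = {}               # prefix tuple -> next hop
        self.guide = BloomFilter()  # membership hints consulted at branch points

    @staticmethod
    def _key(components):
        return "/".join(components).encode()

    def insert(self, prefix, next_hop):
        comps = tuple(prefix.strip("/").split("/"))
        self.fib[comps] = next_hop
        # Register the prefix and all of its shorter prefixes as guide entries,
        # so a Bloom miss at length m rules out any match of length >= m.
        for i in range(1, len(comps) + 1):
            self.guide.add(self._key(comps[:i]))

    def longest_prefix_match(self, name):
        comps = name.strip("/").split("/")
        return self._search(comps, 1, min(len(comps), self.max_len))

    def _search(self, comps, lo, hi):
        if lo > hi:
            return None
        mid = (lo + hi) // 2
        key = self._key(comps[:mid])
        if key not in self.guide:                  # Bloom check before the hash probe
            return self._search(comps, lo, mid - 1)
        hit = self.fib.get(tuple(comps[:mid]))     # verify in the hash table
        longer = self._search(comps, mid + 1, hi)  # a longer match may exist: go right
        if longer is not None:
            return longer
        if hit is not None:
            return hit
        # Fall back to shorter lengths; this backtracking step is exactly the
        # overhead the paper's mechanism avoids, kept here only for correctness.
        return self._search(comps, lo, mid - 1)


# Example usage with hypothetical names and next hops.
fib = NameFib()
fib.insert("/edu/univ", "face1")
fib.insert("/edu/univ/cs/video", "face2")
print(fib.longest_prefix_match("/edu/univ/cs/data/seg0"))  # -> "face1"
```

In this baseline, a Bloom false positive only costs extra probes rather than a wrong answer, but correctness still relies on the fallback search over shorter lengths; the mechanism proposed in the paper targets precisely that residual cost.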