Funding: Supported by the National Natural Science Foundation of China (No. 60673001) and the State Key Development Program of Basic Research of China (No. 2004CB318203).
Abstract: Based on variable-sized chunking, this paper proposes a content-aware chunking scheme, called CAC, that does not assume fully random file contents but considers the characteristics of the file types. CAC uses a candidate anchor histogram and file-type-specific knowledge to refine how anchors are determined when performing de-duplication of file data, and it enforces the selected average chunk size. CAC finds more chunks, which in turn produces smaller average chunks and a better reduction in data. We present a detailed evaluation of CAC, and the experimental results show that this scheme can improve the compression ratio of chunking for file types whose bytes are not randomly distributed (from 11.3% to 16.7% depending on the dataset) and improve the write throughput by 9.7% on average.
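For context, the sketch below shows generic content-defined (variable-sized) chunking, where anchors are chosen wherever a rolling hash over a sliding window matches a mask. The window size, mask, and chunk-size bounds are illustrative assumptions, not CAC's parameters; CAC additionally uses the candidate anchor histogram and file-type knowledge to decide which positions qualify as anchors.

```python
# Minimal sketch of content-defined chunking with a Rabin-Karp style rolling
# hash. All constants are assumed for illustration.
WINDOW = 48                    # bytes covered by the rolling hash window
PRIME = 31                     # hash base
MOD = 1 << 32                  # keep the hash in 32 bits
MASK = (1 << 13) - 1           # zero low 13 bits => ~8 KiB average chunks
MIN_CHUNK = 2 * 1024           # avoid very small chunks
MAX_CHUNK = 64 * 1024          # force a boundary eventually
POW_W = pow(PRIME, WINDOW - 1, MOD)   # weight of the byte leaving the window

def chunk_boundaries(data: bytes) -> list[int]:
    """Return exclusive end offsets of content-defined chunks."""
    boundaries, start, h = [], 0, 0
    for i, b in enumerate(data):
        if i - start >= WINDOW:
            # drop the byte that slides out of the window
            h = (h - data[i - WINDOW] * POW_W) % MOD
        h = (h * PRIME + b) % MOD
        length = i - start + 1
        at_anchor = length >= WINDOW and (h & MASK) == 0
        if (at_anchor and length >= MIN_CHUNK) or length >= MAX_CHUNK:
            boundaries.append(i + 1)   # anchor found: close the chunk here
            start, h = i + 1, 0
    if start < len(data):
        boundaries.append(len(data))   # trailing partial chunk
    return boundaries
```

Because boundaries depend only on local content, an insertion early in a file shifts at most a few chunks rather than all of them, which is what makes variable-sized chunking attractive for de-duplication.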
Funding: Supported in part by Hankuk University of Foreign Studies' Research Fund for 2023 and in part by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT, Korea (No. 2021R1F1A1045933).
Abstract: The Internet of Things (IoT) and cloud technologies have encouraged massive data storage at central repositories. Software-defined networks (SDN) support the processing of data and restrict the transmission of duplicate values. It is necessary to use a data de-duplication mechanism to reduce communication costs and storage overhead. Existing state-of-the-art schemes suffer from computational overhead due to deterministic or random tree-based tag generation, which further increases as the file size grows. This paper presents an efficient file-level de-duplication scheme (EFDS) in which the cost of creating tags is reduced by employing a hash table with a key-value pair for each block of the file. Further, an algorithm for hash table-based duplicate block identification and storage (HDBIS) is presented, based on fingerprints, which maintains a linked list of similar duplicate blocks at the same index. Hash tables normally have constant time complexity for looking up, inserting, and deleting stored data regardless of the input size. The experimental results show that the proposed EFDS scheme performs better than its counterparts.
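As a rough illustration of the idea, the sketch below stores blocks in a fingerprint-keyed hash table and chains blocks that land on the same index, so a duplicate block is detected by a bucket lookup rather than by walking a tag tree. The fixed 4 KiB block size, SHA-256 fingerprints, and bucket layout are assumptions for illustration; this is not the paper's EFDS/HDBIS implementation.

```python
# Minimal sketch of hash-table-based block de-duplication (assumed design).
import hashlib
from collections import defaultdict

BLOCK_SIZE = 4096          # assumed fixed block size
NUM_BUCKETS = 1 << 16      # assumed hash-table width

class BlockStore:
    def __init__(self) -> None:
        # each bucket chains (fingerprint, block) pairs that hash to the same
        # index, analogous to a linked list of duplicate candidates per slot
        self.buckets: dict[int, list[tuple[str, bytes]]] = defaultdict(list)

    def put(self, block: bytes) -> str:
        """Store the block only if no identical block exists; return its fingerprint."""
        fp = hashlib.sha256(block).hexdigest()
        idx = int(fp, 16) % NUM_BUCKETS
        if all(stored_fp != fp for stored_fp, _ in self.buckets[idx]):
            self.buckets[idx].append((fp, block))
        return fp

    def add_file(self, data: bytes) -> list[str]:
        """Split a file into blocks, de-duplicate them, and return the file recipe."""
        return [self.put(data[off:off + BLOCK_SIZE])
                for off in range(0, len(data), BLOCK_SIZE)]
```

Since both insertion and lookup touch only one bucket, the per-block cost stays roughly constant as the file grows, in contrast to tag-tree schemes whose cost rises with file size.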
Abstract: Evidence-based literature reviews play a vital role in contemporary research, facilitating the synthesis of knowledge from multiple sources to inform decision-making and scientific advancements. Within this framework, de-duplication emerges as a part of the process for ensuring the integrity and reliability of evidence extraction. This opinion review delves into the evolution of de-duplication, highlights its importance in evidence synthesis, explores various de-duplication methods, discusses evolving technologies, and proposes best practices. By addressing ethical considerations, this paper emphasizes the significance of de-duplication as a cornerstone of quality in evidence-based literature reviews.