Journal articles
2 articles found
1. RFID unreliable data filtering by integrating adaptive sliding window and Euclidean distance (cited 4 times)
Authors: Li-Lan Liu, Zi-Long Yuan, Xue-Wei Liu, Cheng Chen, Ke-Sheng Wang. Advances in Manufacturing (SCIE, CAS), 2014, Issue 2, pp. 121-129 (9 pages).
By improving the sliding-window redundant-data filtering used for unreliable radio frequency identification (RFID) data, a filter that integrates a self-adaptive sliding window with Euclidean distance is proposed. Because unreliable RFID data contain a large proportion of redundant readings, the input data are first split so that the redundant readings become the main filtering target, and these are removed by the Euclidean-distance-based filter. Comparison with previous research shows that the proposed method improves the accuracy of unreliable RFID data filtering and greatly reduces the redundant reading rate. (An illustrative sketch of this filtering scheme is given after the keyword list.)
Keywords: radio frequency identification (RFID); adaptive sliding window; Euclidean distance; redundant data
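The abstract only outlines the approach, so the following minimal sketch shows one way a Euclidean-distance test can be combined with a self-adjusting sliding window to drop redundant RFID readings. The feature vector (timestamp, RSSI), the distance threshold, and the window-resizing rule are assumptions made for this illustration, not details taken from the paper.

```python
"""Illustrative sketch only: a redundancy filter combining a self-adaptive
sliding window with a Euclidean-distance test, in the spirit of the approach
the abstract describes. Features, threshold and resizing rule are assumed."""

import math
from collections import deque


class AdaptiveEuclideanFilter:
    def __init__(self, init_window=8, min_window=2, max_window=64, dist_threshold=1.0):
        self.window = deque(maxlen=init_window)   # recent readings kept for comparison
        self.min_window = min_window
        self.max_window = max_window
        self.dist_threshold = dist_threshold      # assumed redundancy threshold

    @staticmethod
    def _distance(a, b):
        # Euclidean distance between two feature vectors, e.g. (timestamp, rssi).
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def _resize(self, was_redundant):
        # Assumed adaptation rule: grow the window when redundancy is frequent,
        # shrink it when fresh readings dominate.
        size = self.window.maxlen
        size = min(size + 1, self.max_window) if was_redundant else max(size - 1, self.min_window)
        self.window = deque(self.window, maxlen=size)

    def accept(self, reading):
        """Return True if the reading is kept, False if filtered as redundant."""
        redundant = any(self._distance(reading, old) < self.dist_threshold
                        for old in self.window)
        self._resize(redundant)
        if not redundant:
            self.window.append(reading)
        return not redundant


# Example usage: a near-identical repeat of the first reading is dropped.
f = AdaptiveEuclideanFilter()
print(f.accept((100.0, -52.0)))   # True  -> kept
print(f.accept((100.2, -52.1)))   # False -> filtered as redundant
print(f.accept((130.0, -70.0)))   # True  -> kept
```

Growing the window while redundant readings cluster keeps more history available for comparison; the actual adaptation policy in the paper may differ from this assumed rule.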
2. Random Forests Algorithm Based Duplicate Detection in On-Site Programming Big Data Environment (cited 1 time)
Authors: Qianqian Li, Meng Li, Lei Guo, Zhen Zhang. Journal of Information Hiding and Privacy Protection, 2020, Issue 4, pp. 199-205 (7 pages).
On-site programming big data refers to the massive data generated during software development; it is real-time, complex, and difficult to process, so data cleaning is essential. Duplicate record detection is an important cleaning step that saves storage resources and improves data consistency. To address the shortcomings of the traditional Sorted Neighborhood Method (SNM) and the difficulty of detecting duplicates in high-dimensional data, an optimized algorithm based on random forests with a dynamic, adaptive window size is proposed. Efficiency is improved by refining the key-selection method, reducing the dimensionality of the data set, and using a sliding window whose size adapts during the scan. Experimental results show that the improved SNM algorithm performs better and achieves higher accuracy. (A sketch of an adaptive-window SNM pass is given after the keyword list.)
Keywords: on-site programming big data; duplicate record detection; random forests; adaptive sliding window
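The abstract describes an SNM variant whose window size adapts during the scan and whose match decision comes from a random-forest classifier. The sketch below illustrates only the adaptive-window SNM pass under assumed details: a simple field-overlap score stands in for the paper's random-forest decision, and the key function, threshold, and window bounds are invented for the example.

```python
"""Illustrative sketch only: Sorted Neighborhood Method (SNM) with an
adaptive, variable-size window. A field-overlap score replaces the paper's
random-forest match classifier; all parameters here are assumptions."""


def similarity(a, b):
    # Stand-in match scorer: fraction of identical fields (the paper would
    # use a trained random-forest classifier here instead).
    shared = sum(1 for x, y in zip(a, b) if x == y)
    return shared / max(len(a), len(b))


def snm_adaptive(records, key, min_w=2, max_w=10, threshold=0.6):
    """Return index pairs judged to be duplicates.

    records: list of field tuples; key: function mapping a record to its sort key.
    """
    order = sorted(range(len(records)), key=lambda i: key(records[i]))
    duplicates, w = [], min_w
    for pos, i in enumerate(order):
        hit = False
        for j in order[pos + 1 : pos + w]:            # compare within the current window
            if similarity(records[i], records[j]) >= threshold:
                duplicates.append((i, j))
                hit = True
        # Assumed adaptation rule: widen the window while matches keep
        # appearing, narrow it again when a window produces none.
        w = min(w + 1, max_w) if hit else max(w - 1, min_w)
    return duplicates


# Example usage: the two "john smith" rows are flagged as likely duplicates.
rows = [("john smith", "NY", "1980"), ("john smith", "NY", "1981"),
        ("jane doe", "LA", "1990")]
print(snm_adaptive(rows, key=lambda r: r[0]))   # [(0, 1)]
```

Sorting once by the chosen key and then comparing only records inside a small, adaptive window keeps the number of pairwise comparisons far below the quadratic cost of comparing every pair, which is the core idea SNM-style methods rely on.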