Abstract
In cloud-computing environments, massive-data storage systems accumulate large amounts of redundant data, integrated into the storage space in the form of high-dimensional features. This imposes a heavy storage load, so a low-load storage method for massive data is needed. The traditional approach uses a cloud-computing data-storage block decompression algorithm to achieve low-load storage of massive data; it requires the data to be classified in advance, which increases computational complexity and incurs high overhead for storing dynamic replicas. This paper proposes a low-load storage algorithm for massive cloud-computing data based on Dopplerlet adaptive matched pyramid decomposition and K-L feature compression. Exploiting the global optimality of the Dopplerlet transform in searching for the best basis function, the algorithm first extracts the information features of the massive cloud-stored data and filters out redundant information as preprocessing. The massive data are then subjected to Dopplerlet adaptive matched pyramid decomposition to obtain the spectral characteristics of the discrete samples, and K-L feature compression is applied to reduce the data-storage load in the cloud-computing environment, thereby realizing the improved algorithm. Simulation results show that the improved algorithm effectively increases massive-data storage capacity in cloud-computing environments and reduces storage execution overhead and load.
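The K-L (Karhunen–Loève) feature compression step described above can be illustrated with a minimal sketch: project zero-mean samples onto the top-k eigenvectors of their covariance matrix and keep only the projected coefficients. This is a generic K-L/PCA illustration, not the paper's implementation; the function names and the 8-dimensional toy data are hypothetical.

```python
import numpy as np

def kl_compress(X, k):
    """Karhunen-Loeve (K-L) feature compression: project the centered
    samples onto the top-k eigenvectors of their covariance matrix."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Covariance of the centered samples (features x features)
    cov = np.cov(Xc, rowvar=False)
    # Eigendecomposition of the symmetric covariance matrix;
    # eigh returns eigenvalues in ascending order, so reverse them
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]
    basis = vecs[:, order[:k]]   # top-k principal directions (d x k)
    Y = Xc @ basis               # compressed representation (n x k)
    return Y, basis, mean

def kl_reconstruct(Y, basis, mean):
    """Approximate reconstruction from the compressed features."""
    return Y @ basis.T + mean

# Toy demo: 100 samples of 8-dimensional data compressed to 3 dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
Y, basis, mean = kl_compress(X, k=3)
X_hat = kl_reconstruct(Y, basis, mean)
print(Y.shape)  # (100, 3)
```

Storing `Y`, the k basis vectors, and the mean in place of `X` is what reduces the storage load; the reconstruction error is bounded by the discarded eigenvalues.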
Source
Computer Simulation (《计算机仿真》)
CSCD
PKU Core Journal (北大核心)
2016, No. 4, pp. 390-394 (5 pages)
Funding
Natural Science Project of Changchun Normal University (Grant No. 合字[2014]第010号)
Keywords
Cloud computing
Massive data
Storage
Feature compression