Test data compression and test resource partitioning (TRP) are essential to reduce the amount of test data in system-on-chip testing. A novel variable-to-variable-length compression code, advanced frequency-directed run-length (AFDR) coding, is designed. Different from frequency-directed run-length (FDR) codes, AFDR encodes both 0-runs and 1-runs and assigns the same codewords to runs of equal length. It also modifies the codewords for 00 and 11 to improve compression performance. Experimental results for ISCAS 89 benchmark circuits show that AFDR codes achieve a higher compression ratio than FDR and other compression codes.
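For concreteness, here is a minimal sketch of the baseline FDR encoding that AFDR extends, using the published FDR group structure (group A_j covers run lengths 2^j - 2 through 2^(j+1) - 3, with a j-bit prefix and a j-bit tail). The AFDR refinements described above (1-runs, shared codewords, modified 00/11 codes) are not reproduced here.

```python
def fdr_encode(bits):
    """Baseline FDR coding: each run of 0s ending in a 1 becomes one codeword.

    Group A_j has a j-bit prefix 1^(j-1)0 and a j-bit tail giving the
    run's offset inside the group (A_1: runs 0-1, A_2: runs 2-5, ...).
    """
    out, run = [], 0
    for b in bits:
        if b == "0":
            run += 1
            continue
        j = 1
        while run > 2 ** (j + 1) - 3:   # locate the group holding this run
            j += 1
        prefix = "1" * (j - 1) + "0"
        tail = format(run - (2 ** j - 2), "0%db" % j)
        out.append(prefix + tail)
        run = 0
    return "".join(out)                  # a trailing 0-run is ignored in this sketch

# e.g. fdr_encode("0000000001") -> "110011" (a run of nine 0s falls in group A_3)
```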
This paper presents a new test data compression/decompression method for SoC testing, called hybrid run-length codes. The method makes a full analysis of the factors that influence test parameters: compression ratio, test application time, and area overhead. To improve the compression ratio, the new method is based on variable-to-variable run-length codes, and a novel algorithm is proposed to reorder the test vectors and fill the unspecified bits in the pre-processing step. With a novel on-chip decoder, hybrid run-length codes achieve low test application time and low area overhead. Finally, an experimental comparison on ISCAS 89 benchmark circuits validates the proposed method.
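The abstract does not specify its filling algorithm, so the following is only a sketch of one generic pre-processing heuristic: replacing each unspecified bit with the previous specified bit, which lengthens runs before run-length coding.

```python
def fill_dont_cares(test_cube):
    """Replace each unspecified bit 'x' with the previous specified bit so
    runs become longer and variable-to-variable run-length coding
    compresses better (generic heuristic, not the paper's own algorithm)."""
    out, prev = [], "0"
    for b in test_cube:
        prev = b if b in "01" else prev
        out.append(prev)
    return "".join(out)

# fill_dont_cares("1xx0x1xx") -> "11100111"
```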
In order to save storage resources in a radar recognition system, schemes for reducing data storage and for correlation discrimination of radar signals based on wavelet packets were proposed. Experimental results at various signal-to-noise ratios were given, and they support the validity of the data-reduction method. Using an optimal basis yields a higher recognition rate than using a rigid wavelet basis.
NC code or an STL file can be generated directly from measurement data in a fast reverse-engineering mode. Compressing the massive data from a laser scanner is the key to the new mode. An adaptive compression method based on a triangulated-surface model is put forward. Normal-vector angles between triangles are computed to find prime vertices for removal. A ring data structure is adopted to store the massive data effectively; it allows efficient retrieval of all neighboring vertices and triangles of a given vertex. To avoid long and thin triangles, a new re-triangulation approach based on normalized minimum vertex distance is proposed, in which the vertex distance and the interior angles of triangles are considered. Results indicate that the compression method has high efficiency and reliable precision. The method can be applied in fast reverse engineering to acquire an optimal subset of the original massive data.
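A small sketch of the normal-angle test the method relies on, assuming triangles are given as 3x3 NumPy arrays of vertex coordinates; the vertex-removal and ring-structure bookkeeping are omitted.

```python
import numpy as np

def normal_angle_deg(tri_a, tri_b):
    """Angle in degrees between the unit normals of two triangles; small
    angles flag nearly coplanar neighbours, whose shared vertex is a
    candidate (prime vertex) for removal."""
    def unit_normal(tri):
        n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
        return n / np.linalg.norm(n)
    c = float(np.dot(unit_normal(tri_a), unit_normal(tri_b)))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))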
Vector quantization (VQ) is an important data compression method. The key step in VQ encoding is to find the closest vector among N vectors for a given feature vector. Many classical linear search algorithms take O(N) distance computations between two vectors. A quantum VQ iteration and a corresponding quantum VQ encoding algorithm that takes O(√N) steps are presented in this paper. The unitary operation of distance computing can be performed on a number of vectors simultaneously because a quantum state exists in a superposition of states. The quantum VQ iteration comprises three oracles, whereas many quantum algorithms, such as Shor's factoring algorithm and Grover's algorithm, have only one oracle. An entangled state is generated and used, whereas the state in Grover's algorithm is not entangled. The quantum VQ iteration is a rotation over a subspace, whereas the Grover iteration is a rotation over the global space. The quantum VQ iteration thus extends the Grover iteration to more complex searches that require more oracles, and the method is universal.
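For contrast with the O(√N) quantum iteration, the classical O(N) full-search step it accelerates is a single pass over the codebook; a sketch assuming NumPy arrays:

```python
import numpy as np

def vq_encode(x, codebook):
    """Classical full-search VQ encoding: index of the codeword nearest to
    feature vector x under squared Euclidean distance, O(N) in the
    codebook size N (codebook has shape (N, d), x has shape (d,))."""
    return int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
```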
Process data compression and trending are essential for improving control system performance. The Swing Door Trending (SDT) algorithm is well designed to adapt to the process trend while retaining the merit of simplicity, but it cannot handle outliers or adapt to fluctuations of actual data. An improved SDT (ISDT) algorithm is proposed in this paper. The effectiveness and applicability of the ISDT algorithm are demonstrated by computations on both synthetic and real process data. By applying an adaptive recording limit as well as outlier-detecting rules, a higher compression ratio is achieved and outliers are identified and eliminated. The fidelity of the algorithm is also improved. It can be used in both online and batch modes, and integrated into existing software packages without changes.
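A minimal sketch of the classic SDT baseline (not the paper's ISDT extensions): a point is archived only when the swinging "doors", pivoted ±dev around the last archived value, can no longer enclose the incoming data.

```python
def sdt_compress(times, values, dev):
    """Classic swinging-door trending: keep a subset of points such that
    linear interpolation between kept points stays within +/- dev of the
    original data. times must be strictly increasing."""
    t0, v0 = times[0], values[0]
    kept = [(t0, v0)]
    s_up, s_low = float("-inf"), float("inf")
    prev = (t0, v0)
    for t, v in zip(times[1:], values[1:]):
        s_up = max(s_up, (v - (v0 + dev)) / (t - t0))    # upper door swings up
        s_low = min(s_low, (v - (v0 - dev)) / (t - t0))  # lower door swings down
        if s_up > s_low:             # doors crossed: tolerance band violated
            kept.append(prev)        # archive the last point that still fit
            t0, v0 = prev
            s_up = (v - (v0 + dev)) / (t - t0)           # restart both doors
            s_low = (v - (v0 - dev)) / (t - t0)
        prev = (t, v)
    kept.append(prev)                # always keep the final point
    return kept
```

On flat data the doors never cross, so long stretches collapse to their endpoints; the ISDT extensions above add adaptive limits and outlier rejection on top of this loop.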
A real-time data compression wireless sensor network based on the Lempel-Ziv-Welch (LZW) encoding algorithm is designed for the increasing data volume of terminal nodes when using ZigBee for long-distance wireless communication. The system consists of a terminal node, a router, a coordinator, and an upper computer. The terminal node is responsible for storing and sending the collected data after compressing it with the LZW algorithm; the router is responsible for relaying data in the wireless network; the coordinator is responsible for sending the received data to the upper computer. For the network functions, the development and configuration of the CC2530 chips on the terminal, router, and coordinator nodes are completed using the Z-Stack protocol stack, and the network is successfully organized. Simulation analysis and test verification show that the system realizes wireless acquisition and storage of remote data and reduces the network occupancy rate through data compression, which has practical value and application prospects.
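The LZW step itself is the standard dictionary coder; a compact sketch of the encoder (the CC2530 firmware details are beyond the abstract's scope):

```python
def lzw_encode(data: bytes) -> list:
    """Classic LZW: grow a dictionary of previously seen byte strings and
    emit one integer code per longest known match."""
    table = {bytes([i]): i for i in range(256)}  # seed with single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                    # extend the current match
        else:
            out.append(table[w])      # emit code for the longest match
            table[wc] = len(table)    # learn the new string
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out
```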
Due to the large scale and complexity of civil infrastructures, structural health monitoring typically requires a substantial number of sensors, which consequently generate huge volumes of sensor data. Innovative sensor data compression techniques are highly desired to facilitate efficient data storage and remote retrieval of sensor data. This paper presents a vibration sensor data compression algorithm based on the Differential Pulse Code Modulation (DPCM) method and considers the effects of signal distortion due to lossy data compression on structural system identification. The DPCM system concerned consists of two primary components: a linear predictor and a quantizer. For the DPCM system considered in this study, the least-squares method is used to derive the linear predictor coefficients and a Jayant quantizer is used for scalar quantization. A 5-DOF model structure is used as the prototype structure in the numerical study. Numerical simulation was carried out to study the performance of the proposed DPCM-based data compression algorithm as well as its effect on the accuracy of structural identification, including modal parameters and second-order structural parameters such as stiffness and damping coefficients. It is found that the DPCM-based sensor data compression method is capable of reducing the raw sensor data size to a significant extent while having a minor effect on the modal parameters as well as the second-order structural parameters identified from reconstructed sensor data.
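A hedged sketch of the DPCM encode loop: predict each sample from already reconstructed samples and quantize the residual. A fixed uniform quantizer stands in for the paper's adaptive Jayant quantizer, and the least-squares fit of the predictor coefficients is assumed to have been done beforehand.

```python
import numpy as np

def dpcm_encode(x, coeffs, step):
    """DPCM with a linear predictor over reconstructed history and a
    uniform scalar quantizer (stand-in for the Jayant quantizer)."""
    p = len(coeffs)
    recon = np.zeros(len(x))
    codes = np.zeros(len(x), dtype=int)
    for n in range(len(x)):
        hist = [recon[n - 1 - k] if n - 1 - k >= 0 else 0.0 for k in range(p)]
        pred = float(np.dot(coeffs, hist))
        codes[n] = int(round((x[n] - pred) / step))  # quantized residual
        recon[n] = pred + codes[n] * step            # what the decoder rebuilds
    return codes, recon
```

Predicting from the reconstructed history rather than the raw samples keeps encoder and decoder in lockstep, so quantization error does not accumulate.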
The covert channel based on packet ordering is a hot research topic. Encryption technology alone is not enough to protect the security of both sides of a communication; a covert channel must also hide the transmitted data and protect the content of communication. Traditional methods usually use proxy technology, such as Tor anonymity techniques, to hide the communicating parties. However, because establishing proxy communication consumes traffic, the communication capacity is reduced, and in recent years Tor has often had vulnerabilities that led to the leakage of secret information. In this paper, the covert channel model based on packet ordering is applied to a distributed system, and a distributed covert channel of packet ordering enhancement model based on data compression (DCCPOEDC) is proposed. Data compression algorithms are used to reduce the amount of data and the transmission time. The distributed system and the data compression algorithms can weaken the statistical detectability of the hidden information; furthermore, they enhance the unknowability of the data and weaken the time-distribution characteristics of the data packets. This paper selects a compression algorithm suitable for DCCPOEDC and analyzes DCCPOEDC in terms of anonymity, transmission efficiency, and transmission performance. The analysis shows that DCCPOEDC optimizes the covert channel of packet ordering, saving transmission time and improving concealment compared with the original covert channel.
Shannon gave the sampling theorem for band-limited functions in 1948, but Shannon's theorem cannot meet the needs of modern high technology. This paper gives a new high-speed sampling theorem with a fast convergence rate, high precision, and a simple algorithm. A practical example is used to verify its efficiency.
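For reference, the classical reconstruction formula the paper sets out to improve upon, for a signal band-limited to W hertz and sampled at rate 2W:

```latex
x(t) = \sum_{n=-\infty}^{\infty} x\!\left(\frac{n}{2W}\right)
       \operatorname{sinc}(2Wt - n),
\qquad
\operatorname{sinc}(u) = \frac{\sin \pi u}{\pi u}
```

The slow decay of the sinc kernel is what makes this series converge slowly in practice, which motivates faster-converging sampling theorems such as the one proposed.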
The wireless sensor network (WSN) plays an important role in monitoring the environment near a harbor, keeping nearby ships out of danger and optimizing the utilization of limited sea routes. Based on the historical data collected by buoys with sensing capacities, a novel data compression algorithm called adaptive time piecewise constant vector quantization (ATPCVQ) is proposed to utilize the principal components. The proposed system is capable of lowering the budget of wireless communication and enhancing the lifetime of sensor nodes subject to a constraint on data precision. Furthermore, the proposed algorithm is verified using practical data from the Qinhuangdao Port of China.
A new real-time data compression algorithm, combining segment-normalized logical compression and a so-called "one taken from two samples" method, is presented for broadband high-dynamic seismic recordings. The algorithm was tested by numerical simulation and on observed data. The results demonstrate that when the two methods are used together, total errors in the recovered data are less than 1% of the original data in the time domain and 0.5% in the frequency domain, and the compression ratio is greater than 3. Data compression software based on the algorithm has been used in the GDS-1000 portable broadband digital seismograph.
The water vapor monitoring system based on the Beidou satellite is a new detection system in the meteorological department, which increases the amount of received detection data and the pressure on data storage and transmission. Here, we try to use data compression to relieve this pressure. The compression software of the Beidou water vapor monitoring system can be designed as three components: real-time compression software, check compression software, and manual compression software, which respectively complete the compression tasks of real-time receiving, in-time checking, and separate compression, thereby forming a complete compression system. Taking the design of the manual compression software as a guide, and developing in the C language, a compression test on the original received data was conducted. The test proves that the system can carry out batch automatic compression with a compression rate of up to 30%, which achieves the goal of saving space to a degree.
The HT-7 superconducting tokamak at the Institute of Plasma Physics of the Chinese Academy of Sciences is an experimental device for fusion research in China. The main task of the HT-7 data acquisition system is to acquire, store, analyze, and index the data, whose volume reaches hundreds of millions of bytes. Beyond hardware and software support, a large capacity for data storage, processing, and transfer is an even more important problem, and the key technology for dealing with it is the data compression algorithm. In this paper, the data format in HT-7 is introduced first, and then the data compression algorithm LZO, a portable lossless data compression algorithm written in ANSI C, is analyzed. This compression algorithm, which fits well with data acquisition and distribution in nuclear fusion experiments, offers fairly fast compression and extremely fast decompression. Finally, a performance evaluation of the LZO application in HT-7 is given.
Aiming at the characteristics of seismic exploration signals, this paper studies image coding technology, coding standards, and algorithms, and puts forward a new hybrid coding scheme for seismic data compression. Based on it, a set of seismic data compression software has been developed.
Synthetic aperture radar (SAR) is portrayed as a multiple-access channel. An information theory approach is applied to the SAR imaging system, and the information content about a target that can be extracted from its radar image is evaluated by the average mutual information measure. A conditional (transition) probability density function (PDF) of the SAR imaging system is derived by analyzing the system, and a closed form of the information content is found. It is shown that the information content obtained by the SAR imaging system from an independent sample of echoes decreases, while the total information content increases, as the number of looks increases. Because the total average mutual information is also used to define a measure of radiometric resolution for radar images, it is shown that the radiometric resolution of a radar image of terrain is improved by spatial averaging. In addition, the imaging process and the data compression process for SAR are each treated as an independent generalized communication channel. The effects of data compression upon radiometric resolution for SAR are studied and some conclusions are obtained.
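The average mutual information measure referred to above is the standard one between the target quantity X and the image observation Y:

```latex
I(X;Y) = \iint p(x)\, p(y \mid x)\,
         \log_2 \frac{p(y \mid x)}{p(y)} \, dx \, dy
```

The paper's derived conditional PDF plays the role of p(y | x), which is what makes the closed-form evaluation possible.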
In this paper, three techniques for compressing classified satellite cloud images with no distortion are presented: line run coding, quadtree DF (Depth-First) representation, and H coding. The first two were developed by others and the third by the authors. A comparison of their compression rates is given at the end of this paper. Further application of these image compression techniques to satellite data and other meteorological data looks promising.
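Of the three schemes, line run coding is the simplest; a sketch of one row's lossless encoding for a classified (label) image:

```python
def line_runs(row):
    """Run-length coding of one image row: consecutive equal class labels
    collapse to (label, count) pairs. Assumes a non-empty row."""
    runs = []
    label, count = row[0], 0
    for v in row:
        if v == label:
            count += 1
        else:
            runs.append((label, count))
            label, count = v, 1
    runs.append((label, count))
    return runs

# line_runs([3, 3, 3, 1, 1, 4]) -> [(3, 3), (1, 2), (4, 1)]
```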
Systems-on-a-chip with intellectual property cores need a large volume of data for testing. The large volume of test data requires a long testing time and a large test data memory. Therefore, new techniques are needed to optimize the test data volume, decrease the testing time, and overcome the ATE memory limitation for SOC designs. This paper presents a new test compression method for intellectual property core-based systems-on-chip. The proposed method is based on new split-data variable length (SDV) codes that are designed using split options along with identification bits in a string of test data. This paper analyses the reduction of test data volume, testing time, run time, and size of memory required in the ATE, as well as the improvement of the compression ratio. Experimental results for ISCAS 85 and ISCAS 89 benchmark circuits show that SDV codes outperform other compression methods with the best compression ratio for test data compression. The decompression architecture for SDV codes is also presented for decoding the compressed bits. The proposed scheme shows that SDV codes adapt to variations in the input test data stream.
Redundancy elimination techniques are extensively investigated to reduce storage overheads for cloud-assisted health systems. Deduplication eliminates the redundancy of duplicate blocks by storing one physical instance referenced by multiple duplicates. Delta compression is usually regarded as a complementary technique to deduplication that further removes the redundancy of similar blocks, but our observations indicate that this breaks down when data have sparse duplicate blocks. In addition, there are many overlapped deltas in the resemblance detection process of post-deduplication delta compression, which hinders the efficiency of delta compression, and the index phase of resemblance detection queries abundant non-similar blocks, resulting in inefficient system throughput. Therefore, a multi-feature-based redundancy elimination scheme, called MFRE, is proposed to solve these problems. The similarity feature and the temporal locality feature are exploited to assist redundancy elimination, where the similarity feature well expresses the duplicate attribute. Then, similarity-based dynamic post-deduplication delta compression and temporal-locality-based dynamic delta compression discover more similar base blocks to minimise overlapped deltas and improve compression ratios. Moreover, a clustering method based on block relationships and a feature index strategy based on Bloom filters reduce IO overheads and improve system throughput. Experiments demonstrate that the proposed method, compared to the state-of-the-art method, improves the compression ratio and system throughput by 9.68% and 50%, respectively.
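A minimal sketch of the deduplication layer the scheme builds on; the similarity features, delta compression, and Bloom-filter index are the paper's contributions and are not reproduced here.

```python
import hashlib

def dedup(blocks, store):
    """Block-level deduplication: keep one physical instance per unique
    block, keyed by its SHA-256 fingerprint; duplicates become references."""
    refs = []
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)   # first occurrence stores the block
        refs.append(fp)               # every occurrence is just a reference
    return refs

# store = {}; refs = dedup([b"aa", b"bb", b"aa"], store)
# -> store holds two blocks, refs holds three fingerprints
```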
DEM data is an important component of spatial databases in GIS. The data volume is so huge that compression is necessary. Wavelet transforms have many advantages and have become a trend in data compression. Considering the simplicity and high efficiency of the compression system, an integer wavelet transform is applied to DEM data and a simple coding algorithm with high efficiency is introduced. Experiments on a variety of DEMs are carried out and some useful rules are presented at the end of this paper.
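The abstract does not name the wavelet; a common reversible choice for lossless integer transforms is the LeGall 5/3 filter, sketched below via lifting and assumed here purely for illustration.

```python
def lift_53(x):
    """One level of the reversible LeGall 5/3 integer wavelet (lifting):
    integer-to-integer, so the inverse reconstructs DEM heights exactly.
    Assumes len(x) >= 2; borders use symmetric extension."""
    s = list(x[0::2])                          # even samples -> approximation
    d = list(x[1::2])                          # odd samples  -> detail
    for i in range(len(d)):                    # predict: detail minus average
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] -= (s[i] + right) // 2
    for i in range(len(s)):                    # update: smooth approximation
        left = d[i - 1] if i - 1 >= 0 else d[0]
        cur = d[i] if i < len(d) else d[-1]
        s[i] += (left + cur + 2) // 4
    return s, d
```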