Vector quantization (VQ) is an important data compression method. The key step in VQ encoding is finding the closest vector among N candidate vectors for a given feature vector. Classical linear search algorithms take O(N) distance computations between vector pairs. This paper presents a quantum VQ iteration and a corresponding quantum VQ encoding algorithm that take O(√N) steps. Because a quantum state exists in a superposition of states, the unitary operation of distance computation can be performed on many vectors simultaneously. The quantum VQ iteration comprises three oracles, whereas many quantum algorithms, such as Shor's factorization algorithm and Grover's algorithm, use only one. An entangled state is generated and used, whereas the state in Grover's algorithm is not entangled. The quantum VQ iteration is a rotation over a subspace, whereas the Grover iteration is a rotation over the global space. The quantum VQ iteration thus extends the Grover iteration to more complex searches that require multiple oracles, and the method is universal.
NC code or STL files can be generated directly from measurement data in a fast reverse-engineering mode. Compressing the massive data from a laser scanner is the key to this new mode. An adaptive compression method based on a triangulated-surface model is put forward. Normal-vector angles between triangles are computed to find candidate vertices for removal. A ring data structure is adopted to store the massive data effectively; it allows efficient retrieval of all neighboring vertices and triangles of a given vertex. To avoid long, thin triangles, a new re-triangulation approach based on normalized minimum vertex distance is proposed, in which both the vertex distance and the interior angles of triangles are considered. Results indicate that the compression method is highly efficient and achieves reliable precision. The method can be applied in fast reverse engineering to acquire an optimal subset of the original massive data.
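The abstract does not give the exact removal criterion, but the normal-vector angle test it describes can be sketched as follows (a pure-Python sketch with hypothetical function names, not the paper's code): two adjacent triangles whose normals are nearly parallel lie in a locally flat region, so their shared vertices become candidates for removal.

```python
import math

def triangle_normal(a, b, c):
    # Cross product of two edge vectors gives the face normal; normalize it.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    mag = math.sqrt(sum(x * x for x in n))
    return [x / mag for x in n]

def normal_angle_deg(tri1, tri2):
    # Angle between the normals of two triangles; a small angle marks a
    # nearly flat region whose shared vertices are removal candidates.
    n1 = triangle_normal(*tri1)
    n2 = triangle_normal(*tri2)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))
```

A decimation pass would then remove a vertex whenever all angles between its incident triangles fall below a chosen threshold.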
Due to the large scale and complexity of civil infrastructures, structural health monitoring typically requires a substantial number of sensors, which consequently generate huge volumes of sensor data. Innovative sensor data compression techniques are highly desired to facilitate efficient storage and remote retrieval of sensor data. This paper presents a vibration sensor data compression algorithm based on Differential Pulse Code Modulation (DPCM) and considers the effect of signal distortion due to lossy compression on structural system identification. The DPCM system consists of two primary components: a linear predictor and a quantizer. In this study, the least-squares method is used to derive the linear predictor coefficients, and a Jayant quantizer is used for scalar quantization. A 5-DOF model structure is used as the prototype structure in a numerical study. Numerical simulation was carried out to evaluate the performance of the proposed DPCM-based compression algorithm as well as its effect on the accuracy of structural identification, including modal parameters and second-order structural parameters such as stiffness and damping coefficients. It is found that the DPCM-based sensor data compression method can reduce the raw sensor data size to a significant extent while having only a minor effect on the modal and second-order structural parameters identified from the reconstructed sensor data.
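The DPCM encode/decode loop described above can be illustrated with a minimal sketch. Assumptions are flagged plainly: this uses a fixed first-order predictor and a uniform quantizer with an arbitrary step size, not the paper's least-squares predictor or adaptive Jayant quantizer.

```python
def dpcm_encode(signal, step=0.1, a=1.0):
    # First-order predictor xhat[n] = a * x_rec[n-1]; the quantized
    # prediction error index is transmitted instead of the raw sample.
    codes, prev = [], 0.0
    for x in signal:
        err = x - a * prev
        q = round(err / step)        # uniform scalar quantizer index
        codes.append(q)
        prev = a * prev + q * step   # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=0.1, a=1.0):
    # Mirror of the encoder's reconstruction loop.
    out, prev = [], 0.0
    for q in codes:
        prev = a * prev + q * step
        out.append(prev)
    return out
```

Because the encoder quantizes the error against its own reconstruction, the per-sample reconstruction error stays bounded by half the quantizer step.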
The covert channel based on packet ordering is a hot research topic. Encryption technology alone is not enough to protect the security of both communicating parties; a covert channel must hide the fact of transmission as well as protect the content of communication. Traditional methods usually rely on proxy technology, such as Tor anonymous routing, to hide the communicating parties. However, establishing proxy communication consumes traffic and reduces communication capacity, and in recent years Tor has repeatedly exhibited vulnerabilities that led to the leakage of secret information. In this paper, the packet-ordering covert channel model is applied to a distributed system, and a distributed covert channel of the packet ordering enhancement model based on data compression (DCCPOEDC) is proposed. Data compression algorithms are used to reduce the amount of data and the transmission time. The distributed system and the compression algorithms weaken the statistical detectability of the hidden information; furthermore, they enhance the unknowability of the data and weaken the time-distribution characteristics of the data packets. A compression algorithm suitable for DCCPOEDC is selected, and DCCPOEDC is analyzed in terms of anonymity, transmission efficiency, and transmission performance. The analysis shows that DCCPOEDC optimizes the packet-ordering covert channel, saving transmission time and improving concealment compared with the original covert channel.
Shannon gave the sampling theorem for band-limited functions in 1948, but Shannon's theorem cannot meet the needs of modern high-speed technology. This paper gives a new high-speed sampling theorem with a fast convergence rate, high precision, and a simple algorithm. A practical example is used to verify its efficiency.
A new real-time data compression algorithm, combining segment-normalized logical compression with a so-called "one taken from two samples" scheme, is presented for broadband, high-dynamic-range seismic recordings. The algorithm was tested by numerical simulation and on observed data. The results demonstrate that, when the two methods are used together, total errors in the recovered data are less than 1% of the original data in the time domain and 0.5% in the frequency domain, while the compression ratio exceeds 3. Data compression software based on this algorithm has been used in the GDS-1000 portable broadband digital seismograph.
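The abstract does not specify the bit layout of the segment-normalized stage, so the following is only a simplified sketch of the general idea: each segment stores one floating-point scale factor plus fixed-width integer mantissas, trading a small quantization error for fewer bits per sample.

```python
def seg_norm_compress(samples, seg_len=8, bits=8):
    # One scale factor per segment, plus seg_len signed integer
    # mantissas of the given bit width (hypothetical parameters).
    qmax = (1 << (bits - 1)) - 1
    blocks = []
    for i in range(0, len(samples), seg_len):
        seg = samples[i:i + seg_len]
        scale = max(abs(s) for s in seg) or 1.0
        blocks.append((scale, [round(s / scale * qmax) for s in seg]))
    return blocks

def seg_norm_expand(blocks, bits=8):
    # Rescale each mantissa by its segment's scale factor.
    qmax = (1 << (bits - 1)) - 1
    out = []
    for scale, codes in blocks:
        out.extend(c * scale / qmax for c in codes)
    return out
```

With 8-bit mantissas the per-sample error is bounded by scale/254, i.e. well under 1% of the segment peak, consistent in spirit with the error figures quoted above.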
Systems-on-chip with intellectual property cores need a large volume of test data, which requires long testing times and large test data memory. New techniques are therefore needed to optimize test data volume, decrease testing time, and overcome the ATE memory limitation in SOC designs. This paper presents a new test data compression method for intellectual-property-core-based systems-on-chip. The proposed method is based on new split-data variable length (SDV) codes, designed using split options along with identification bits in a string of test data. The paper analyzes the reduction of test data volume, testing time, run time, and required ATE memory, and the improvement in compression ratio. Experimental results for the ISCAS 85 and ISCAS 89 benchmark circuits show that SDV codes outperform other compression methods with the best compression ratio for test data compression. A decompression architecture for SDV codes is also presented for decoding the compressed bit stream. The proposed scheme shows that SDV codes adapt to any variation in the input test data stream.
Facing constraints imposed by storage and bandwidth limitations, the vast volume of phasor measurement unit (PMU) data collected by the wide-area measurement system (WAMS) for power systems cannot be fully utilized. This limitation significantly hinders the effective deployment of situational awareness technologies. In this work, an effective curvature quantified Douglas-Peucker (CQDP)-based PMU data compression method is proposed for situational awareness of power systems. First, a curvature integrated distance (CID) for measuring the local flection and fluctuation of PMU signals is developed. The Douglas-Peucker (DP) algorithm, integrated with a quantile-based parameter adaptation scheme, is then proposed to extract feature points that profile the trends within the PMU signals. This allows adaptive adjustment of the algorithm parameters, so as to maintain the desired compression ratio and reconstruction accuracy as much as possible, irrespective of the power system dynamics. Finally, case studies on the Western Electricity Coordinating Council (WECC) 179-bus system and the actual Guangdong power system are performed to verify the effectiveness of the proposed method. The simulation results show that the proposed method achieves consistently higher compression ratio and reconstruction accuracy in both steady state and transients of the power system, and alleviates the compression performance degradation problem faced by existing compression methods. Index terms: curvature quantified Douglas-Peucker, data compression, phasor measurement unit, power system situational awareness.
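The base Douglas-Peucker recursion that the CQDP method builds on (before the paper's curvature distance and quantile adaptation) can be sketched as follows; it keeps a sample whenever its perpendicular distance from the chord between the segment endpoints exceeds a tolerance, then recurses on both halves.

```python
def douglas_peucker(points, eps):
    # points: list of (x, y) samples; eps: distance tolerance.
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        x, y = points[i]
        # Perpendicular distance from (x, y) to the chord.
        d = abs(dy * (x - x1) - dx * (y - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]   # whole span fits within eps
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right             # drop the duplicated split point
```

In the paper's variant, the plain perpendicular distance is replaced by the curvature integrated distance and eps is adapted via quantiles of the signal.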
To completely eliminate the time delays caused by phasor data compression in real-time synchrophasor applications, a real-time synchrophasor data compression (RSDC) method is proposed in this paper. The two-way rotation characteristic and elliptical trajectory of dynamic synchrophasors are introduced first to enhance the compression, along with a fast solving method for the elliptical-trajectory fitting equations. The RSDC for phasor data compression and reconstruction is then constructed by combining interpolation and extrapolation compression. The proposed RSDC is verified on actual phasor measurement data recorded during a two-phase short-circuit incident and a subsynchronous oscillation incident, as well as on synthetic dynamic synchrophasors. It is also compared with two previous real-time phasor data compression techniques, namely phasor swing door trending (PSDT) and exception and swing door trending (SDT) data compression (ESDC). The verification results demonstrate that RSDC simultaneously achieves significantly higher compression ratios for offline applications via interpolation and zero-delay phasor data compression for real-time applications via extrapolation.
The general concept of data compression consists in removing the redundancy in data to find a more compact representation. This paper is concerned with a new compression method using second-generation wavelets based on the lifting scheme, a simple but powerful wavelet construction method. Its successful application to a real-time monitoring system for large hydraulic machines shows that it is a promising compression method.
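A minimal example of the lifting scheme is one level of the Haar wavelet, the simplest lifting construction (the paper's actual predict/update filters are not specified in the abstract): split the signal into even and odd samples, predict each odd sample from its even neighbor, and update the evens to preserve the running average.

```python
def haar_lift(data):
    # One lifting level: split -> predict (detail = odd - even)
    #                  -> update (approx = even + detail / 2).
    even, odd = data[0::2], data[1::2]
    detail = [o - e for e, o in zip(even, odd)]
    approx = [e + d / 2 for e, d in zip(even, detail)]
    return approx, detail

def haar_unlift(approx, detail):
    # Exact inverse: undo update, undo predict, interleave.
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Compression comes from the detail coefficients being near zero on smooth data, so they quantize or threshold cheaply, while the transform remains exactly invertible.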
Modern vessels are designed to collect, store, and communicate large quantities of ship performance and navigation information through complex onboard data handling processes. That data should be transferred to shore-based data centers for further analysis and storage. However, the associated transfer cost for large-scale data sets is a major challenge for the shipping industry today. This cost scales with the amount of data transferred through the various communication networks (i.e., satellite and wireless networks) between vessels and shore-based data centers. Hence, this study proposes an autoencoder system architecture (i.e., a deep learning approach) to compress ship performance and navigation parameters (i.e., reduce the number of parameters) so that reduced data sets are transferred through the respective communication networks. The compression is done with the linear version of an autoencoder, which amounts to principal component analysis (PCA), where the respective principal components (PCs) represent the structure of the data set. The compressed data set is expanded by the same autoencoder architecture at the data center requiring further analysis and storage. A data set of ship performance and navigation parameters from a selected vessel is analyzed (i.e., compressed and expanded) through this autoencoder architecture, and the results are presented in this study. Furthermore, the input and output values of the autoencoder are compared as statistical distributions and sample-number series to evaluate its performance.
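The linear-autoencoder/PCA compression described above can be sketched with NumPy (function names are illustrative, not the study's code): fitting finds the top-k principal directions, encoding projects onto them, and decoding projects back.

```python
import numpy as np

def pca_fit(X, k):
    # Linear autoencoder = PCA: the top-k right singular vectors of the
    # centered data span the code space.
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]              # (d,), (k, d)

def pca_encode(X, mean, comps):
    return (X - mean) @ comps.T      # (n, k) compressed representation

def pca_decode(Z, mean, comps):
    return Z @ comps + mean          # (n, d) reconstruction
```

When the data truly lie near a k-dimensional subspace, the (n, k) codes transmitted ashore reconstruct the full (n, d) parameter set with little loss.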
This paper defines second-order and third-order permutation global functions and presents the corresponding higher-order cellular automaton approach to hyper-parallel undistorted data compression. A genetic algorithm is successfully applied to find all the correct local compression rules for the higher-order cellular automaton. The correctness of the higher-order compression rules, the time complexity, and the systolic hardware implementation are discussed. In comparison with the previously reported first-order automaton method, the proposed higher-order approach achieves much faster compression with almost the same degree of cellular-structure complexity for hardware implementation.
This paper proposes an effective method for reducing test data volume under multiple scan chain designs. The proposed method is based on reducing the number of distinct scan vectors using selective don't-care identification, which is repeatedly executed under the condition that each bit of frequent scan vectors is fixed to a binary value (0 or 1). In addition, a code extension technique is adopted to improve compression efficiency while keeping the decompressor circuits simple, such that the code length for infrequent scan vectors is double that for frequent ones. The effectiveness of the proposed method is shown through experiments on the ISCAS'89 and ITC'99 benchmark circuits.
A compression algorithm is proposed in this paper for reducing the size of sensor data. By using a dictionary-based lossless compression algorithm, sensor data can be compressed efficiently and interpreted without decompression. The correlation between the redundancy of sensor data and the compression ratio is explored. Further, a parallel compression algorithm based on MapReduce [1] is proposed, and the data partitioner, which plays an important role in the performance of a MapReduce application, is discussed along with the performance evaluation criteria proposed in this paper. Experiments demonstrate that a random sampler is suitable for highly redundant sensor data and that the proposed compression algorithms can compress such highly redundant sensor data efficiently.
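The abstract does not name the dictionary algorithm used; LZW is a standard dictionary-based lossless scheme and serves here only as an illustrative stand-in for how such a compressor exploits redundancy in sensor readings.

```python
def lzw_compress(text):
    # Classic LZW: grow a dictionary of previously seen substrings
    # and emit their integer codes.
    dictionary = {chr(i): i for i in range(256)}
    w, out = "", []
    for ch in text:
        wc = w + ch
        if wc in dictionary:
            w = wc                       # extend the current match
        else:
            out.append(dictionary[w])    # emit code for longest match
            dictionary[wc] = len(dictionary)
            w = ch
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes):
    # Rebuild the dictionary on the fly; the k == len(dictionary) case
    # handles the code that refers to the entry being constructed.
    dictionary = {i: chr(i) for i in range(256)}
    w = chr(codes[0])
    out = [w]
    for k in codes[1:]:
        entry = dictionary[k] if k in dictionary else w + w[0]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[0]
        w = entry
    return "".join(out)
```

The more repetitive the input, the longer the dictionary matches grow, which is exactly the redundancy/compression-ratio correlation the paper explores.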
Agricultural robots can flexibly collect ambient information across large areas of farmland. However, they face two major challenges: data compression and noise filtering. To address these challenges, an encoder for ambient data compression, named Tiny-Encoder, is presented to compress and filter raw ambient information on agricultural robots. Tiny-Encoder is based on convolution and pooling operations and has a small number of layers and filters. To evaluate its performance, three different types of ambient information (temperature, humidity, and light) were selected for the tasks of compressing raw data and filtering noise. In compressing raw data, Tiny-Encoder obtained higher accuracy (errors below the sensors' maximum error of ±0.5 °C or ±3.5% RH) and a more appropriate model size (at most 205 KB) than two other convolution-based autoencoders with different numbers of compressed features (20, 60, and 200). As for filtering noise, Tiny-Encoder performs comparably with three conventional filtering approaches (median filtering, Gaussian filtering, and Savitzky-Golay filtering). With a large kernel size (i.e., 5), Tiny-Encoder had the best performance among these four filtering approaches: the coefficients of variation were 8.6189% (temperature), 10.2684% (humidity), and 57.3576% (light). Overall, Tiny-Encoder can be used for ambient information compression on the microcontrollers of agricultural information acquisition robots.
A recursive identification method is proposed to obtain continuous-time state-space models in systems with nonuniformly sampled (NUS) data. Owing to the nonuniform sampling, the time interval from one recursion step to the next varies, and the parameters are only partially updated at each step. This identification method is then applied to form a combined data compression method for NUS processes. The data to be compressed are first classified with respect to a series of potentially existing (possibly time-varying) models and then modeled by the NUS identification method; the model parameters are stored instead of the identification output data, which constitutes the first compression stage. Subsequently, as the second stage, the conventional swinging door trending method is applied to the data from the first stage. Numerical results from simulation as well as practical data show the effectiveness of the proposed identification method and the severalfold increase in compression ratio achieved by the combined data compression method.
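The second stage above is conventional swinging door trending (SDT). A minimal sketch of one common SDT formulation (parameter names hypothetical; implementations vary in pivot conventions): keep a slope corridor of width ±dev around the last archived point, and archive a new point only when the corridor collapses.

```python
def sdt_compress(points, dev):
    # points: list of (t, v) with strictly increasing t; dev: deviation bound.
    # Shrink the admissible slope corridor [s_lo, s_hi] from the pivot
    # (last archived point); when it collapses, archive the previous point
    # and restart the corridor from it.
    kept = [points[0]]
    s_hi, s_lo = float("inf"), float("-inf")
    prev = points[0]
    for t, v in points[1:]:
        t0, v0 = kept[-1]
        s_hi = min(s_hi, (v + dev - v0) / (t - t0))
        s_lo = max(s_lo, (v - dev - v0) / (t - t0))
        if s_lo > s_hi:                      # corridor collapsed
            kept.append(prev)
            t0, v0 = prev
            s_hi = (v + dev - v0) / (t - t0) # restart doors from new pivot
            s_lo = (v - dev - v0) / (t - t0)
        prev = (t, v)
    if kept[-1] != points[-1]:
        kept.append(points[-1])
    return kept
```

A slowly drifting signal thus collapses to its endpoints, while abrupt changes force extra archive points, which is why SDT pairs well with the model-based first stage.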
This paper presents an improved test data compression scheme, based on a combination of test data compatibility and a dictionary, for multi-scan designs, to reduce test data volume and thus test cost. The proposed method has two steps. First, a drive bit matrix with fewer columns is generated from the compatibilities between the columns of the initial scan bit matrix, together with the inverse compatibilities and the logic dependencies between the columns of intermediate bit matrices. Second, a dictionary bit matrix with a limited number of rows is constructed, with the property that for each row of the drive bit matrix a compatible row either exists in the dictionary or can be generated by XOR of multiple dictionary rows, while the total number of rows used to compute all compatible rows is minimal. The rows of the dictionary matrix are encoded to further reduce the number of ATE channels and the test data volume. Experimental results on the large ISCAS 89 benchmarks show that the proposed method significantly reduces test data volume for multi-scan designs.
The Dark Matter Particle Explorer (DAMPE) is an upcoming scientific satellite mission for high-energy gamma-ray, electron, and cosmic-ray detection. The silicon tracker (STK) is a subdetector of the DAMPE payload. It has excellent position resolution (readout pitch of 242 μm) and measures the incident direction of particles as well as their charge. The STK consists of 12 layers of silicon micro-strip detectors (SMDs), equivalent to a total silicon area of 6.5 m². The STK has 73728 readout channels in total, which produce a huge amount of raw data to be processed. In this paper, we focus on the on-board data compression algorithm and procedure in the STK, and show the results of initial verification by cosmic-ray measurements.
A recent trend in computer graphics and image processing is to use Iterated Function Systems (IFS) to generate and describe both man-made graphics and natural images. Jacquin was the first to propose a fully automatic gray-scale image compression algorithm, referred to in this paper as the typical static fractal transform based algorithm. With this algorithm, an image can be compactly described as a fractal transform operator, which is the combination of a set of fractal mappings. When the fractal transform operator is iteratively applied to any initial image, a unique attractor (the reconstructed image) is reached. In this paper, a dynamic fractal transform is presented as a modification of the static transform: instead of being fixed, the dynamic transform operator varies at each decoder iteration, thus differing from static transform operators. The new transform improves coding efficiency and shows better convergence in the decoder.
Synthetic aperture radar (SAR) is portrayed as a multiple-access channel. An information-theoretic approach is applied to the SAR imaging system, and the information content about a target that can be extracted from its radar image is evaluated by the average mutual information measure. A conditional (transition) probability density function (PDF) of the SAR imaging system is derived by analyzing the system, and a closed form of the information content is found. It is shown that, as the number of looks increases, the information content obtained from an independent sample of echoes decreases while the total information content obtained by the SAR imaging system increases. Because the total average mutual information also defines a measure of radiometric resolution for radar images, it follows that the radiometric resolution of a radar image of terrain is improved by spatial averaging. In addition, the imaging process and the data compression process for SAR are each treated as an independent generalized communication channel. The effects of data compression on radiometric resolution for SAR are studied and some conclusions are drawn.
Funding: This project is supported by the Provincial Key Project of Science and Technology of Zhejiang (No. 2003C21031).
Funding: This work is sponsored by the National Natural Science Foundation of China (Grant No. 61100008), the Natural Science Foundation of Heilongjiang Province of China (Grant No. LC2016024), the Natural Science Foundation of the Jiangsu Higher Education Institutions (Grant No. 17KJB520044), and the Six Talent Peaks Project in Jiangsu Province (No. XYDXX-108).
Funding: supported by the National Natural Science Foundation of China (No. 52077195).
Abstract: Facing constraints imposed by storage and bandwidth limitations, the vast volume of phasor measurement unit (PMU) data collected by the wide-area measurement system (WAMS) for power systems cannot be fully utilized. This limitation significantly hinders the effective deployment of situational awareness technologies for systematic applications. In this work, an effective curvature quantified Douglas-Peucker (CQDP)-based PMU data compression method is proposed for situational awareness of power systems. First, a curvature integrated distance (CID) for measuring the local flection and fluctuation of PMU signals is developed. The Douglas-Peucker (DP) algorithm, integrated with a quantile-based parameter adaptation scheme, is then proposed to extract feature points for profiling the trends within the PMU signals. This allows adaptive adjustment of the algorithm parameters, so as to maintain the desired compression ratio and reconstruction accuracy as much as possible, irrespective of the power system dynamics. Finally, case studies on the Western Electricity Coordinating Council (WECC) 179-bus system and the actual Guangdong power system are performed to verify the effectiveness of the proposed method. The simulation results show that the proposed method achieves a stably higher compression ratio and reconstruction accuracy in both steady state and transients of the power system, and alleviates the compression performance degradation problem faced by existing compression methods. Index Terms: curvature quantified Douglas-Peucker, data compression, phasor measurement unit, power system situational awareness.
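The standard Douglas-Peucker routine that CQDP builds on can be sketched as follows. This is a generic sketch, not the authors' implementation; in CQDP the plain perpendicular distance below would be replaced by the paper's curvature integrated distance, and the tolerance `eps` would be adapted via the quantile scheme:

```python
import math

def perp_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, eps):
    # Keep the endpoints; recurse on the farthest point if it exceeds eps.
    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]  # segment is flat enough: drop interior
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right  # idx appears in both halves; keep it once
```

A near-linear signal collapses to its two endpoints, while sharp flections survive as feature points, which is why the distance measure and tolerance choice dominate the compression-accuracy trade-off.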
Funding: supported by the Fundamental Research Funds for the Central Universities (No. 2019RC006) and the National Natural Science Foundation of China (No. 52077004).
Abstract: To completely eliminate the time delays caused by phasor data compression in real-time synchrophasor applications, a real-time synchrophasor data compression (RSDC) method is proposed in this paper. The two-way rotation characteristic and elliptical trajectory of dynamic synchrophasors are introduced first to enhance the compression, along with a fast solving method for the elliptical trajectory fitting equations. The RSDC for phasor data compression and reconstruction is then proposed by combining interpolation and extrapolation compression. The proposed RSDC is verified with actual phasor measurement data recorded in a two-phase short-circuit incident and a subsynchronous oscillation incident, as well as with synthetic dynamic synchrophasors. It is also compared with two previous real-time phasor data compression techniques, i.e., phasor swing door trending (PSDT) and exception and swing door trending (SDT) data compression (ESDC). The verification results demonstrate that RSDC simultaneously achieves significantly higher compression ratios for offline applications with interpolation, and zero-delay phasor data compression for real-time applications with extrapolation.
Abstract: The general concept of data compression consists in removing the redundancy existing in data to find a more compact representation. This paper is concerned with a new compression method using second-generation wavelets based on the lifting scheme, a simple but powerful wavelet construction method. Its successful application to a real-time monitoring system for large hydraulic machines shows that it is a promising compression method.
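As a minimal illustration of the lifting scheme, one level of the Haar wavelet can be built from a split, a predict step, and an update step. This is a textbook sketch under standard Haar lifting conventions, not the monitoring system's code:

```python
def lifting_forward(x):
    # One level of the Haar wavelet via lifting: split, predict, update.
    even, odd = x[0::2], x[1::2]                        # split
    detail = [o - e for o, e in zip(odd, even)]         # predict: odd - even
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update: preserve mean
    return approx, detail

def lifting_inverse(approx, detail):
    # Undo the lifting steps in reverse order: reconstruction is exact.
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])                                # merge
    return x
```

Compression follows by quantizing or discarding small `detail` coefficients; because each lifting step is trivially invertible, the scheme is cheap enough for real-time monitoring.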
Funding: This work has been conducted under the project "SFI Smart Maritime (237917/O30) - Norwegian Centre for improved energy-efficiency and reduced emissions from the maritime sector", which is partly funded by the Research Council of Norway. An initial version of this paper was presented at the 35th International Conference on Ocean, Offshore and Arctic Engineering (OMAE 2016), Busan, Korea, June 2016 (OMAE2016-54093).
Abstract: Modern vessels are designed to collect, store and communicate large quantities of ship performance and navigation information through complex onboard data handling processes. That data should be transferred to shore-based data centers for further analysis and storage. However, the associated transfer cost of large-scale data sets is a major challenge for the shipping industry today. The same cost relates to the amount of data transferred through various communication networks (i.e. satellites and wireless networks) between vessels and shore-based data centers. Hence, this study proposes to use an autoencoder system architecture (i.e. a deep learning approach) to compress ship performance and navigation parameters (i.e. reduce the number of parameters) and transfer them through the respective communication networks as reduced data sets. The data compression is done with the linear version of an autoencoder, which consists of principal component analysis (PCA), where the respective principal components (PCs) represent the structure of the data set. The compressed data set is expanded by the same data structure (i.e. an autoencoder system architecture) at the respective data center requiring further analyses and storage. A data set of ship performance and navigation parameters from a selected vessel is analyzed (i.e. data compression and expansion) through an autoencoder system architecture and the results are presented in this study. Furthermore, the respective input and output values of the autoencoder are compared as statistical distributions and sample number series to evaluate its performance.
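A linear autoencoder of this kind amounts to projecting centered data onto the leading principal components onboard and reconstructing at the shore-side data center. A minimal NumPy sketch (function names are illustrative, not the study's code):

```python
import numpy as np

def pca_fit(data, k):
    # Fit a k-component PCA "linear autoencoder" on the rows of `data`.
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:k]                  # leading principal axes = encoder

def compress(data, mean, axes):
    # Encoder: project centered data onto the k principal components.
    return (data - mean) @ axes.T

def expand(codes, mean, axes):
    # Decoder: reconstruct in the original parameter space.
    return codes @ axes + mean
```

Only the k-dimensional codes (plus the small `mean` and `axes` arrays) need to cross the satellite link; reconstruction quality depends on how much of the data's variance the leading PCs capture.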
Funding: the National Natural Science Foundation of China under Grant No. 69773037 and the Foundational R&D Plan of China under Grant No. G1999D3270.
Abstract: This paper defines second-order and third-order permutation global functions and presents the corresponding higher-order cellular automaton approach to hyper-parallel undistorted data compression. A genetic algorithm is successfully employed to find all the correct local compression rules for the higher-order cellular automaton. The correctness of the higher-order compression rules, the time complexity, and the systolic hardware implementation are discussed. In comparison with the previously reported first-order automaton method, the proposed higher-order approach has a much faster compression speed with almost the same degree of cellular structure complexity for hardware implementation.
Abstract: This paper proposes an effective method for reducing test data volume under multiple scan chain designs. The proposed method is based on the reduction of distinct scan vectors using selective don't-care identification. Selective don't-care identification is repeatedly executed under the condition that each bit of frequent scan vectors is fixed to a binary value (0 or 1). Besides, a code extension technique is adopted to improve compression efficiency while keeping the decompressor circuits simple, in that the code length for infrequent scan vectors is designed to be double that for frequent ones. The effectiveness of the proposed method is shown through experiments on the ISCAS'89 and ITC'99 benchmark circuits.
Funding: supported by the National Natural Science Foundation of China (60933011, 61170258).
Abstract: A compression algorithm is proposed in this paper for reducing the size of sensor data. By using a dictionary-based lossless compression algorithm, sensor data can be compressed efficiently and interpreted without decompressing. The correlation between the redundancy of sensor data and the compression ratio is explored. Further, a parallel compression algorithm based on MapReduce [1] is proposed. Meanwhile, the data partitioner, which plays an important role in the performance of a MapReduce application, is discussed along with the performance evaluation criteria proposed in this paper. Experiments demonstrate that a random sampler is suitable for highly redundant sensor data and that the proposed compression algorithms can compress such highly redundant sensor data efficiently.
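A toy version of the dictionary idea: distinct sensor readings get short integer codes, and simple queries can then be answered on the codes directly, without decompressing the stream. The function names and the example query are illustrative, not the paper's API:

```python
def build_dictionary(readings):
    # Assign a short integer code to each distinct reading value,
    # in first-seen order. Highly redundant data yields a tiny dictionary.
    codes = {}
    for r in readings:
        if r not in codes:
            codes[r] = len(codes)
    return codes

def compress(readings, codes):
    # Replace each reading with its dictionary code.
    return [codes[r] for r in readings]

def query_count(compressed, codes, value):
    # Interpret the compressed stream directly: count occurrences of
    # `value` by scanning codes, with no decompression step.
    code = codes.get(value)
    return sum(1 for c in compressed if c == code) if code is not None else 0
```

The more redundant the sensor stream, the smaller the dictionary relative to the data, which mirrors the redundancy-versus-compression-ratio correlation the paper explores.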
Funding: This work was financially supported by the National Key Research and Development Program (Grant No. 2019YFE0125500) and the Chinese University Scientific Fund (Grant No. 2021TC111).
Abstract: Agricultural robots can flexibly acquire ambient information across large areas of farmland. However, they face two major challenges: data compression and noise filtering. To address these challenges, an encoder for ambient data compression, named Tiny-Encoder, is presented to compress and filter raw ambient information; it can be applied to agricultural robots. Tiny-Encoder is based on convolution and pooling operations and has a small number of layers and filters. To evaluate the performance of Tiny-Encoder, three different types of ambient information (temperature, humidity, and light) were selected to show its performance in compressing raw data and filtering noise. In the raw data compression task, Tiny-Encoder obtained higher accuracy (error less than the maximum sensor error of ±0.5°C or ±3.5% RH) and a more appropriate size (the largest model is 205 KB) than two other convolution-based autoencoders with different numbers of compressed features (20, 60, and 200 features). As for noise filtering, Tiny-Encoder has comparable performance with three conventional filtering approaches (median filtering, Gaussian filtering, and Savitzky-Golay filtering). With a large kernel size (i.e., 5), Tiny-Encoder has the best performance among the four filtering approaches: the coefficients of variation with the large kernel were 8.6189% (temperature), 10.2684% (humidity), and 57.3576% (light). Overall, Tiny-Encoder can be used for ambient information compression on the microcontrollers of agricultural information acquisition robots.
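The two building blocks such an encoder relies on, convolution and pooling, can be sketched in one dimension. Tiny-Encoder's actual layer shapes and trained weights are not given here, so this is only an illustration of the operations themselves:

```python
def conv1d(signal, kernel):
    # 'Valid' 1-D convolution (correlation form): the basic encoder op.
    # A smoothing kernel doubles as a noise filter on sensor streams.
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def avg_pool(signal, k):
    # Average pooling: compress the stream by a factor of k while
    # averaging out high-frequency sensor noise.
    return [sum(signal[i:i + k]) / k
            for i in range(0, len(signal) - k + 1, k)]
```

Stacking a few such convolution and pooling stages with small kernels is what keeps models of this kind within microcontroller-scale memory budgets.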
Abstract: A recursive identification method is proposed to obtain continuous-time state-space models in systems with nonuniformly sampled (NUS) data. Due to the nonuniform sampling, the time interval from one recursion step to the next varies and the parameters are only partially updated at each step. Furthermore, this identification method is applied to form a combined data compression method for NUS processes. The data to be compressed are first classified with respect to a series of potentially existing (possibly time-varying) models and then modeled by the NUS identification method. The model parameters are stored instead of the identification output data, which constitutes the first compression. Subsequently, as the second step, the conventional swinging door trending method is applied to the data from the first step. Numerical results from simulation as well as practical data are given, showing the effectiveness of the proposed identification method and the severalfold increase in compression ratio achieved by the combined data compression method.
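The swinging door trending step used in the second stage can be sketched as follows. This is a common simplified formulation with a compression deviation `dev`, not the paper's implementation; variable names are illustrative:

```python
def sdt_compress(times, values, dev):
    # Swinging door trending: emit a point only when the two "doors"
    # (extreme slopes from the last stored point, offset by +/- dev) cross.
    kept = [(times[0], values[0])]
    t0, v0 = times[0], values[0]            # last stored (pivot) point
    t_prev, v_prev = t0, v0
    s_up, s_low = float("-inf"), float("inf")
    for t, v in zip(times[1:], values[1:]):
        s_up = max(s_up, (v - (v0 + dev)) / (t - t0))    # upper door slope
        s_low = min(s_low, (v - (v0 - dev)) / (t - t0))  # lower door slope
        if s_up > s_low:                    # doors crossed: trend broken
            t0, v0 = t_prev, v_prev         # store the previous point
            kept.append((t0, v0))
            s_up = (v - (v0 + dev)) / (t - t0)   # restart doors at new pivot
            s_low = (v - (v0 - dev)) / (t - t0)
        t_prev, v_prev = t, v
    kept.append((t_prev, v_prev))           # always keep the final point
    return kept
```

Segments that stay within the `dev` corridor are replaced by their endpoints, so slowly varying stretches between model changes compress to almost nothing.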
Funding: the National Natural Science Foundation of China (Nos. 90207018 and 60576030).
Abstract: This paper presents an improved test data compression scheme, based on a combination of test data compatibility and a dictionary, for multi-scan designs, to reduce test data volume and thus test cost. The proposed method includes two steps. First, a drive bit matrix with fewer columns is generated using the compatibilities between the columns of the initial scan bit matrix, as well as the inverse compatibilities and the logic dependencies between the columns of intermediate bit matrices. Secondly, a dictionary bit matrix with a limited number of rows is constructed, with the property that for each row of the drive bit matrix a compatible row either exists in the dictionary bit matrix or can be generated by the XOR of several of its rows, and the total number of rows used to compute all compatible rows is minimal. The rows in the dictionary matrix are encoded to further reduce the number of ATE channels and the test data volume. Experimental results for the large ISCAS 89 benchmarks show that the proposed method significantly reduces test data volume for multi-scan designs.
Funding: Supported by the Strategic Priority Research Program on Space Science of the Chinese Academy of Sciences (XDA040402) and the National Natural Science Foundation of China (1111403027).
Abstract: The Dark Matter Particle Explorer (DAMPE) is an upcoming scientific satellite mission for high energy gamma-ray, electron and cosmic ray detection. The silicon tracker (STK) is a subdetector of the DAMPE payload. It has excellent position resolution (readout pitch of 242 μm), and measures the incident direction of particles as well as their charge. The STK consists of 12 layers of Silicon Micro-strip Detector (SMD), equivalent to a total silicon area of 6.5 m². The STK has 73728 readout channels in total, which produce a huge amount of raw data to be processed. In this paper, we focus on the on-board data compression algorithm and procedure of the STK, and show the results of initial verification by cosmic-ray measurements.
Abstract: A recent trend in computer graphics and image processing is to use Iterated Function Systems (IFS) to generate and describe both man-made graphics and natural images. Jacquin was the first to propose a fully automatic grayscale image compression algorithm, which is referred to in this paper as a typical static fractal transform based algorithm. Using this algorithm, an image can be compactly described as a fractal transform operator, which is the combination of a set of fractal mappings. When the fractal transform operator is iteratively applied to any initial image, a unique attractor (the reconstructed image) is obtained. In this paper, a dynamic fractal transform is presented as a modification of the static transform. Instead of being fixed, the dynamic transform operator varies at each decoder iteration, and thus differs from static transform operators. The new transform improves coding efficiency and shows better convergence in the decoder.
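The key property of the static fractal transform, convergence to a unique attractor from any initial image, can be demonstrated with a toy one-dimensional contractive transform. This is purely illustrative (real fractal coders map downsampled domain blocks to range blocks, not single pixels):

```python
def fractal_decode(transform, size, iterations=50, init=0.0):
    # Iterate a contractive fractal transform from ANY initial image.
    # Each entry (dom, scale, offset) maps one domain pixel to one range
    # pixel; |scale| < 1 makes the operator contractive, so the iteration
    # converges to a unique attractor regardless of `init`.
    img = [init] * size
    for _ in range(iterations):
        img = [scale * img[dom] + offset
               for (dom, scale, offset) in transform]
    return img
```

Only the transform coefficients are stored or transmitted; the decoder regenerates the image by iteration, which is exactly where a dynamic (per-iteration) transform can accelerate convergence.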
Abstract: Synthetic aperture radar (SAR) is portrayed as a multiple-access channel. An information theory approach is applied to the SAR imaging system, and the information content about a target that can be extracted from its radar image is evaluated by the average mutual information measure. A conditional (transition) probability density function (PDF) of the SAR imaging system is derived by analyzing the system, and a closed form of the information content is found. It is shown that, as the number of looks increases, the information content obtained by the SAR imaging system from an independent sample of echoes decreases while the total information content increases. Because the total average mutual information also defines a measure of radiometric resolution for radar images, it follows that the radiometric resolution of a radar image of terrain is improved by spatial averaging. In addition, the imaging process and the data compression process for SAR are each treated as an independent generalized communication channel. The effects of data compression on radiometric resolution for SAR are studied and some conclusions are obtained.
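The multilook trade-off can be checked numerically: averaging N independent single-look intensities cuts the speckle variance by roughly 1/N, which is the spatial-averaging improvement in radiometric resolution. This toy simulation with exponentially distributed single-look intensities is an illustration of the statistics only, not the paper's mutual-information derivation:

```python
import random
import statistics

def multilook(intensities, looks):
    # N-look processing: average groups of `looks` independent single-look
    # intensities; speckle variance drops roughly as 1/looks while the
    # mean backscatter is preserved.
    n = len(intensities) // looks * looks
    return [sum(intensities[i:i + looks]) / looks
            for i in range(0, n, looks)]

random.seed(7)
# Fully developed speckle: single-look intensity is exponentially distributed.
single_look = [random.expovariate(1.0) for _ in range(16000)]
var1 = statistics.pvariance(single_look)
var16 = statistics.pvariance(multilook(single_look, 16))
```

The reduced variance is the radiometric-resolution gain; the cost, per the abstract, is that each additional look contributes less new information about the target.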