Abstract: In this paper, we present a Joint Source-Channel Decoding (JSCD) algorithm for Low-Density Parity Check (LDPC) codes, obtained by modifying the Sum-Product Algorithm (SPA) to account for the source redundancy that results from neighbouring Huffman-coded bits. Simulations demonstrate that, in the presence of source redundancy, the proposed algorithm outperforms Separate Source and Channel Decoding (SSCD).
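The core idea of such joint decoding, biasing the channel decoder with residual source statistics, can be illustrated with a minimal sketch. The probabilities below are hypothetical, and the paper's actual SPA message-update rules are not reproduced; this only shows how a source prior enters as an additive log-likelihood ratio (LLR) term:

```python
import math

def apriori_llr(p_zero):
    """LLR contributed by the source prior P(bit = 0)."""
    return math.log(p_zero / (1.0 - p_zero))

def combined_llr(channel_llr, p_zero):
    """Channel LLR biased by the Huffman-induced source prior, as it
    would be fed into the first iteration of a sum-product decoder."""
    return channel_llr + apriori_llr(p_zero)

# A weakly reliable channel observation (LLR near 0) is pulled toward a
# hard decision when the source statistics favour one bit value.
print(combined_llr(0.1, 0.8) > 0)  # prior P(0)=0.8 dominates -> decide 0
```

With a uniform prior (`p_zero = 0.5`) the extra term vanishes and the decoder reduces to ordinary channel decoding, which is why the gain appears only when source redundancy is present.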
Abstract: This paper proposes a modification of the soft-output Viterbi decoding algorithm (SOVA) which combines a convolutional code with Huffman coding. The idea is to extract the bit probability information from the Huffman coding and use it to compute the a priori source information, which can be exploited when the channel environment is bad. The suggested scheme does not require changes on the transmitter side. Compared with separate decoding systems, the gain in signal-to-noise ratio is about 0.5-1.0 dB with a limi...
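The "bit probability information from the Huffman coding" can be computed directly from a codebook: given the bits emitted so far in the current codeword, the probability that the next bit is 0 is the total probability of the codewords extending the prefix with 0, normalised over both branches. A sketch with a hypothetical codebook (not from the paper):

```python
# Hypothetical Huffman codebook and symbol probabilities for illustration.
codebook = {"a": "0", "b": "10", "c": "110", "d": "111"}
probs    = {"a": 0.50, "b": 0.25, "c": 0.15, "d": 0.10}

def bit_zero_prob(prefix):
    """P(next coded bit = 0 | bits already emitted in this codeword),
    obtained by weighing the codewords consistent with each branch."""
    p0 = sum(p for s, p in probs.items() if codebook[s].startswith(prefix + "0"))
    p1 = sum(p for s, p in probs.items() if codebook[s].startswith(prefix + "1"))
    return p0 / (p0 + p1)

print(bit_zero_prob(""))    # 0.5: the first bit is 0 only for symbol 'a'
print(bit_zero_prob("11"))  # 0.6: after '11', a 0 completes 'c' (0.15 vs 0.10)
```

These per-bit priors are what a modified SOVA can add to its path metrics without any change at the transmitter.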
Abstract: This paper presents a description and performance evaluation of a new bit-level, lossless, adaptive, and asymmetric data compression scheme based on the adaptive character wordlength (ACW(n)) algorithm. The proposed scheme enhances the compression ratio of the ACW(n) algorithm by dividing the binary sequence into a number of subsequences (s), each satisfying the condition that the number of decimal values (d) of the n-bit characters is equal to or less than 256. The new scheme is therefore referred to as ACW(n, s), where n is the adaptive character wordlength and s is the number of subsequences. The new scheme was used to compress a number of text files from standard corpora. The obtained results demonstrate that the ACW(n, s) scheme achieves a higher compression ratio than many widely used compression algorithms and performs competitively against state-of-the-art compression tools.
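The partitioning condition (d ≤ 256 distinct n-bit values per subsequence) can be sketched as a greedy cut of the character stream; this is an illustrative reading of the condition, not the paper's exact ACW(n, s) procedure:

```python
def split_subsequences(bits, n, max_distinct=256):
    """Greedily cut a stream of n-bit characters into subsequences, each
    containing at most `max_distinct` distinct values -- a sketch of the
    ACW(n, s) partitioning condition (d <= 256)."""
    chars = [bits[i:i + n] for i in range(0, len(bits) - len(bits) % n, n)]
    subsequences, current, seen = [], [], set()
    for ch in chars:
        if ch not in seen and len(seen) == max_distinct:
            subsequences.append(current)      # start a new subsequence
            current, seen = [], set()
        seen.add(ch)
        current.append(ch)
    if current:
        subsequences.append(current)
    return subsequences

# With max_distinct=2, a third distinct 2-bit value forces a cut.
parts = split_subsequences("0001000111", n=2, max_distinct=2)
print(parts)  # [['00', '01', '00', '01'], ['11']]
```

Keeping d ≤ 256 means every character in a subsequence can be re-indexed into a single byte, which is what makes the byte-oriented back end applicable regardless of n.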
Funding: This work was supported in part by the Natural Science Foundation of Hunan Province (No. 2020JJ4141), author X.X., http://kjt.hunan.gov.cn/, and in part by the Postgraduate Excellent Teaching Team Project of Hunan Province under Grant No. ZJWKT202204, author J.Q., http://zfsg.gd.gov.cn/xxfb/ywsd/index.html.
Abstract: Recently, reversible data hiding in encrypted images (RDHEI) based on pixel prediction has been a hot topic. However, existing schemes still employ pixel predictors that ignore pixel changes in the diagonal direction during prediction, and their pixel labeling schemes are inflexible. To solve these problems, this paper proposes reversible data hiding in encrypted images based on adaptive prediction and labeling. First, we design an adaptive gradient predictor (AGP), which uses eight adjacent pixels and combines four scanning directions (horizontal, vertical, diagonal, and anti-diagonal) for prediction. AGP adaptively adjusts the weights of the linear prediction model according to the edge attributes of the pixel, which improves the predictor's performance on complex images. At the same time, we adopt an adaptive Huffman coding labeling scheme, which generates Huffman codes for labeling adapted to each image, effectively improving the scheme's embedding performance across a dataset. The experimental results show that the algorithm achieves a higher embedding rate: 4.2102 bpp on the test image Jetplane and an average of 3.8625 bpp on the image dataset Bossbase.
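To see what edge-aware pixel prediction means in practice, here is the classic median edge detector (MED) from JPEG-LS, shown purely as a simple stand-in: the paper's AGP uses eight neighbours and adaptive weights, while MED uses only three, but both lean toward the neighbour across the weaker edge:

```python
def med_predict(w, n, nw):
    """Median edge detector (MED) predictor (JPEG-LS): predict the
    current pixel from its west, north, and north-west neighbours."""
    if nw >= max(w, n):
        return min(w, n)   # strong edge: take the neighbour across it
    if nw <= min(w, n):
        return max(w, n)
    return w + n - nw      # smooth region: planar (gradient) prediction

print(med_predict(100, 50, 100))  # vertical edge above -> predicts 50
print(med_predict(10, 12, 11))    # smooth region -> 10 + 12 - 11 = 11
```

Small prediction errors concentrate the label distribution, which is exactly what makes the per-image adaptive Huffman labeling pay off in embedding capacity.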
Funding: Sponsored by the National Natural Science Foundation of China (Grant Nos. 61834005, 61772417, 61802304, 61602377, and 61634004), the Shaanxi Province Coordination Innovation Project of Science and Technology (Grant No. 2016KTZDGY02-04-02), the Shaanxi Provincial Key R&D Plan (Grant No. 2017GY-060), and the Shaanxi International Science and Technology Cooperation Program (Grant No. 2018KW-006).
Abstract: The Rate-Distortion Optimization (RDO) algorithm in High Efficiency Video Coding (HEVC) involves many iterations and a large number of calculations. To decrease the calculation time and meet the requirement of fast switching between RDO algorithms of different scales, an RDO dynamic reconfigurable structure is proposed. First, the Quantization Parameter (QP) and bit-rate values are loaded through an H-tree Configurable Network (HCN), and the execution status of the array is monitored in real time. When a switching request for the RDO algorithm is detected, the corresponding configuration information is delivered. This self-reconfiguration method improves the flexibility and utilization of the hardware. Experimental results show that while the control bit width increased by only 31.25%, the designed configuration network increased the number of controllable processing units by 32 times, and the execution cycle was 50% lower than that of designs of the same type. Compared with a previous RDO implementation, the RDO algorithm implemented on the reconfigurable array based on the configuration network achieved an average operating-frequency increase of 12.5% and an area reduction of 56.4%.
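The quantity an RDO stage evaluates for each candidate is the Lagrangian cost J = D + λ·R, where λ is derived from the QP loaded through the configuration network. A toy sketch of the mode decision (the distortion/rate numbers are made up, and HEVC's actual λ derivation is not shown):

```python
def best_mode(candidates, lam):
    """Pick the candidate minimising the rate-distortion cost
    J = D + lambda * R (toy illustration of the RDO decision)."""
    return min(candidates, key=lambda c: c["D"] + lam * c["R"])

modes = [
    {"name": "intra", "D": 120.0, "R": 40},  # low distortion, many bits
    {"name": "inter", "D": 200.0, "R": 10},  # higher distortion, few bits
]
print(best_mode(modes, lam=1.0)["name"])   # 160 < 210 -> "intra"
print(best_mode(modes, lam=20.0)["name"])  # 920 > 400 -> "inter"
```

Because λ changes with QP and bit-rate target, a hardware RDO array must be reconfigured quickly when these parameters switch, which is the motivation for the HCN-based structure above.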
Abstract: In-network data aggregation is severely affected by information-in-transit attacks. This is an important problem, since wireless sensor networks (WSNs) are highly vulnerable to the node compromises that enable such attacks. As a result, false sub-aggregate values contributed by compromised nodes introduce large errors into the aggregate computed at the base station, and falsified event messages forwarded through intermediate nodes waste their limited energy as well. Since wireless sensor nodes are battery operated, they have low computational power and energy; algorithms designed for them should therefore use little computation and enhance security so as to extend the network lifetime. This article presents a data compression algorithm based on the Vernam cipher and the Huffman source coding scheme, intended to enhance the security and lifetime of energy-constrained wireless sensor nodes. The scheme is evaluated on several processor-based sensor-node implementations, and the results are compared against other existing schemes. In particular, we present a secure, lightweight algorithm for wireless sensor nodes that consumes little energy, with which a considerable entropy improvement is achieved.
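The Vernam cipher itself is just a byte-wise XOR with a key stream as long as the payload, which is why it suits low-power nodes: encryption and decryption are the same cheap operation. A minimal sketch (the payload bytes stand in for Huffman-compressed sensor data; key management is outside this snippet):

```python
import secrets

def vernam(data: bytes, key: bytes) -> bytes:
    """Vernam (one-time-pad style) cipher: byte-wise XOR with the key
    stream. XOR is its own inverse, so this both encrypts and decrypts."""
    assert len(key) >= len(data), "key stream must cover the payload"
    return bytes(d ^ k for d, k in zip(data, key))

# Hypothetical compressed payload standing in for Huffman-coded readings.
payload = b"\x5a\x13\xc7"
key = secrets.token_bytes(len(payload))
cipher = vernam(payload, key)
print(vernam(cipher, key) == payload)  # True: the round trip restores it
```

Compressing before encryption also raises the entropy per transmitted byte, which is the combined effect the article measures.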
Funding: Supported in part by the National Natural Science Foundation of China (No. 52077089).
Abstract: In the compression of massive compound power quality disturbance (PQD) signals in active distribution networks, the compression ratio (CR) and reconstruction error (RE) act as a pair of contradictory indicators, and traditional compression algorithms have difficulty simultaneously achieving a high CR and a low RE. To improve the CR and reduce the RE, a hybrid compression method is proposed that combines a strong tracking Kalman filter (STKF), sparse decomposition, Huffman coding, and run-length coding. The method first uses a sparse decomposition algorithm based on a joint dictionary to separate the transient component (TC) and the steady-state component (SSC) of the PQD. The TC is then compressed by wavelet analysis followed by Huffman and run-length coding. For the SSC, only values greater than a threshold are retained, which completes the compression. In addition, the wavelet threshold depends on the fading factor of the STKF so as to obtain a high CR. Experimental results on real-life signals measured by fault recorders in a dynamic simulation laboratory show that the CR of the proposed method reaches as high as 50 while the RE is approximately 1.6%, both better than those of competing methods. These results also demonstrate the method's immunity to Gaussian noise and to variations in sampling frequency.
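The final run-length stage exploits the fact that thresholded wavelet coefficients are mostly zero. A small sketch of run-length encoding and its inverse (the coefficient values are made up; the paper additionally Huffman-codes the resulting pairs):

```python
def rle_encode(values):
    """Run-length encode a sequence as [value, run] pairs, the stage
    applied here to sparse, thresholded wavelet coefficients."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return runs

def rle_decode(runs):
    """Exact inverse: expand each [value, run] pair."""
    return [v for v, n in runs for _ in range(n)]

coeffs = [0, 0, 0, 7, 0, 0, -3, 0]
runs = rle_encode(coeffs)
print(runs)                        # [[0, 3], [7, 1], [0, 2], [-3, 1], [0, 1]]
print(rle_decode(runs) == coeffs)  # True: the stage itself is lossless
```

All of the RE in the hybrid scheme therefore comes from the thresholding, while the Huffman/run-length back end contributes only to the CR.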