This study explores the application of single-photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error-correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) under single-channel intensity modulation and polarization multiplexing. Furthermore, it compares the error-correction performance of low-density parity-check (LDPC) and Reed-Solomon (RS) codes. The effects of the unscattered-photon ratio and the depolarization ratio on BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization-multiplexed OOK modulation and 4.37 Mbps with polarization-multiplexed 2-PPM modulation using LDPC error correction.
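As a hedged sketch (not the authors' implementation), the contrast between the two modulation formats compared above can be illustrated as follows: OOK sends one chip per bit, while 2-PPM encodes each bit as the position of a single pulse in a two-slot frame, trading half the raw chip rate for a constant pulse count per bit.

```python
def ook_modulate(bits):
    # OOK: one chip per bit; pulse present for 1, absent for 0
    return list(bits)

def ppm2_modulate(bits):
    # 2-PPM: two chips per bit; a pulse in slot 0 encodes 0, slot 1 encodes 1
    chips = []
    for b in bits:
        chips += [1, 0] if b == 0 else [0, 1]
    return chips

def ppm2_demodulate(chips):
    # Decide each bit by comparing the counts in the two slots of its frame
    return [0 if chips[i] >= chips[i + 1] else 1
            for i in range(0, len(chips), 2)]

bits = [1, 0, 1, 1, 0]
assert ppm2_demodulate(ppm2_modulate(bits)) == bits
```

With ideal (noiseless) chips the round trip is exact; with a photon-counting receiver the slot comparison in `ppm2_demodulate` becomes a comparison of detected counts.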
Semantic communication (SemCom) aims to achieve high-fidelity information delivery at low communication cost by guaranteeing only semantic accuracy. Nevertheless, semantic communication still suffers from unexpected channel volatility, so a re-transmission mechanism (e.g., hybrid automatic repeat request [HARQ]) becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can fully utilize the information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. There is therefore a strong incentive to rethink the CRC mechanism so as to reap the benefits of both SemCom and HARQ more effectively. In this paper, built on top of Swin Transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, unlike a conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which extracts the inner topological and geometric information of images to capture semantic information and determine the necessity of re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited bandwidth, and manifest the superiority of the TDA-based error detection method in image transmission.
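For contrast with the TDA-based check described above, the conventional bit-level CRC baseline can be sketched as follows (an illustrative stand-in, not the paper's method; zlib's CRC-32 plays the role of whatever CRC the link layer uses). A single flipped bit forces a re-transmission regardless of its semantic importance:

```python
import zlib

def crc_requests_retransmission(sent: bytes, received: bytes) -> bool:
    # Bit-level check: any mismatch fails, however semantically minor
    return zlib.crc32(sent) != zlib.crc32(received)

sent = b"semantic image packet"
received = bytes([sent[0] ^ 0x01]) + sent[1:]  # one flipped bit

assert not crc_requests_retransmission(sent, sent)
assert crc_requests_retransmission(sent, received)
```

A semantic check would instead ask whether `received` still decodes to a perceptually acceptable image, which is exactly the gap the TDA-based detector targets.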
A joint signature, encryption, and error-correction public-key cryptosystem is presented, based on an NP-complete problem: the decoding problem of general linear codes in algebraic coding theory.
This work investigates the performance of various forward error correction codes with which a MIMO-OFDM system is deployed. To ensure a fair investigation, the performance of four modulations, namely binary phase shift keying (BPSK), quadrature phase shift keying (QPSK), 16-ary quadrature amplitude modulation (QAM-16), and QAM-64, with four error correction codes (convolutional code (CC), Reed-Solomon code (RSC)+CC, low-density parity check (LDPC)+CC, and Turbo+CC) is studied under three channel models (additive white Gaussian noise (AWGN), Rayleigh, and Rician) and three antenna configurations (2×2, 2×4, 4×4). The bit error rate (BER) and the peak signal-to-noise ratio (PSNR) are taken as the performance measures. Binary data and color image data are transmitted, and graphs are plotted for the various modulations with different channels and error correction codes. Analysis of the performance measures confirms that the Turbo+CC code in the 4×4 configuration exhibits the best performance.
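As a point of reference for such BER comparisons, the uncoded single-antenna AWGN case has a closed form; this small sketch (my own illustration, independent of the MIMO-OFDM system above) computes the textbook BPSK bit error rate Pb = Q(sqrt(2 Eb/N0)):

```python
import math

def q_func(x: float) -> float:
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_bpsk_awgn(ebn0_db: float) -> float:
    # Uncoded BPSK (and Gray-mapped QPSK) over AWGN: Pb = Q(sqrt(2*Eb/N0))
    ebn0 = 10 ** (ebn0_db / 10)
    return q_func(math.sqrt(2 * ebn0))

for snr_db in (0, 4, 8):
    print(f"Eb/N0 = {snr_db} dB -> BER = {ber_bpsk_awgn(snr_db):.2e}")
```

Coded and multi-antenna curves, such as those studied above, are then read as gains relative to this baseline.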
In this paper, we present two new algorithms in residue number systems for scaling and error correction. The first algorithm, the Cyclic Property of Residue-Digit Difference (CPRDD), is used to speed up residue multiple-error correction thanks to its parallel processing. The second, called the Target Race Distance (TRD), is used to speed up residue scaling. Both algorithms work without the Mixed Radix Conversion (MRC) or Chinese Remainder Theorem (CRT) techniques, which are time consuming and require hardware complexity. Furthermore, residue scaling can be performed in parallel for any combination of moduli-set members without using lookup tables.
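The carry-free parallelism that makes residue number systems attractive can be seen in a minimal sketch (the moduli set is my own illustrative choice; the CPRDD and TRD algorithms themselves are not reproduced here). Each modulus channel computes independently, which is what the paper exploits for parallel error correction and scaling:

```python
MODULI = (7, 11, 13)  # pairwise coprime; dynamic range 7*11*13 = 1001

def to_rns(x):
    # Encode an integer as its residues modulo each channel
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # Digit-wise, carry-free addition: channels work in parallel
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def rns_mul(a, b):
    # Digit-wise multiplication, likewise with no inter-channel carries
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

# Verify against ordinary integer arithmetic within the dynamic range
x, y = 123, 456
assert rns_add(to_rns(x), to_rns(y)) == to_rns((x + y) % 1001)
assert rns_mul(to_rns(x), to_rns(y)) == to_rns((x * y) % 1001)
```

Converting back to an integer is the expensive step (classically via MRC or CRT), which is precisely the cost the proposed algorithms avoid.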
This paper demonstrates how channel coding can improve the robustness of spatial image watermarks against signal distortion caused by lossy data compression, such as the JPEG scheme, by taking advantage of the properties of the Gray code. Two error-correction coding (ECC) schemes are used: one scheme, referred to as the vertical ECC (VECC), encodes information bits in a pixel by error-correction coding, where the Gray code is used to improve performance. The other scheme, referred to as the horizontal ECC (HECC), encodes information bits in an image plane. In watermarking, HECC generates a codeword representing the watermark bits, and each bit of the codeword is encoded by VECC. Simple single-error-correcting block codes are used in VECC and HECC. Several experiments with these schemes were conducted on test images. The results demonstrate that the error-correcting performance of HECC depends directly on that of VECC, and accordingly, HECC enhances the capability of VECC. Consequently, HECC with appropriate codes can achieve stronger robustness to JPEG-caused distortions than non-channel-coding watermarking schemes.
In this paper, error-correction coding (ECC) in Gray codes is considered, and its performance in protecting spatial image watermarks against lossy data compression is demonstrated. For this purpose, the differences between the bit patterns of two Gray codewords are analyzed in detail. On the basis of these properties, a method is developed for encoding watermark bits in the Gray codewords that represent signal levels using a single-error-correcting (SEC) code, referred to here as the Gray-ECC method. The two codewords of the SEC code corresponding to the respective watermark bits are chosen so as to minimize the expected distortion caused by watermark embedding. Stochastic analyses show that the error-correcting capacity of the Gray-ECC method is superior to that of ECC in natural binary codes for changes in signal codewords. Experiments with the Gray-ECC method were conducted on 8-bit monochrome images to evaluate both the features of the watermarked images and the robustness against image distortion resulting from the JPEG DCT-baseline coding scheme. The results demonstrate that, compared with a conventional averaging-based method, the Gray-ECC method yields watermarked images with less signal distortion and also makes the watermark comparably robust to lossy data compression.
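Both Gray-code watermarking papers above rely on the defining property of the Gray code: adjacent signal levels differ in exactly one bit, so a small change in pixel value perturbs few codeword bits. A minimal sketch of the standard conversions and a check of the property:

```python
def bin_to_gray(n: int) -> int:
    # Reflected binary Gray code of n
    return n ^ (n >> 1)

def gray_to_bin(g: int) -> int:
    # Inverse conversion by cumulative XOR of right shifts
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent 8-bit signal levels differ in exactly one bit under Gray coding
for level in range(255):
    diff = bin_to_gray(level) ^ bin_to_gray(level + 1)
    assert bin(diff).count("1") == 1

assert all(gray_to_bin(bin_to_gray(n)) == n for n in range(256))
```

This single-bit-change property is what lets an SEC code over Gray codewords absorb the small level shifts that JPEG quantization introduces.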
Quantum error correction technology is an important solution to the noise interference generated during the operation of quantum computers. To find the best syndrome of the stabilizer code in quantum error correction, we need a fast decoder that operates close to the optimal threshold. In this work, we build a convolutional neural network (CNN) decoder to correct errors in the toric code, based on a systematic study of machine learning. We analyze and optimize various conditions that affect the CNN, and use the ResNet network architecture to reduce the running time by 30%-40%, finally designing an optimized algorithm for the CNN decoder. In this way, the threshold accuracy of the neural network decoder reaches 10.8%, closer to the optimal threshold of about 11%. This slightly improves on the previously reported thresholds of 8.9%-10.3%, and no verification of the underlying noise is required.
In the Wyner-Ziv (WZ) video coding paradigm, a virtual correlation channel is assumed between the quantized source and the side information (SI) at the decoder, and channel coding is applied to achieve compression. In this paper, errors caused by the virtual correlation channel are addressed and an error concealment approach is proposed for pixel-based WZ video coding. In the approach, errors after decoding are classified into two types. Type 1 errors are caused by residual bit errors after channel decoding, while type 2 errors are due to low SI quality in part of a frame, which causes the SI not to lie within the quantization bin of a decoded quantized pixel value. Two separate strategies are designed to detect and conceal the two types of errors. Simulations are carried out and results are presented to demonstrate the effectiveness of the proposed approach.
Automatically correcting students' code errors using deep learning is an effective way to reduce the burden on teachers and to enhance students' learning. However, code errors vary greatly, and the suitability of fixing techniques may differ across error types. How to choose appropriate methods to fix different types of errors is still an unsolved problem. To this end, this paper first classifies the code errors made by novice Java programmers based on a Delphi analysis, and compares the effectiveness of three deep learning models (CuBERT, GraphCodeBERT, and GGNN) at fixing different types of errors. The results indicate that the three models differ significantly in their accuracy on different error types, while the BERT-based error correction model shows better potential for correcting beginners' code.
A new Chien search method for shortened Reed-Solomon (RS) codes is proposed and, based on it, a versatile RS decoder for correcting both errors and erasures is designed. Compared with a traditional RS decoder, the weighted coefficient of the Chien search method is calculated sequentially through the decoder's three pipelined stages, and therefore the computation of the errata locator polynomial and errata evaluator polynomial needs to be modified. The versatile RS decoder with minimum distance 21 has been synthesized for the Xilinx Virtex-II series field programmable gate array (FPGA) xc2v1000-5 and is used in a concatenated coding system for satellite communication. Results show that the maximum data processing rate can reach 1.3 Gbit/s.
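For illustration, the essence of a (non-pipelined, software) Chien search is to evaluate the error-locator polynomial at every nonzero field element; a root at α^(-j) marks an error at position j. This sketch works over the small field GF(2^4) rather than the GF(2^8) typical of satellite RS codes, and does not reproduce the paper's weighted-coefficient variant:

```python
# GF(2^4) with primitive polynomial x^4 + x + 1: exp/log tables
EXP = [0] * 30
LOG = [0] * 16
v = 1
for i in range(15):
    EXP[i] = v
    LOG[v] = i
    v <<= 1
    if v & 0x10:
        v ^= 0x13
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    # Multiplication via log/antilog tables; alpha^15 = 1
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def chien_search(sigma):
    # Evaluate sigma at every nonzero element alpha^i;
    # sigma(alpha^{-j}) == 0 marks an error at position j.
    roots = []
    for i in range(15):
        x, acc, xp = EXP[i], 0, 1
        for coeff in sigma:
            acc ^= gf_mul(coeff, xp)
            xp = gf_mul(xp, x)
        if acc == 0:
            roots.append((15 - i) % 15)
    return sorted(roots)

# Locator for errors at positions 3 and 7:
# sigma(x) = (1 + alpha^3 x)(1 + alpha^7 x)
a3, a7 = EXP[3], EXP[7]
sigma = [1, a3 ^ a7, gf_mul(a3, a7)]
print(chien_search(sigma))  # expected [3, 7]
```

A hardware Chien search evaluates all candidate positions in parallel per clock; the sequential weighted-coefficient computation is what the proposed pipelined decoder changes.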
Quantum error correction technology is an important method for eliminating errors during the operation of quantum computers. To address the influence of errors on physical qubits, we propose an approximate error correction scheme that performs dimension-mapping operations on surface codes. This scheme exploits the topological properties of error correction codes to map the surface code to three dimensions. Compared with previous error correction schemes, the present three-dimensional surface code exhibits good scalability due to its higher redundancy and more efficient error correction capability. By reducing the number of ancilla qubits required for error correction, the approach saves measurement space and reduces resource consumption. To improve decoding efficiency and handle the correlation between the surface-code stabilizers and the 3D space after dimension mapping, we employ a reinforcement learning (RL) decoder based on deep Q-learning, which identifies the optimal syndrome faster and achieves better thresholds through conditional optimization. Compared with minimum-weight perfect matching decoding, the threshold of the RL-trained model reaches 0.78%, which is 56% higher and enables large-scale fault-tolerant quantum computation.
A fault-tolerant error-correction (FTEC) circuit is the foundation for achieving reliable quantum computation and remote communication. However, designing a fault-tolerant error correction scheme with solid error-correction ability and low overhead remains a significant challenge. In this paper, a low-overhead fault-tolerant error correction scheme is proposed for quantum communication systems. Firstly, syndrome ancillas are prepared in Bell states to detect errors caused by channel noise. We propose a detection approach that shortens the propagation path of quantum gate faults and reduces circuit depth by splitting the stabilizer generator into X-type and Z-type parts. Additionally, the syndrome extraction circuit is equipped with two flag qubits to detect quantum gate faults, which may also introduce errors into the code block during error detection. Finally, analytical results demonstrate the fault-tolerant performance of the proposed FTEC scheme with lower overhead in ancillary qubits and circuit depth.
In this work, the homomorphism of classic linear block codes in linear network coding over the binary field and its extensions is studied. It is proved that a classic linear error-control block code is a homomorphic network error-control code in network coding. That is, if the source packets at the source node of a linear network code are precoded using a linear block code, then every packet flowing in the network satisfies the same constraints as the source packets. As a consequence, error detection and correction can be performed at every intermediate node of the multicast flow, rather than only at the destination node as in the conventional approach. This helps identify and correct errors promptly at the error-corrupted link and saves the cost of forwarding corrupted data to the destination when the intermediate nodes are unaware of the errors. In addition, three examples demonstrate that homomorphic linear codes can be combined with homomorphic signatures, the McEliece public-key cryptosystem, and unequal error protection, respectively, and thus have great potential for practical use.
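The homomorphism result rests on a basic fact: any GF(2) linear combination (XOR) of codewords of a linear block code is itself a codeword, so mixing precoded packets at intermediate nodes preserves the code's parity constraints. A minimal sketch with the (7,4) Hamming code (my choice of example code, not necessarily the paper's):

```python
import itertools

# Generator and parity-check matrices of the (7,4) Hamming code over GF(2)
G = [[1,0,0,0,1,1,0],
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]
H = [[1,1,0,1,1,0,0],
     [1,0,1,1,0,1,0],
     [0,1,1,1,0,0,1]]

def encode(u):
    # Systematic encoding: codeword = u * G over GF(2)
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def syndrome(c):
    # Zero syndrome <=> c satisfies the code's parity constraints
    return [sum(H[r][j] * c[j] for j in range(7)) % 2 for r in range(3)]

# Every XOR of codewords is again a codeword, so a network node that
# mixes precoded packets still emits packets the code can check.
codewords = [encode(list(u)) for u in itertools.product([0, 1], repeat=4)]
for a in codewords:
    for b in codewords:
        mix = [x ^ y for x, y in zip(a, b)]
        assert syndrome(mix) == [0, 0, 0]
```

Over extension fields the same argument holds with XOR replaced by the field's linear combinations, which is the setting of the paper.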
A definition of good codes for error detection is given. It is proved that an (n, k) linear block code over GF(q) is a good code for error detection if and only if its dual code is as well. A series of new results about good codes for error detection is derived. New lower bounds on the undetected error probability are obtained, which depend only on n and k, and not on the weight structure of the codes.
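The quantity being bounded is the undetected error probability on a binary symmetric channel with crossover probability ε, P_ud(ε) = Σ_{i≥1} A_i ε^i (1−ε)^(n−i), where A_i is the code's weight distribution. A sketch using the (7,4) Hamming code, whose weight distribution is known exactly (my illustrative choice, not from the paper):

```python
def undetected_error_prob(weights, n, eps):
    # P_ud(eps) = sum_{i>=1} A_i * eps^i * (1 - eps)^(n - i):
    # an error goes undetected iff the error pattern is a nonzero codeword
    return sum(a * eps**i * (1 - eps)**(n - i)
               for i, a in enumerate(weights) if i > 0)

# Weight distribution A_0..A_7 of the (7,4) Hamming code
A = [1, 0, 0, 7, 7, 0, 0, 1]

print(undetected_error_prob(A, 7, 0.01))  # ~6.8e-6
# Sanity check at eps = 0.5: all 2^n patterns equally likely,
# so P_ud = (2^k - 1) / 2^n = 15/128
assert abs(undetected_error_prob(A, 7, 0.5) - 15 / 128) < 1e-12
```

Bounds that avoid the weight structure, as in the result above, matter because A_i is hard to compute for long codes.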
Single event upsets (SEUs) induced by heavy ions were observed in 65 nm SRAMs to quantitatively evaluate the applicability and effectiveness of a single-bit error correcting code (ECC) based on the Hamming code. The results show that the ECC improved performance dramatically, with the SEU cross sections of SRAMs with ECC on the order of 10^(-11) cm^2/bit, two orders of magnitude lower than without ECC (on the order of 10^(-9) cm^2/bit). Cases where the ECC module was ineffective, involving 1-, 2-, and 3-bit errors in a single word (not multiple-bit upsets), were also detected. The ECC modules in SRAMs using the (12,8) Hamming code fail when a 2-bit upset accumulates in one codeword. Finally, the probabilities of failure modes involving 1-, 2-, and 3-bit errors were calculated as 39.39%, 37.88%, and 22.73%, respectively, which agree well with the experimental results.
This paper proves the statement that a good linear block encoder is in fact a good local-random sequence generator. Furthermore, this statement reveals a deep relationship between error-correcting coding theory and modern cryptography.
This paper proposes a steady-state error correction (SSEC) method for eliminating measurement errors. The method is based on detecting the error signal E(s) and the output C(s), from which an expected output R(s) is generated. In comparison with conventional solutions, which detect the expected output R(s) and the output C(s) to obtain the error signal E(s), measurement errors are eliminated even when the error is at a significant level. Moreover, individually debugging each member of a multi-objective system by regulating the coefficient K makes it possible to optimize the open-loop gain. Therefore, this simple method can be applied to weakly coupled, multi-objective systems, which are usually handled by complex controllers. The principle of eliminating measurement errors is derived analytically, and the advantages over conventional solutions are described. Based on the SSEC analysis, an application of the method to an active power filter (APF) is investigated, and the effectiveness and viability of the scheme are demonstrated through simulation and experimental verification.
Decoding algorithms for the correction of errors of arbitrary Mannheim weight have been discussed for lattice constellations and codes from quadratic number fields. Along these lines, decoding algorithms for the correction of errors in cyclic codes (C) of length n = (p−1)/2 over quaternion integers, of Quaternion Mannheim (QM) weight one up to two coordinates, have been considered. In continuation, the case of cyclic codes of lengths n = (p−1)/2 and 2n−1 = p−2 has been studied to improve error correction efficiency. In this study, we present the decoding of cyclic codes of length n = ϕ(p) = p−1 and length 2n−1 = 2ϕ(p)−1 = 2p−3 (where p is a prime and ϕ is the Euler phi function) over Hamilton quaternion integers of Quaternion Mannheim weight for the correction of errors. Furthermore, the error correction capability and code-rate tradeoff of these codes are discussed. Thus, an increase in the length of the cyclic code is achieved, along with a better code rate and adequate error correction capability.
基金supported in part by the National Natural Science Foundation of China(Nos.62071441 and 61701464)in part by the Fundamental Research Funds for the Central Universities(No.202151006).
文摘This study explores the application of single photon detection(SPD)technology in underwater wireless optical communication(UWOC)and analyzes the influence of different modulation modes and error correction coding types on communication performance.The study investigates the impact of on-off keying(OOK)and 2-pulse-position modulation(2-PPM)on the bit error rate(BER)in single-channel intensity and polarization multiplexing.Furthermore,it compares the error correction performance of low-density parity check(LDPC)and Reed-Solomon(RS)codes across different error correction coding types.The effects of unscattered photon ratio and depolarization ratio on BER are also verified.Finally,a UWOC system based on SPD is constructed,achieving 14.58 Mbps with polarization OOK multiplexing modulation and 4.37 Mbps with polarization 2-PPM multiplexing modulation using LDPC code error correction.
基金supported in part by the National Key Research and Development Program of China under Grant 2024YFE0200600in part by the National Natural Science Foundation of China under Grant 62071425+3 种基金in part by the Zhejiang Key Research and Development Plan under Grant 2022C01093in part by the Zhejiang Provincial Natural Science Foundation of China under Grant LR23F010005in part by the National Key Laboratory of Wireless Communications Foundation under Grant 2023KP01601in part by the Big Data and Intelligent Computing Key Lab of CQUPT under Grant BDIC-2023-B-001.
文摘Semantic communication(SemCom)aims to achieve high-fidelity information delivery under low communication consumption by only guaranteeing semantic accuracy.Nevertheless,semantic communication still suffers from unexpected channel volatility and thus developing a re-transmission mechanism(e.g.,hybrid automatic repeat request[HARQ])becomes indispensable.In that regard,instead of discarding previously transmitted information,the incremental knowledge-based HARQ(IK-HARQ)is deemed as a more effective mechanism that could sufficiently utilize the information semantics.However,considering the possible existence of semantic ambiguity in image transmission,a simple bit-level cyclic redundancy check(CRC)might compromise the performance of IK-HARQ.Therefore,there emerges a strong incentive to revolutionize the CRC mechanism,thus more effectively reaping the benefits of both SemCom and HARQ.In this paper,built on top of swin transformer-based joint source-channel coding(JSCC)and IK-HARQ,we propose a semantic image transmission framework SC-TDA-HARQ.In particular,different from the conventional CRC,we introduce a topological data analysis(TDA)-based error detection method,which capably digs out the inner topological and geometric information of images,to capture semantic information and determine the necessity for re-transmission.Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework,especially under the limited bandwidth condition,and manifest the superiority of TDA-based error detection method in image transmission.
基金Subject supported by the National Natural Science Fund of China
文摘A joint signature,encryption and error correction public-key cryptosystem is pre-sented based on an NP-completeness problem-the decoding problem of general linear codes inalgebraic coding theory,
文摘This work investigates the performance of various forward error correction codes, by which the MIMO-OFDM system is deployed. To ensure fair investigation, the performance of four modulations, namely, binary phase shift keying(BPSK), quadrature phase shift keying(QPSK), quadrature amplitude modulation(QAM)-16 and QAM-64 with four error correction codes(convolutional code(CC), Reed-Solomon code(RSC)+CC, low density parity check(LDPC)+CC, Turbo+CC) is studied under three channel models(additive white Guassian noise(AWGN), Rayleigh, Rician) and three different antenna configurations(2×2, 2×4, 4×4). The bit error rate(BER) and the peak signal to noise ratio(PSNR) are taken as the measures of performance. The binary data and the color image data are transmitted and the graphs are plotted for various modulations with different channels and error correction codes. Analysis on the performance measures confirm that the Turbo + CC code in 4×4 configurations exhibits better performance.
文摘In this paper, we present two new algorithms in residue number systems for scaling and error correction. The first algorithm is the Cyclic Property of Residue-Digit Difference (CPRDD). It is used to speed up the residue multiple error correction due to its parallel processes. The second is called the Target Race Distance (TRD). It is used to speed up residue scaling. Both of these two algorithms are used without the need for Mixed Radix Conversion (MRC) or Chinese Residue Theorem (CRT) techniques, which are time consuming and require hardware complexity. Furthermore, the residue scaling can be performed in parallel for any combination of moduli set members without using lookup tables.
文摘This paper demonstrates how channel coding can improve the robustness of spatial image watermarks against signal distortion caused by lossy data compression such as the JPEG scheme by taking advantage of the properties of Gray code. Two error-correction coding (ECC) schemes are used here: One scheme, referred to as the vertical ECC (VECC), is to encode information bits in a pixel by error-correction coding where the Gray code is used to improve the performance. The other scheme, referred to as the horizontal ECC (HECC), is to encode information bits in an image plane. In watermarking, HECC generates a codeword representing watermark bits, and each bit of the codeword is encoded by VECC. Simple single-error-correcting block codes are used in VECC and HECC. Several experiments of these schemes were conducted on test images. The result demonstrates that the error-correcting performance of HECC just depends on that of VECC, and accordingly, HECC enhances the capability of VECC. Consequently, HECC with appropriate codes can achieve stronger robustness to JPEG—caused distortions than non-channel-coding watermarking schemes.
文摘In this paper, error-correction coding (ECC) in Gray codes is considered and its performance in the protecting of spatial image watermarks against lossy data compression is demonstrated. For this purpose, the differences between bit patterns of two Gray codewords are analyzed in detail. On the basis of the properties, a method for encoding watermark bits in the Gray codewords that represent signal levels by a single-error-correcting (SEC) code is developed, which is referred to as the Gray-ECC method in this paper. The two codewords of the SEC code corresponding to respective watermark bits are determined so as to minimize the expected amount of distortion caused by the watermark embedding. The stochastic analyses show that an error-correcting capacity of the Gray-ECC method is superior to that of the ECC in natural binary codes for changes in signal codewords. Experiments of the Gray-ECC method were conducted on 8-bit monochrome images to evaluate both the features of watermarked images and the performance of robustness for image distortion resulting from the JPEG DCT-baseline coding scheme. The results demonstrate that, compared with a conventional averaging-based method, the Gray-ECC method yields watermarked images with less amount of signal distortion and also makes the watermark comparably robust for lossy data compression.
基金the National Natural Science Foundation of China(Grant Nos.11975132 and 61772295)the Natural Science Foundation of Shandong Province,China(Grant No.ZR2019YQ01)the Project of Shandong Province Higher Educational Science and Technology Program,China(Grant No.J18KZ012).
文摘Quantum error correction technology is an important solution to solve the noise interference generated during the operation of quantum computers.In order to find the best syndrome of the stabilizer code in quantum error correction,we need to find a fast and close to the optimal threshold decoder.In this work,we build a convolutional neural network(CNN)decoder to correct errors in the toric code based on the system research of machine learning.We analyze and optimize various conditions that affect CNN,and use the RestNet network architecture to reduce the running time.It is shortened by 30%-40%,and we finally design an optimized algorithm for CNN decoder.In this way,the threshold accuracy of the neural network decoder is made to reach 10.8%,which is closer to the optimal threshold of about 11%.The previous threshold of 8.9%-10.3%has been slightly improved,and there is no need to verify the basic noise.
基金Supported by the National Science and Technology Major Project of China(No.2018ZX10734401-004)
文摘In the Wyner-Ziv(WZ) video coding paradigm, a virtual correlation channel is assumed between the quantized source and the side information(SI) at the decoder, and channel coding is applied to achieve compression. In this paper, errors caused by the virtual correlation channel are addressed and an error concealment approach is proposed for pixel-based WZ video coding. In the approach, errors after decoding are classified into two types. Type 1 errors are caused by residual bit errors after channel decoding, while type 2 errors are due to low quality of SI in part of a frame which causes SI not lying within the quantization bin of a decoded quantized pixel value. Two separate strategies are respectively designed to detect and conceal the two types of errors. Simulations are carried out and results are presented to demonstrate the effectiveness of the proposed approach.
基金supported in part by the Education Department of Sichuan Province(Grant No.[2022]114).
文摘Automatically correcting students’code errors using deep learning is an effective way to reduce the burden of teachers and to enhance the effects of students’learning.However,code errors vary greatly,and the adaptability of fixing techniques may vary for different types of code errors.How to choose the appropriate methods to fix different types of errors is still an unsolved problem.To this end,this paper first classifies code errors by Java novice programmers based on Delphi analysis,and compares the effectiveness of different deep learning models(CuBERT,GraphCodeBERT and GGNN)fixing different types of errors.The results indicated that the 3 models differed significantly in their classification accuracy on different error codes,while the error correction model based on the Bert structure showed better code correction potential for beginners’codes.
Funding: Sponsored by the Ministerial Level Advanced Research Foundation (20304)
Abstract: A new Chien search method for shortened Reed-Solomon (RS) codes is proposed, and based on it a versatile RS decoder for correcting both errors and erasures is designed. Compared with a traditional RS decoder, the weighted coefficient of the Chien search is calculated sequentially through the decoder's three pipelined stages, and the computation of the errata locator polynomial and the errata evaluator polynomial is modified accordingly. The versatile RS decoder, with minimum distance 21, has been synthesized for the Xilinx Virtex-II series field programmable gate array (FPGA) xc2v1000-5 and is used in a concatenated coding system for satellite communication. Results show that the maximum data processing rate can reach 1.3 Gbit/s.
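For context, the core of any Chien search is to evaluate the error locator polynomial at successive field elements and read error positions off its roots. The sketch below is a plain software Chien search over GF(16), not the pipelined weighted-coefficient hardware variant the abstract describes; the field, primitive polynomial, and error positions are illustrative assumptions:

```python
# GF(16) antilog/log tables, primitive polynomial x^4 + x + 1 (0x13)
alog = [0] * 30
dlog = [0] * 16
x = 1
for i in range(15):
    alog[i] = x
    dlog[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13
for i in range(15, 30):
    alog[i] = alog[i - 15]

def gmul(a, b):
    return 0 if a == 0 or b == 0 else alog[dlog[a] + dlog[b]]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gmul(a, b)
    return r

# error locator for (assumed) error positions 3 and 7: L(x) = (1 + a^3 x)(1 + a^7 x)
err_pos = [3, 7]
lam = [1]
for j in err_pos:
    lam = poly_mul(lam, [1, alog[j]])

# Chien search: try every position j and keep those where L(a^-j) = 0
found = []
for j in range(15):
    xinv = alog[(15 - j) % 15]          # alpha^(-j)
    acc, xp = 0, 1
    for c in lam:
        acc ^= gmul(c, xp)
        xp = gmul(xp, xinv)
    if acc == 0:
        found.append(j)
assert found == err_pos
```

The hardware version in the abstract restructures exactly this evaluation loop so the coefficient updates flow through the decoder's pipeline stages.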
Funding: Project supported by the Natural Science Foundation of Shandong Province, China (Grant Nos. ZR2021MF049, ZR2022LLZ012, and ZR2021LLZ001).
Abstract: Quantum error correction is an important method for eliminating errors that occur while a quantum computer operates. To address the effect of errors on physical qubits, we propose an approximate error correction scheme that applies a dimension-mapping operation to surface codes. The scheme exploits the topological properties of error correction codes to map the surface code into three dimensions. Compared with previous error correction schemes, the resulting three-dimensional surface code exhibits good scalability thanks to its higher redundancy and more efficient error correction capability. By reducing the number of ancilla qubits required for error correction, the approach saves measurement space and reduces resource consumption. To improve decoding efficiency and handle the correlation between the surface-code stabilizers and the 3D space after dimension mapping, we employ a reinforcement learning (RL) decoder based on deep Q-learning, which identifies the optimal syndrome faster and achieves better thresholds through conditional optimization. Compared with minimum-weight perfect matching decoding, the threshold of the RL-trained model reaches 0.78%, a 56% improvement, enabling large-scale fault-tolerant quantum computation.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 61671087 and 61962009), the Fundamental Research Funds for the Central Universities, China (Grant No. 2019XD-A02), Huawei Technologies Co., Ltd. (Grant No. YBN2020085019), and the Open Foundation of Guizhou Provincial Key Laboratory of Public Big Data (Grant No. 2018BDKFJJ018).
Abstract: Fault-tolerant error-correction (FTEC) circuits are the foundation of reliable quantum computation and remote communication. However, designing a fault-tolerant error correction scheme that combines strong error-correction ability with low overhead remains a significant challenge. In this paper, a low-overhead fault-tolerant error correction scheme is proposed for quantum communication systems. First, syndrome ancillas are prepared in Bell states to detect errors caused by channel noise. We propose a detection approach that shortens the propagation path of quantum gate faults and reduces the circuit depth by splitting each stabilizer generator into X-type and Z-type parts. Additionally, the syndrome extraction circuit is equipped with two flag qubits to detect quantum gate faults, which may otherwise introduce errors into the code block during error detection. Finally, analytical results demonstrate the fault-tolerant performance of the proposed FTEC scheme with lower overhead in ancillary qubits and circuit depth.
Funding: Supported by the Natural Science Foundation of China (No. 61271258)
Abstract: In this work, the homomorphism of classic linear block codes in linear network coding is studied for the binary field and its extensions. It is proved that a classic linear error-control block code is a homomorphic network error-control code in network coding. That is, if the source packets at the source node of a linear network code are precoded with a linear block code, then every packet flowing in the network satisfies the same constraints as the source packets. As a consequence, error detection and correction can be performed at every intermediate node of the multicast flow, rather than only at the destination node as in the conventional approach. This helps identify and correct errors promptly at the corrupted link and saves the cost of forwarding error-corrupted data to the destination when intermediate nodes are unaware of the errors. In addition, three examples demonstrate that homomorphic linear codes can be combined with homomorphic signatures, the McEliece public-key cryptosystem, and unequal error protection, respectively, and thus have great potential for practical use.
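The homomorphism property can be checked concretely for a small code: if the source packets are precoded with a Hamming(7,4) code, any GF(2) linear combination formed inside the network still satisfies every parity check. A minimal sketch (the code choice and the packet contents are illustrative, not from the paper):

```python
# Hamming(7,4): data (d1..d4) followed by p1=d1^d2^d4, p2=d1^d3^d4, p3=d2^d3^d4
def encode(d):
    d1, d2, d3, d4 = d
    return [d1, d2, d3, d4, d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4]

H = [[1, 1, 0, 1, 1, 0, 0],    # parity-check matrix matching encode() above
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def syndrome(c):
    return [sum(hij & cj for hij, cj in zip(row, c)) % 2 for row in H]

# two (hypothetical) precoded source packets
a = encode([1, 0, 1, 1])
b = encode([0, 1, 1, 0])
# a network-coded mixture over GF(2): bitwise XOR of the two packets
mixed = [u ^ v for u, v in zip(a, b)]
# the mixture satisfies the same parity checks as the source packets ...
assert syndrome(mixed) == [0, 0, 0]
# ... because it is itself the codeword of the XORed payloads
assert mixed == encode([1, 1, 0, 1])
```

This is precisely what lets an intermediate node run the same syndrome check as the destination and catch a corrupted link early.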
Abstract: A definition of good codes for error detection is given. It is proved that an (n, k) linear block code over GF(q) is a good code for error detection if and only if its dual code is as well. A series of new results about good codes for error detection is derived. New lower bounds on the undetected error probability are obtained, which depend only on n and k and not on the weight structure of the codes.
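For a linear code, an error pattern escapes detection exactly when it equals a nonzero codeword, so the undetected error probability on a binary symmetric channel with crossover probability p is P_ud(p) = sum over i >= 1 of A_i p^i (1-p)^(n-i), where A_i is the weight distribution. A sketch using Hamming(7,4) as a stand-in example (the paper's bounds avoid needing A_i; this merely illustrates the quantity being bounded):

```python
from itertools import product

# weight enumerator of Hamming(7,4) by exhausting all 16 codewords
def encode(d):
    d1, d2, d3, d4 = d
    return (d1, d2, d3, d4, d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4)

n = 7
A = [0] * (n + 1)
for d in product((0, 1), repeat=4):
    A[sum(encode(d))] += 1
assert A == [1, 0, 0, 7, 7, 0, 0, 1]   # known distribution of Hamming(7,4)

# an error pattern is undetected iff it equals a nonzero codeword
def p_undetected(p):
    return sum(A[i] * p ** i * (1 - p) ** (n - i) for i in range(1, n + 1))

# at p = 0.01 the weight-3 codewords dominate: about 6.8e-6
assert 6.7e-6 < p_undetected(0.01) < 6.9e-6
```

Bounds that depend only on n and k are valuable exactly because computing A_i by enumeration, as done here, is infeasible for long codes.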
Funding: Supported by the National Natural Science Foundation of China (Nos. 11079045 and 11179003) and the Important Direction Project of the CAS Knowledge Innovation Program (No. KJCX2-YW-N27)
Abstract: Single event upsets (SEUs) induced by heavy ions were observed in 65 nm SRAMs to quantitatively evaluate the applicability and effectiveness of a single-bit error correcting code (ECC) using Hamming code. The results show that the ECC improved performance dramatically: the SEU cross sections of SRAMs with ECC were on the order of 10^(-11) cm^2/bit, two orders of magnitude lower than without ECC (on the order of 10^(-9) cm^2/bit). Failures of the ECC module, involving 1-, 2- and 3-bit errors within a single word (not multiple-bit upsets), were also detected. The ECC modules in SRAMs using the (12,8) Hamming code fail once two bit upsets accumulate in one codeword. Finally, the probabilities of failure modes involving 1-, 2- and 3-bit errors were calculated to be 39.39%, 37.88% and 22.73%, respectively, in good agreement with the experimental results.
Funding: Supported by the Trans-Century Training Program Foundation for Talents of the State Education Commission
Abstract: This paper proves that a good linear block encoder is in fact a good local-random sequence generator. Moreover, this result reveals a deep relationship between error-correcting coding theory and modern cryptography.
Funding: National Natural Science Foundation of China (No. 61273172)
Abstract: This paper proposes a steady-state error correction (SSEC) method for eliminating measurement errors. The method is based on detecting the error signal E(s) and the output C(s), from which the expected output R(s) is generated. Compared with conventional solutions, which detect the expected output R(s) and the output C(s) to obtain the error signal E(s), measurement errors are eliminated even when they are at a significant level. Moreover, individually tuning the coefficient K for each of the multiple objectives makes it possible to optimize the open-loop gain. This simple method can therefore be applied to weakly coupled, multi-objective systems that usually require complex controllers. The principle of eliminating measurement errors is derived analytically, and the advantages over conventional solutions are described. Based on the SSEC analysis, an application of the method to an active power filter (APF) is investigated, and the effectiveness and viability of the scheme are demonstrated through simulation and experimental verification.
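The benefit of detecting the error signal E(s) directly, rather than reconstructing it from a measured reference, can be seen in a toy scalar loop (a hypothetical integrating plant with gain K, not the APF system from the paper): a bias in the reference measurement shifts the steady state of the conventional loop but never enters the SSEC-style loop.

```python
# toy discrete-time loop: plant y[k+1] = y[k] + K*e[k], reference r, measurement bias
K, r, bias, steps = 0.5, 10.0, 0.8, 200

y = 0.0                                 # conventional: e reconstructed from a biased r
for _ in range(steps):
    e = (r + bias) - y                  # measurement error enters the error signal
    y += K * e
conventional = y                        # settles at r + bias

y = 0.0                                 # SSEC-style: the error signal is detected directly
for _ in range(steps):
    e = r - y                           # the bias never enters the loop
    y += K * e
ssec = y

assert abs(ssec - r) < 1e-9
assert abs(conventional - (r + bias)) < 1e-9
```

The conventional loop faithfully tracks the corrupted reference, so any bias in measuring R(s) appears one-for-one in the steady state; measuring E(s) itself removes that path.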
Funding: The authors extend their gratitude to the Deanship of Scientific Research at King Khalid University for funding this work through the research groups program under grant number R.G.P.1/85/42.
Abstract: The decoding algorithm for correcting errors of arbitrary Mannheim weight has been discussed for lattice constellations and codes from quadratic number fields. Along these lines, decoding algorithms for correcting errors of Quaternion Mannheim (QM) weight one in up to two coordinates have been considered for cyclic codes (C) of length n = (p-1)/2 over quaternion integers. In continuation, the case of cyclic codes of lengths n = (p-1)/2 and 2n-1 = p-2 has been studied to improve error correction efficiency. In this study, we present the decoding of cyclic codes of length n = φ(p) = p-1 and length 2n-1 = 2φ(p)-1 = 2p-3 (where p is a prime and φ is the Euler phi function) over Hamilton quaternion integers of Quaternion Mannheim weight for the correction of errors. Furthermore, the error correction capability and code-rate tradeoff of these codes are discussed. Thus, an increase in the length of the cyclic code is achieved, along with a better code rate and adequate error correction capability.