This study explores the application of single photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing. Furthermore, it compares the error correction performance of low-density parity check (LDPC) and Reed-Solomon (RS) codes across different error correction coding types. The effects of unscattered photon ratio and depolarization ratio on BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization OOK multiplexing modulation and 4.37 Mbps with polarization 2-PPM multiplexing modulation using LDPC code error correction.
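The rate gap between the OOK and 2-PPM results above (14.58 vs. 4.37 Mbps) follows partly from the symbol mapping: OOK spends one slot per bit, while 2-PPM spends two slots per bit and decides by comparing the two slot counts, which suits photon-counting receivers. A minimal sketch of the two mappings (illustrative only; the slot timing, multiplexing scheme and receiver model of the study are not reproduced here):

```python
def ook_modulate(bits):
    """On-off keying: one slot per bit; pulse present encodes a 1."""
    return list(bits)

def ppm2_modulate(bits):
    """2-PPM: two slots per bit; the pulse position encodes the bit."""
    out = []
    for b in bits:
        out.extend([1, 0] if b == 0 else [0, 1])
    return out

def ppm2_demodulate(slots):
    """Compare the two slot counts of each symbol to recover the bit."""
    return [0 if slots[i] >= slots[i + 1] else 1
            for i in range(0, len(slots), 2)]
```

Because 2-PPM halves the raw slot rate, a ratio near the reported 14.58/4.37 is plausible once coding overheads differ as well.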
A new Chien search method for shortened Reed-Solomon (RS) codes is proposed, and based on it, a versatile RS decoder for correcting both errors and erasures is designed. Compared with the traditional RS decoder, the weighted coefficient of the Chien search method is calculated sequentially through the three pipelined stages of the decoder, and the computation of the errata locator polynomial and the errata evaluator polynomial therefore needs to be modified. The versatile RS decoder with minimum distance 21 has been synthesized on the Xilinx Virtex-II series field programmable gate array (FPGA) xc2v1000-5 and is used in a concatenated coding system for satellite communication. Results show that the maximum data processing rate can be up to 1.3 Gbit/s.
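For readers unfamiliar with the Chien search this decoder builds on: it finds error locations by evaluating the error-locator polynomial Λ(x) at α^(−i) for every codeword position i and recording the roots. A small serial sketch over GF(2^4) (the paper's decoder is pipelined in hardware and works over a larger field; the field, primitive polynomial and loop structure here are illustrative assumptions):

```python
# GF(2^4) log/antilog tables, primitive polynomial x^4 + x + 1 (0b10011)
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0b10011
for i in range(15, 30):          # duplicate table to skip a modulo
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    """Multiply in GF(2^4) via the log/antilog tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def chien_search(locator, n=15):
    """Evaluate the error-locator polynomial at alpha^-i for each
    position i of an n-symbol codeword; a zero marks an error."""
    errs = []
    for i in range(n):
        xi = EXP[(15 - i) % 15]  # alpha^{-i}
        acc, xp = 0, 1
        for c in locator:        # locator[k] is the coefficient of x^k
            acc ^= gf_mul(c, xp)
            xp = gf_mul(xp, xi)
        if acc == 0:
            errs.append(i)
    return errs
```

A single error at position j gives Λ(x) = 1 + α^j x, so the search recovers j; two errors factor into two such terms.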
In this article, we study the ability of error-correcting quantum codes to increase the fidelity of quantum states throughout a quantum computation. We analyze arbitrary quantum codes that encode all qubits involved in the computation, and we study the evolution of n-qubit fidelity from the end of one application of the correcting circuit to the end of the next application. We assume that the correcting circuit does not introduce new errors, that it does not increase the execution time (i.e. its application takes zero seconds) and that quantum errors are isotropic. We show that the quantum code increases the fidelity of the states perturbed by quantum errors but that this improvement is not enough to justify the use of quantum codes. Namely, we prove that, taking into account that the time interval between the application of the two corrections is multiplied (at least) by the number of qubits n (due to the coding), the best option is not to use quantum codes, since the fidelity of the uncoded state over a time interval n times smaller is greater than that of the state resulting from the quantum code correction.
Automatically correcting students' code errors using deep learning is an effective way to reduce the burden on teachers and to enhance students' learning. However, code errors vary greatly, and the adaptability of fixing techniques may vary for different types of code errors. How to choose appropriate methods to fix different types of errors is still an unsolved problem. To this end, this paper first classifies code errors made by Java novice programmers based on Delphi analysis, and compares the effectiveness of different deep learning models (CuBERT, GraphCodeBERT and GGNN) in fixing different types of errors. The results indicate that the three models differ significantly in their classification accuracy on different error codes, while the error correction model based on the BERT structure shows better code correction potential for beginners' code.
The decoding algorithm for the correction of errors of arbitrary Mannheim weight has been discussed for lattice constellations and codes from quadratic number fields. Following these lines, decoding algorithms for the correction of errors of cyclic codes (C) of length n = (p − 1)/2 over quaternion integers of Quaternion Mannheim (QM) weight one up to two coordinates have been considered. In continuation, the case of cyclic codes of lengths n = (p − 1)/2 and 2n − 1 = p − 2 has been studied to improve the error correction efficiency. In this study, we present the decoding of cyclic codes of length n = ϕ(p) = p − 1 and length 2n − 1 = 2ϕ(p) − 1 = 2p − 3 (where p is a prime integer and ϕ is the Euler phi function) over Hamilton quaternion integers of Quaternion Mannheim weight for the correction of errors. Furthermore, the error correction capability and code rate tradeoff of these codes are also discussed. Thus, an increase in the length of the cyclic code is achieved along with a better code rate and an adequate error correction capability.
In this paper we present an efficient algorithm to decode linear block codes on binary channels. The main idea consists in using a vote procedure in order to elaborate artificial reliabilities of the binary received word and to present the obtained real vector r as input to a SIHO decoder (Soft In/Hard Out). The goal of the latter is to find the closest codeword to r in terms of the Euclidean distance. A comparison of the proposed algorithm over the AWGN channel with the Majority logic decoder, Berlekamp-Massey, Bit Flipping, Hartman-Rudolf algorithms and others shows that it is more efficient in terms of performance. The complexity of the proposed decoder depends on the weight of the error to decode, on the code structure and also on the SIHO decoder used.
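The SIHO stage described above searches for the codeword closest to the reliability vector r in Euclidean distance. For small codes that search can simply be exhaustive; a brute-force sketch with BPSK mapping (0 → +1, 1 → −1), using a Hamming(7,4) generator matrix as a stand-in (the paper's vote procedure for building r is not reproduced):

```python
from itertools import product

def codewords(gen):
    """All codewords of the binary linear code with generator matrix gen."""
    k, n = len(gen), len(gen[0])
    words = []
    for msg in product([0, 1], repeat=k):
        cw = [0] * n
        for i, m in enumerate(msg):
            if m:
                cw = [c ^ g for c, g in zip(cw, gen[i])]
        words.append(tuple(cw))
    return words

def soft_decode(r, words):
    """Return the codeword whose BPSK image (0 -> +1, 1 -> -1) is
    closest to the real-valued received vector r in Euclidean distance."""
    def dist(cw):
        return sum((ri - (1 - 2 * c)) ** 2 for ri, c in zip(r, cw))
    return min(words, key=dist)
```

Real decoders avoid this exponential enumeration; the point is only to make the "closest codeword to r" criterion concrete.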
Entanglement-assisted quantum error correction codes (EAQECCs) play an important role in quantum communications with noise. Such a scheme can use an arbitrary classical linear code to transmit qubits over noisy quantum channels by consuming some ebits preshared between the sender (Alice) and the receiver (Bob). It is usually assumed that Bob's preshared ebits are error free; however, noise on these ebits is unavoidable in many cases. In this work, we evaluate the performance of EAQECCs with noisy ebits over asymmetric quantum channels and quantum channels with memory by computing the exact entanglement fidelity of several EAQECCs. We consider asymmetric errors in both qubits and ebits and show that the entanglement fidelity of EAQECCs improves for qubits and ebits over asymmetric channels. For quantum memory channels, we compute the entanglement fidelity of several EAQECCs over Markovian quantum memory channels and show that the performance of EAQECCs is degraded by the channel memory. Furthermore, we show that the performance of EAQECCs varies when the error probabilities of qubits and ebits are different. In both asymmetric and memory quantum channels, we show that the performance of EAQECCs improves greatly when the error probability of ebits is reasonably smaller than that of qubits.
In signal processing and communication systems, digital filters are widely employed. In some circumstances, the reliability of those systems is crucial, necessitating the use of fault-tolerant filter implementations. Many strategies have been presented over the years to achieve fault tolerance by utilising the structure and properties of the filters. As technology advances, more complicated systems with several filters become possible. Some of the filters in those complicated systems frequently function in parallel, for example, by applying the same filter to various input signals. Recently, a simple strategy for achieving fault tolerance that takes advantage of the availability of parallel filters was given. The primary idea is to use structured authentication scan chains to study the internal states of finite impulse response (FIR) components in order to detect and recover the exact state of faulty modules through the state of non-faulty modules. A simple double modular redundancy (DMR) based fault tolerance solution was also developed that takes advantage of the availability of parallel filters for image denoising. This approach is expanded in this brief to show how parallel filters can be protected using error correction codes (ECCs), in which each filter is comparable to a bit in a standard ECC. The suggested technique, "advanced error recovery for parallel systems," can find and eliminate hidden defects in FIR modules, and can also restore the system from multiple failures impacting two FIR modules. From the implementation, Xilinx ISE 14.7 was found to give significant error reduction capability in the fault calculations and a reduction in area, which lowers the cost of implementation. Faults were introduced in all the outputs of the functional filters, and the fault in every output was found to be corrected.
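The core idea above, treating each parallel filter as one "bit" of an ECC, can be illustrated with four data filters plus three redundant filters that process sums of the inputs, arranged like the parity groups of a Hamming(7,4) code: FIR linearity makes each redundant output equal the sum of its members' outputs, so the pattern of mismatched parity checks locates a faulty filter and lets its output be rebuilt. A minimal software sketch (the fault model, comparison threshold and filter sizes are illustrative assumptions, not the paper's hardware design):

```python
def fir(x, h):
    """Direct-form FIR filter: y[n] = sum_k h[k] * x[n-k]."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def add(*sigs):
    """Sample-wise sum of equal-length signals."""
    return [sum(v) for v in zip(*sigs)]

def protected_outputs(xs, h, faulty=None):
    """Four parallel FIR filters protected by three redundant filters
    (Hamming(7,4)-style parity groups); optionally inject a fault into
    one data filter and correct it via the syndrome."""
    ys = [fir(x, h) for x in xs]                    # data filter outputs
    if faulty is not None:
        ys[faulty] = [v + 1.0 for v in ys[faulty]]  # injected fault
    # redundant filters process sums of inputs; by linearity their
    # outputs must equal the sums of the member filters' outputs
    groups = [(0, 1, 3), (0, 2, 3), (1, 2, 3)]
    zs = [fir(add(*[xs[m] for m in g]), h) for g in groups]
    def unsatisfied(z, g):
        expect = add(*[ys[m] for m in g])
        return any(abs(a - b) > 1e-9 for a, b in zip(z, expect))
    s = tuple(int(unsatisfied(z, g)) for z, g in zip(zs, groups))
    locate = {(1, 1, 0): 0, (1, 0, 1): 1, (0, 1, 1): 2, (1, 1, 1): 3}
    bad = locate.get(s)
    if bad is not None:
        # rebuild the faulty output from one parity group containing it
        gi = next(i for i, g in enumerate(groups) if bad in g)
        rest = add(*[ys[m] for m in groups[gi] if m != bad])
        ys[bad] = [zv - rv for zv, rv in zip(zs[gi], rest)]
    return ys
```

With no fault all syndromes are zero and the outputs pass through unchanged; any single faulty filter is located by its unique syndrome pattern and repaired.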
In order to improve the transmission rate of the compression system, a real-time video lossy compression system based on multiple ADV212 devices is proposed and implemented. Considering the CMOS video format and the working principle of the ADV212, a custom-specific mode is first used for the various video formats. The data can be cached through the FPGA internal RAM and an SDRAM ping-pong operation, which greatly improves working efficiency. Secondly, this method can transmit the code stream directly or after storage. Through error correcting coding, the correction ability of the flash memory is greatly improved. Lastly, compression and decompression circuit boards are used to verify the performance of the method. The results show that the compression system has real-time and stable performance, and the compression ratio can be changed arbitrarily by configuring the program. The compression system can be realized with good real-time performance even with large amounts of data.
A dual double interlocked storage cell (DICE) interleaving layout static random-access memory (SRAM) is designed and manufactured based on 65 nm bulk complementary metal oxide semiconductor technology. The single event upset (SEU) cross sections of this memory are obtained via heavy ion irradiation with a linear energy transfer (LET) value ranging from 1.7 to 83.4 MeV/(mg/cm^2). Experimental results show that the upset threshold (LETth) of a 4 KB block is approximately 6 MeV/(mg/cm^2), which is much better than that of a standard unhardened SRAM with an identical technology node. A 1 KB block has a higher LETth of 25 MeV/(mg/cm^2) owing to the use of an error detection and correction (EDAC) code. For a Ta ion irradiation test with the highest LET value (83.4 MeV/(mg/cm^2)), the benefit of the EDAC code is reduced significantly because the multi-bit upset proportion in the SEU is increased remarkably. Compared with normally incident ions, the memory exhibits a higher SEU sensitivity in the tilt-angle irradiation test. Moreover, the SEU cross section shows a significant dependence on the data pattern. Comprehensively considering HSPICE simulation results and the sensitive area distributions of the DICE cell shows that the data pattern dependence is primarily associated with the arrangement of sensitive transistor pairs in the layout. Finally, some suggestions are provided to further improve the radiation resistance of the memory. By implementing a particular design at the layout level, the SEU tolerance of the memory is improved significantly at a low area cost. Therefore, the designed 65 nm SRAM is suitable for electronic systems operating in harsh radiation environments.
This paper reviews public-key cryptosystems based on error correcting codes such as Goppa codes, BCH codes, RS codes, rank distance codes, algebraic geometric codes and LDPC codes, and makes a comparative analysis of their merits and drawbacks. The cryptosystem based on Goppa codes has high security but poor practical performance. The cryptosystems based on the other error correcting codes have higher performance than the Goppa-code system, but still have some disadvantages to solve. Finally, the paper proposes a Niederreiter cascade combination cryptosystem based on double public keys under complex circumstances, which has higher performance and security than the traditional cryptosystems.
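In the Niederreiter construction mentioned above, the public key is a (disguised) parity-check matrix and the ciphertext is the syndrome of a low-weight error vector that carries the message; decryption is syndrome decoding. A toy sketch with an undisguised Hamming(7,4) parity-check matrix and weight-1 errors, which shows the mechanics only and offers no security whatsoever:

```python
# Parity-check matrix of the Hamming(7,4) code (column j is j in binary)
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def encrypt(e):
    """Ciphertext = syndrome H * e over GF(2); e is a weight-1 'message'."""
    return [sum(h & b for h, b in zip(row, e)) % 2 for row in H]

def decrypt(s):
    """Syndrome decoding: for this H, the syndrome read as a binary
    number is the (1-based) position of the single error."""
    pos = s[0] + 2 * s[1] + 4 * s[2]
    e = [0] * 7
    if pos:
        e[pos - 1] = 1
    return e
```

Real Niederreiter systems use a large Goppa code whose structure is hidden by scrambling and permutation matrices, so that syndrome decoding is easy only for the key holder.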
In deep sub-micron ICs, growing amounts of on-die memory and scaling effects make embedded memories more vulnerable to reliability problems, such as soft errors induced by radiation. Error Correction Code (ECC) along with scrubbing is an efficient method for protecting memories against these errors. However, the latency of the coding circuits brings speed penalties in high performance applications. This paper proposes a "bit bypassing" ECC-protected memory that buffers the encoded data and adds an identifying address for the input data. The proposed memory design has been fabricated on a 130 nm CMOS process. According to the measurements, the proposed scheme gives a minimum delay overhead of only 22.6% compared with other corresponding memories. Furthermore, heavy ion testing demonstrated that the single event effects performance of the proposed memory achieves error rate reductions of 42.9 to 63.3 times.
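The ECC typically used for such memory protection is a single-error-correcting Hamming code (production parts usually extend it to SECDED with one extra overall parity bit for double-error detection). A minimal Hamming(7,4) encode/correct sketch, unrelated to the specific 130 nm design in the paper:

```python
def hamming74_encode(d):
    """Encode 4 data bits as the 7-bit codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(cw):
    """Return (corrected data bits, error position 1-7, or 0 if clean)."""
    c = list(cw)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3       # syndrome = error position
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]], pos
```

Scrubbing then amounts to periodically reading each word through `hamming74_correct` and writing the corrected codeword back before a second upset accumulates.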
With support from the National Natural Science Foundation of China, Prof. Huang Yanyi (黄岩谊) led a team at Peking University to demonstrate a novel approach, which combined fluorogenic sequencing-by-synthesis (SBS) chemistry with an information theory-based error-correction coding scheme to…
The evaluation of the minimum distance of linear block codes remains an open problem in coding theory, and it is not easy to determine its true value by classical methods; for this reason the problem has been addressed in the literature with heuristic techniques such as genetic algorithms and local search algorithms. In this paper we propose two approaches to attack the hardness of this problem. The first approach is based on genetic algorithms and yields good results compared with another work also based on genetic algorithms. The second approach is based on a new randomized algorithm which we call the 'Multiple Impulse Method (MIM)', whose principle is to search for codewords locally around the all-zero codeword perturbed by a minimum level of noise, anticipating that the resulting nearest nonzero codewords will most likely contain the minimum Hamming-weight codeword, whose Hamming weight is equal to the minimum distance of the linear code.
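For small codes, the quantity the heuristics above approximate can be computed exactly by enumerating all nonzero codewords and taking the minimum Hamming weight; the enumeration costs 2^k codeword evaluations, which is exactly what makes large codes infeasible and motivates MIM. A brute-force reference sketch:

```python
from itertools import product

def min_distance(gen):
    """Exact minimum distance of a small binary linear code: the
    minimum Hamming weight over all nonzero codewords (feasible only
    for small dimension k)."""
    k, n = len(gen), len(gen[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue                      # skip the all-zero codeword
        cw = [0] * n
        for i, m in enumerate(msg):
            if m:
                cw = [a ^ b for a, b in zip(cw, gen[i])]
        best = min(best, sum(cw))
    return best
```

Such an exhaustive routine is also a useful ground truth when validating heuristic estimates on toy codes.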
Low-Density Parity-Check (LDPC) codes are powerful error correcting codes adopted by recent communication standards. LDPC decoders are based on belief propagation algorithms, which make use of a Tanner graph and very intensive message-passing computation, and usually require hardware-based dedicated solutions. With the exponential increase of the computational power of commodity graphics processing units (GPUs), new opportunities have arisen to develop general purpose processing on GPUs. This paper proposes the use of GPUs for implementing flexible and programmable LDPC decoders. A new stream-based approach is proposed, based on compact data structures to represent the Tanner graph. It is shown that such a challenging application for stream-based computing, because of irregular memory access patterns, memory bandwidth and recursive flow control constraints, can be efficiently implemented on GPUs. The proposal was experimentally evaluated by programming LDPC decoders on GPUs using the Caravela platform, a generic interface tool for managing kernel execution regardless of the GPU manufacturer and operating system. Moreover, to assess the obtained results, we have also implemented LDPC decoders on general purpose processors with Streaming Single Instruction Multiple Data (SIMD) Extensions. Experimental results show that the solution proposed here efficiently decodes several codewords simultaneously, reducing the processing time by one order of magnitude.
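As a contrast to the belief-propagation decoders the paper maps onto GPUs, the simplest LDPC decoding algorithm is hard-decision bit flipping: compute the syndrome, flip the bits participating in the most unsatisfied parity checks, and repeat. A serial sketch (the parity-check matrix in the test is a tiny Hamming-code stand-in, not a real sparse LDPC matrix):

```python
def bit_flip_decode(H, r, max_iters=20):
    """Gallager-style bit flipping: each iteration flips the bits
    involved in the largest number of unsatisfied parity checks."""
    m, n = len(H), len(H[0])
    x = list(r)
    for _ in range(max_iters):
        synd = [sum(H[i][j] & x[j] for j in range(n)) % 2 for i in range(m)]
        if not any(synd):
            return x                      # all checks satisfied
        # count unsatisfied checks touching each bit
        votes = [sum(synd[i] for i in range(m) if H[i][j]) for j in range(n)]
        worst = max(votes)
        if worst == 0:
            break
        x = [x[j] ^ (votes[j] == worst) for j in range(n)]
    return x
```

Bit flipping converges on few errors but is far weaker than the message-passing decoders above; its appeal is the trivially parallel per-bit update.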
In this paper, we further study the connections between linear network error correction codes and representable matroids. We extend the concept of a matroidal network introduced by Dougherty et al. to the generalized case in which errors occur in multiple channels. Importantly, we show necessary and sufficient conditions for the existence of a linear network error correction multicast/broadcast/dispersion maximum distance separable (MDS) code on a matroidal error correction network.
By extending the notion of the minimum distance for linear network error correction codes (LNEC), this paper introduces the concept of the generalized minimum rank distance (GMRD) of variable-rate linear network error correction codes. The basic properties of GMRD are investigated. It is proved that GMRD can characterize the error correction/detection capability of variable-rate linear network error correction codes when the source transmits messages at several different rates.
The three-party password authenticated key exchange (3PAKE) protocol plays a significant role in the area of secure communication, in which two clients agree on a robust session key in an authentic manner based on passwords. In recent years, researchers have focused on developing simple 3PAKE (S-3PAKE) protocols to gain system efficiency while preserving security robustness. In this study, we first demonstrate how an undetectable on-line dictionary attack can be successfully applied to three existing S-3PAKE schemes. An error correction code (ECC) based S-3PAKE protocol is then introduced to eliminate the identified authentication weakness.
With continuous technology scaling, on-chip structures are becoming more and more susceptible to soft errors. The architectural vulnerability factor (AVF) has been introduced to quantify the architectural vulnerability of on-chip structures to soft errors. Recent studies have found that designing soft error protection techniques with awareness of AVF is greatly helpful for achieving a tradeoff between performance and reliability for several structures (e.g., the issue queue and reorder buffer). The cache is one of the components most susceptible to soft errors and is commonly protected with error correcting codes (ECC). However, protecting caches closer to the processor (e.g., the L1 data cache, L1D) using ECC can result in high overhead, and protecting caches without accurate knowledge of their vulnerability characteristics may lead to over-protection. Therefore, designing AVF-aware ECC is attractive for balancing performance, power and reliability for caches, especially at an early design stage. In this paper, we improve the methodology of cache AVF computation and develop a new AVF estimation framework for soft error reliability analysis based on SimpleScalar. We then characterize the dynamic vulnerability behavior of the L1D cache and detect the correlations between L1D AVF and various performance metrics. We propose to employ Bayesian additive regression trees to accurately model the variation of L1D AVF and to quantitatively explain the important effects of several key performance metrics on L1D AVF. We then employ the bump hunting technique to reduce the complexity of L1D AVF prediction and extract simple selection rules based on several key performance metrics, enabling a simplified and fast estimation of L1D AVF.
Based on this simplified and fast estimation of L1D AVF, intervals of high L1D AVF can be identified online, enabling us to develop an AVF-aware ECC technique that reduces the overhead of ECC. Experimental results show that, compared with a traditional ECC technique that provides complete ECC protection throughout the entire lifetime of a program, the AVF-aware ECC technique reduces L1D access latency by 35% and saves 14% of power consumption on average for the SPEC2K benchmarks.
We consider the problem of characterizing network capacity in the presence of adversarial errors on network links, focusing in particular on the effect of small downstream links, where a downstream link is a directed feedback link across a cut of the network. In this paper, we present a family of zigzag networks for which the inner bound and the outer bound coincide. We also establish a tight condition for this family of zigzag networks, and develop an encoding scheme and a detection and decoding strategy.
Funding (SPD-based UWOC study): supported in part by the National Natural Science Foundation of China (Nos. 62071441 and 61701464), and in part by the Fundamental Research Funds for the Central Universities (No. 202151006).
Funding (versatile RS decoder study): sponsored by the Ministerial Level Advanced Research Foundation (20304).
Funding (automatic code-error correction study): supported in part by the Education Department of Sichuan Province (Grant No. [2022]114).
Funding (cyclic codes over quaternion integers study): the authors extend their gratitude to the Deanship of Scientific Research at King Khalid University for funding this work through the research groups program under grant number R.G.P.1/85/42.
Funding (EAQECC study): supported in part by the National Key R&D Program of China (Grant No. 2022YFB3103802), the National Natural Science Foundation of China (Grant Nos. 62371240 and 61802175), and the Fundamental Research Funds for the Central Universities (Grant No. 30923011014).
Abstract: Digital filters are widely employed in signal processing and communication systems. In some circumstances the reliability of these systems is crucial, necessitating fault-tolerant filter implementations. Over the years, many strategies have been proposed that achieve fault tolerance by exploiting the structure and properties of the filters. As technology advances, more complex systems with several filters become feasible, and some of the filters in such systems frequently operate in parallel, for example by applying the same filter to different input signals. Recently, a simple strategy was presented that achieves fault tolerance by taking advantage of the availability of parallel filters. The primary idea is to use structured authentication scan chains to examine the internal states of finite impulse response (FIR) components, so that the exact state of a faulty module can be detected and recovered from the states of the non-faulty modules. A simple double modular redundancy (DMR) based fault-tolerance solution exploiting parallel filters was also developed for image denoising. This brief expands that approach to show how parallel filters can be protected using error correction codes (ECCs), with each filter playing the role of a bit in a conventional ECC. The proposed technique, "advanced error recovery for parallel systems," can find and eliminate hidden defects in FIR modules and restore the system from multiple failures affecting two FIR modules. In the implementation, Xilinx ISE 14.7 showed significant error-reduction capability in the fault calculations and a reduction in area, which lowers the implementation cost. Faults were injected into all outputs of the functional filters, and the fault in every output was corrected.
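The "each filter is a bit in an ECC" idea can be sketched in software. The following toy (an illustrative sketch, not the paper's hardware design) protects four parallel copies of the same FIR filter with three redundant filters arranged like Hamming(7,4) parity groups: by linearity of convolution, each redundant output must equal the sum of its group's outputs, so the pattern of failing checks locates a single faulty filter and its output can be rebuilt from the redundant one.

```python
import numpy as np

h = np.array([0.25, 0.5, 0.25])             # shared FIR impulse response (illustrative)

def fir(x):
    return np.convolve(x, h)

# Hamming(7,4)-style parity groups over the four parallel data filters
GROUPS = [(0, 1, 3), (0, 2, 3), (1, 2, 3)]  # inputs feeding each redundant filter

def protect(xs):
    """Run 4 data filters plus 3 redundant filters on summed inputs.
    By linearity, each redundant output equals the sum of its group's outputs."""
    data = [fir(x) for x in xs]
    red = [fir(sum(xs[i] for i in g)) for g in GROUPS]
    return data, red

def locate_and_correct(data, red, tol=1e-9):
    # syndrome bit = 1 when a redundant output disagrees with its group sum
    syndrome = tuple(int(not np.allclose(sum(data[i] for i in g), r, atol=tol))
                     for g, r in zip(GROUPS, red))
    for i in range(4):
        member = tuple(int(i in g) for g in GROUPS)
        if syndrome == member:              # filter i lies in exactly the failing groups
            k = syndrome.index(1)           # any failing group can rebuild filter i
            data[i] = red[k] - sum(data[j] for j in GROUPS[k] if j != i)
    return data
```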
Funding: Supported by the National High Technology Research and Development Programme of China (No. 863-2-5-1-13B)
Abstract: To improve the transmission rate of the compression system, a real-time video lossy compression system based on multiple ADV212 devices is proposed and implemented. Considering the CMOS video format and the working principle of the ADV212, a custom-specific mode is first used to support various video formats. Data are cached through the FPGA internal RAM together with an SDRAM ping-pong operation, which greatly improves working efficiency. Second, the method supports either direct code-stream transmission or transmission after storage, and error-correcting coding greatly improves the correction capability of the flash memory. Finally, compression and decompression circuit boards were built to evaluate the performance of the method. The results show that the compression system performs stably in real time, that the compression ratio can be changed arbitrarily by reconfiguring the program, and that real-time performance remains good under large data volumes.
Funding: Supported by the National Natural Science Foundation of China (Nos. 12035019, 11690041, and 11805244).
Abstract: A dual double interlocked storage cell (DICE) interleaving-layout static random-access memory (SRAM) is designed and manufactured in a 65 nm bulk complementary metal oxide semiconductor technology. The single event upset (SEU) cross sections of this memory are obtained via heavy-ion irradiation with linear energy transfer (LET) values ranging from 1.7 to 83.4 MeV/(mg/cm^2). Experimental results show that the upset threshold (LETth) of a 4 KB block is approximately 6 MeV/(mg/cm^2), which is much better than that of a standard unhardened SRAM at the same technology node. A 1 KB block has a higher LETth of 25 MeV/(mg/cm^2) owing to the use of an error detection and correction (EDAC) code. For the Ta-ion irradiation test with the highest LET value (83.4 MeV/(mg/cm^2)), the benefit of the EDAC code is reduced significantly because the multi-bit upset proportion in the SEU increases remarkably. Compared with normally incident ions, the memory exhibits higher SEU sensitivity in tilt-angle irradiation tests. Moreover, the SEU cross section shows a significant dependence on the data pattern. Considering HSPICE simulation results together with the sensitive-area distributions of the DICE cell, the data-pattern dependence is shown to be primarily associated with the arrangement of sensitive transistor pairs in the layout. Finally, some suggestions are provided to further improve the radiation resistance of the memory. By implementing a particular design at the layout level, the SEU tolerance of the memory is improved significantly at a low area cost. The designed 65 nm SRAM is therefore suitable for electronic systems operating in harsh radiation environments.
Funding: Supported by the Postgraduate Project of Military Science of PLA (2013JY431) and the 55th Batch of China Postdoctoral Second-Class Fund Projects (2014M552656)
Abstract: This paper reviews public-key cryptosystems based on error-correcting codes, including Goppa codes, BCH codes, RS codes, rank-distance codes, algebraic-geometry codes and LDPC codes, and compares their merits and drawbacks. The cryptosystem based on Goppa codes offers high security but poor implementation performance; cryptosystems based on the other error-correcting codes perform better than the Goppa-code system but still have some disadvantages to resolve. Finally, the paper proposes a Niederreiter cascade-combination cryptosystem based on double public keys for complex environments, which offers higher performance and security than the traditional cryptosystems.
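To make the Niederreiter construction concrete, here is a deliberately tiny (and completely insecure) sketch using a Hamming(7,4) code correcting t = 1 error: the public key is the scrambled, permuted parity-check matrix, the plaintext is a weight-1 error vector, and the ciphertext is its public syndrome. The scrambler S, the permutation, and the code are illustrative assumptions, not the paper's proposed double-public-key scheme.

```python
import numpy as np

# H columns are the binary representations of 1..7, so a syndrome directly
# encodes the position of a single error (classic Hamming(7,4) trick)
H = np.array([[(j >> b) & 1 for j in range(1, 8)] for b in range(3)])
S = np.array([[1,1,0],[0,1,1],[0,0,1]])       # invertible scrambler over GF(2)
S_inv = np.array([[1,1,1],[0,1,1],[0,0,1]])   # S @ S_inv = I (mod 2)
perm = np.array([2,0,3,6,4,1,5])              # secret column permutation

H_pub = (S @ H[:, perm]) % 2                  # public key

def encrypt(e):
    """Message is a weight-1 error vector e; ciphertext is its public syndrome."""
    return (H_pub @ e) % 2

def decrypt(c):
    s = (S_inv @ c) % 2                       # undo the scrambler
    pos = int(s[0] + 2*s[1] + 4*s[2]) - 1     # index of the 1 in the permuted e
    e = np.zeros(7, dtype=int)
    e[np.where(perm == pos)[0][0]] = 1        # undo the permutation
    return e
```

Real code-based schemes use large Goppa codes with many correctable errors; this miniature only shows the algebra of encryption and trapdoor decoding.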
Funding: Supported by the National Science and Technology Major Project of China (No. 2013ZX03006004)
Abstract: In deep sub-micron ICs, growing amounts of on-die memory and scaling effects make embedded memories more vulnerable to reliability problems such as soft errors induced by radiation. Error correction code (ECC) combined with scrubbing is an efficient method for protecting memories against these errors; however, the latency of the coding circuits incurs speed penalties in high-performance applications. This paper proposes a "bit bypassing" ECC-protected memory that buffers the encoded data and adds an identifying address for the input data. The proposed memory design has been fabricated in a 130 nm CMOS process. Measurements show that the proposed scheme incurs a minimum delay overhead of only 22.6% compared with other corresponding memories. Furthermore, heavy-ion testing demonstrated that the single-event-effects performance of the proposed memory achieves error-rate reductions of 42.9 to 63.3 times.
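The paper's design is a hardware memory, but the single-error correction that an ECC scrubbing pass performs can be modeled in a few lines. This sketch uses Hamming(7,4) with the classic position-coded parity-check matrix (purely illustrative, not the fabricated circuit) and corrects one flipped bit per scrub:

```python
import numpy as np

# H columns are the binary representations of 1..7, so the syndrome value
# directly gives the erroneous bit position (classic Hamming(7,4) layout)
H = np.array([[(i >> b) & 1 for i in range(1, 8)] for b in range(3)])

def scrub(word):
    """One ECC scrubbing pass: correct a single flipped bit in place."""
    s = (H @ word) % 2
    pos = int(s[0] + 2*s[1] + 4*s[2])   # 0 means no error detected
    if pos:
        word[pos - 1] ^= 1
    return word
```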
Abstract: With support from the National Natural Science Foundation of China, Prof. Huang Yanyi (黄岩谊) led a team at Peking University to demonstrate a novel approach, which combined fluorogenic sequencing-by-synthesis (SBS) chemistry with an information theory-based error-correction coding scheme to
Abstract: Evaluating the minimum distance of linear block codes remains an open problem in coding theory, and its true value is not easy to determine by classical methods; for this reason the problem has been tackled in the literature with heuristic techniques such as genetic algorithms and local search algorithms. In this paper we propose two approaches to attack the hardness of this problem. The first is based on genetic algorithms and yields good results compared with earlier work also based on genetic algorithms. The second is a new randomized algorithm which we call the 'Multiple Impulse Method (MIM)': the principle is to search for codewords locally around the all-zero codeword perturbed by a minimum level of noise, anticipating that the resulting nearest nonzero codewords will most likely contain a minimum-Hamming-weight codeword, whose weight equals the minimum distance of the linear code.
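The flavor of such randomized searches can be shown on a code small enough to check exhaustively. The sketch below uses a Hamming(7,4) generator matrix (illustrative; this is not the paper's MIM implementation) and contrasts brute force with a random-sampling upper bound on the minimum distance:

```python
import numpy as np
from itertools import product

G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])   # Hamming(7,4) generator (illustrative)

def min_distance_exhaustive(G):
    """Exact minimum distance: lightest nonzero codeword over all messages."""
    k = G.shape[0]
    return min(int(((np.array(m) @ G) % 2).sum())
               for m in product([0,1], repeat=k) if any(m))

def min_distance_random(G, trials=500, seed=0):
    """Randomized upper bound in the spirit of local search: sample random
    nonzero messages and keep the lightest codeword found."""
    rng = np.random.default_rng(seed)
    best = G.shape[1]
    for _ in range(trials):
        m = rng.integers(0, 2, G.shape[0])
        if m.any():
            best = min(best, int(((m @ G) % 2).sum()))
    return best
```

For large codes the exhaustive search is infeasible, which is exactly why heuristic and randomized estimates like the above (and the paper's MIM) are used.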
Funding: Supported by the Portuguese Foundation for Science and Technology, through the FEDER program, and also under Grant No. SFRH/BD/37495/2007
Abstract: Low-density parity-check (LDPC) codes are powerful error-correcting codes adopted by recent communication standards. LDPC decoders are based on belief-propagation algorithms, which rely on a Tanner graph and very intensive message-passing computation, and usually require dedicated hardware solutions. With the exponential increase in the computational power of commodity graphics processing units (GPUs), new opportunities have arisen for general-purpose processing on GPUs. This paper proposes the use of GPUs for implementing flexible and programmable LDPC decoders. A new stream-based approach is proposed, based on compact data structures that represent the Tanner graph. It is shown that this challenging application for stream-based computing, with its irregular memory access patterns, memory bandwidth demands and recursive flow-control constraints, can be implemented efficiently on GPUs. The proposal was experimentally evaluated by programming LDPC decoders on GPUs using the Caravela platform, a generic interface tool for managing kernel execution regardless of GPU manufacturer and operating system. Moreover, to put the results in perspective, LDPC decoders were also implemented on general-purpose processors with streaming Single Instruction Multiple Data (SIMD) extensions. Experimental results show that the proposed solution efficiently decodes several codewords simultaneously, reducing the processing time by an order of magnitude.
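While the paper implements soft-decision belief propagation on GPUs, the message-passing structure over the Tanner graph can be illustrated with the much simpler hard-decision bit-flipping variant. The toy 4x6 parity-check matrix below is an illustrative assumption, not the paper's decoder; real LDPC matrices are far larger and sparser.

```python
import numpy as np

# small parity-check matrix (rows = checks, columns = bits)
H = np.array([[1,1,0,1,0,0],
              [0,1,1,0,1,0],
              [1,0,0,0,1,1],
              [0,0,1,1,0,1]])

def bit_flip_decode(y, max_iters=20):
    """Gallager-style hard-decision bit flipping: repeatedly flip the bit
    involved in the most unsatisfied parity checks."""
    y = y.copy()
    for _ in range(max_iters):
        s = (H @ y) % 2               # 1 = unsatisfied check
        if not s.any():
            return y, True            # all checks satisfied: valid codeword
        counts = s @ H                # unsatisfied-check count per bit
        y[np.argmax(counts)] ^= 1
    return y, False
```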
Funding: Supported by the National Natural Science Foundation of China (61271174, 61272492)
Abstract: In this paper, we further study the connections between linear network error correction codes and representable matroids. We extend the concept of a matroidal network, introduced by Dougherty et al., to the generalized case in which errors occur on multiple channels. Importantly, we give necessary and sufficient conditions for the existence of a linear network error correction multicast/broadcast/dispersion maximum distance separable (MDS) code on a matroidal error correction network.
Abstract: By extending the notion of minimum distance for linear network error correction (LNEC) codes, this paper introduces the concept of the generalized minimum rank distance (GMRD) of variable-rate linear network error correction codes. The basic properties of the GMRD are investigated, and it is proved that the GMRD characterizes the error correction/detection capability of variable-rate linear network error correction codes when the source transmits messages at several different rates.
Funding: Supported by the National Science Council (Nos. NSC 99-2218-E-011-014 and NSC 100-2219-E-011-002)
Abstract: The three-party password authenticated key exchange (3PAKE) protocol plays a significant role in secure communication, allowing two clients to agree on a robust session key in an authenticated manner based on passwords. In recent years, researchers have focused on developing simple 3PAKE (S-3PAKE) protocols that gain efficiency while preserving security robustness. In this study, we first demonstrate how an undetectable on-line dictionary attack can be successfully mounted against three existing S-3PAKE schemes. An error correction code (ECC) based S-3PAKE protocol is then introduced to eliminate the identified authentication weakness.
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 60970036 and 60873016, and the National High Technology Development 863 Program of China under Grant Nos. 2009AA01Z102 and 2009AA01Z124
Abstract: With continuous technology scaling, on-chip structures are becoming more and more susceptible to soft errors. The architectural vulnerability factor (AVF) has been introduced to quantify the architectural vulnerability of on-chip structures to soft errors. Recent studies have found that designing soft-error protection techniques with awareness of the AVF greatly helps to achieve a trade-off between performance and reliability for several structures (e.g., the issue queue and reorder buffer). The cache is one of the components most susceptible to soft errors and is commonly protected with error correcting codes (ECC). However, protecting caches closer to the processor, such as the L1 data cache (L1D), with ECC can incur high overhead, and protecting caches without accurate knowledge of their vulnerability characteristics may lead to over-protection. Designing AVF-aware ECC is therefore attractive for balancing performance, power and reliability for the cache, especially at an early design stage. In this paper, we improve the methodology of cache AVF computation and develop a new AVF estimation framework for soft-error reliability analysis based on SimpleScalar. We then characterize the dynamic vulnerability behavior of the L1D and detect the correlations between L1D AVF and various performance metrics. We propose to employ Bayesian additive regression trees to accurately model the variation of L1D AVF and to quantitatively explain the important effects of several key performance metrics on it. We then employ the bump-hunting technique to reduce the complexity of L1D AVF prediction and extract simple selection rules based on several key performance metrics, enabling a simplified and fast estimation of L1D AVF. Based on this estimation, intervals of high L1D AVF can be identified online, enabling an AVF-aware ECC technique that reduces the overhead of ECC. Experimental results show that, compared with a traditional ECC technique that provides complete ECC protection throughout the entire lifetime of a program, the AVF-aware ECC technique reduces L1D access latency by 35% and saves 14% of power consumption on average for the SPEC2K benchmarks.
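The core AVF quantity can be illustrated in miniature. In the common single-cell view (a simplified sketch, not the paper's SimpleScalar framework), a stored bit matters for architecturally correct execution only between a write and the last subsequent read of that value, and the AVF is the fraction of total cycles spent in such live intervals:

```python
def avf(intervals, total_cycles):
    """AVF of one storage cell: fraction of cycles during which the value is
    live (between a write and its last subsequent read)."""
    live = sum(last_read - write for write, last_read in intervals)
    return live / total_cycles

# hypothetical access pattern: (write_cycle, last_read_cycle) per value lifetime
cell = [(0, 40), (60, 70), (90, 100)]
print(avf(cell, 100))   # -> 0.6
```

An AVF-aware scheme in this spirit would enable ECC protection only during intervals predicted to have high AVF, which is how the latency and power savings above arise.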
Funding: Supported by the National Natural Science Foundation of China (61271174, 61301178)
Abstract: We consider the problem of characterizing network capacity in the presence of adversarial errors on network links, focusing in particular on the effect of small downstream links, where a downstream link is a directed feedback link across a cut of the network. In this paper, we present a family of zigzag networks for which the inner bound and the outer bound coincide. We also establish a tight condition for this family of zigzag networks, and develop an encoding scheme together with a detection and decoding strategy.