This study explores the application of single photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing. Furthermore, it compares the error correction performance of low-density parity check (LDPC) and Reed-Solomon (RS) codes. The effects of the unscattered photon ratio and the depolarization ratio on BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization-multiplexed OOK modulation and 4.37 Mbps with polarization-multiplexed 2-PPM modulation using LDPC error correction.
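As a rough illustration of how the modulation format affects BER in a photon-counting receiver, the sketch below simulates OOK (thresholding the photon count in one slot) and 2-PPM (comparing counts in two slots) under a simple Poisson detection model; the mean photon numbers, slot structure, and decision threshold are illustrative assumptions, not parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 200_000
mu_signal, mu_dark = 4.0, 0.2   # assumed mean photon counts per slot (signal / background)

bits = rng.integers(0, 2, n_bits)

# OOK: one slot per bit, decide by thresholding the photon count.
counts_ook = rng.poisson(np.where(bits == 1, mu_signal + mu_dark, mu_dark))
threshold = 2                                  # assumed decision threshold
ber_ook = np.mean((counts_ook >= threshold) != bits)

# 2-PPM: two slots per bit, the pulse occupies slot 0 or slot 1; decide by comparing counts.
slot0 = rng.poisson(np.where(bits == 0, mu_signal + mu_dark, mu_dark))
slot1 = rng.poisson(np.where(bits == 1, mu_signal + mu_dark, mu_dark))
# Ties are broken at random, since neither slot is preferred.
decisions = np.where(slot1 > slot0, 1,
                     np.where(slot0 > slot1, 0, rng.integers(0, 2, n_bits)))
ber_ppm = np.mean(decisions != bits)

print(f"OOK BER   ~ {ber_ook:.4f}")
print(f"2-PPM BER ~ {ber_ppm:.4f}")
```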
This paper reviews public-key cryptosystems based on error-correcting codes such as Goppa codes, BCH codes, RS codes, rank-distance codes, algebraic-geometric codes, and LDPC codes, and makes a comparative analysis of their merits and drawbacks. The cryptosystem based on Goppa codes has high security but poor practical performance. The cryptosystems based on the other error-correcting codes offer better performance than the Goppa-code construction, but still have disadvantages to be resolved. Finally, the paper proposes a Niederreiter cascade-combination cryptosystem based on double public keys for complex environments, which offers higher performance and security than the traditional cryptosystems.
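For orientation, the sketch below shows the core idea behind Niederreiter-style encryption with a toy [7,4] Hamming code: the plaintext is embedded in a low-weight error vector and the ciphertext is its syndrome, so decryption is syndrome decoding. The scrambling and permutation matrices that hide the code structure in a real system are deliberately omitted; this is a conceptual illustration, not a secure construction.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (column j is the number j+1 in binary).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=int)

def encrypt(error_vector):
    """Ciphertext = syndrome of a low-weight error vector (weight <= 1 for this toy code)."""
    return H @ error_vector % 2

def decrypt(syndrome):
    """Syndrome decoding: for the Hamming code the syndrome equals the column at the error position."""
    e = np.zeros(7, dtype=int)
    if syndrome.any():
        for pos in range(7):
            if np.array_equal(H[:, pos], syndrome):
                e[pos] = 1
                break
    return e

# The "message" is the position of the single flipped bit.
plaintext = np.zeros(7, dtype=int)
plaintext[4] = 1
ciphertext = encrypt(plaintext)
recovered = decrypt(ciphertext)
print(ciphertext, recovered, np.array_equal(plaintext, recovered))
```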
When regularly arranged tubules are welded onto a bobbin by a robot, the position and orientation of some tubules may be changed by factors such as thermal deformation and positioning errors, which makes it very difficult to weld automatically and continuously by the teach-and-playback method. In this paper, an error measuring system is presented, by which the position and orientation errors of the tubules relative to the taught one can be measured. A method to correct the locus errors is also proposed, by which the motion locus planned from the teaching points can be corrected in real time according to the measured error parameters. In this way, by teaching only one tubule, all tubules on a bobbin can be welded automatically.
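A minimal sketch of this kind of locus correction: each taught waypoint is mapped through the measured position-and-orientation error of the current tubule relative to the taught one, expressed here as a 4x4 homogeneous transform. The error representation and numerical values are assumptions for illustration, not the paper's measurement system.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def error_transform(d_position, d_rotation):
    """Build a 4x4 homogeneous transform from a measured orientation and position error."""
    T = np.eye(4)
    T[:3, :3] = d_rotation
    T[:3, 3] = d_position
    return T

def correct_locus(taught_points, T_err):
    """Map the taught welding locus (N x 3 points) through the measured error transform."""
    pts = np.hstack([taught_points, np.ones((len(taught_points), 1))])
    return (T_err @ pts.T).T[:, :3]

# Example: taught circular weld path; tubule shifted ~0.8 mm and rotated 0.5 degrees (assumed).
angles = np.linspace(0.0, 2 * np.pi, 36)
taught = np.column_stack([10.0 * np.cos(angles), 10.0 * np.sin(angles), np.zeros_like(angles)])
T_err = error_transform(np.array([0.8, -0.3, 0.1]), rot_z(np.deg2rad(0.5)))
corrected = correct_locus(taught, T_err)
print(corrected[:3])
```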
In this article, we study the ability of error-correcting quantum codes to increase the fidelity of quantum states throughout a quantum computation. We analyze arbitrary quantum codes that encode all qubits involved in the computation, and we study the evolution of n-qubit fidelity from the end of one application of the correcting circuit to the end of the next application. We assume that the correcting circuit does not introduce new errors, that it does not increase the execution time (i.e. its application takes zero seconds) and that quantum errors are isotropic. We show that the quantum code increases the fidelity of the states perturbed by quantum errors but that this improvement is not enough to justify the use of quantum codes. Namely, we prove that, taking into account that the time interval between the application of the two corrections is multiplied (at least) by the number of qubits n (due to the coding), the best option is not to use quantum codes, since the fidelity of the uncoded state over a time interval n times smaller is greater than that of the state resulting from the quantum code correction.
A new Chien search method for shortened Reed-Solomon (RS) codes is proposed, and based on it a versatile RS decoder for correcting both errors and erasures is designed. Compared with a traditional RS decoder, the weighted coefficients of the Chien search are calculated sequentially through the three pipelined stages of the decoder, so the computation of the errata locator polynomial and the errata evaluator polynomial has to be modified accordingly. The versatile RS decoder with minimum distance 21 has been synthesized for the Xilinx Virtex-II series field programmable gate array (FPGA) xc2v1000-5 and is used in a concatenated coding system for satellite communication. Results show that the maximum data processing rate can be up to 1.3 Gbit/s.
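The following sketch shows a plain (unshortened, unpipelined) Chien search over GF(2^8): the error locator polynomial is evaluated at successive inverse powers of the primitive element, and roots mark the error positions. The primitive polynomial 0x11D and the example locator polynomial are conventional choices assumed for illustration; the paper's weighted-coefficient, pipelined variant for shortened codes is not reproduced here.

```python
# GF(2^8) arithmetic via log/antilog tables (primitive polynomial x^8+x^4+x^3+x^2+1 = 0x11D).
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def poly_eval(poly, x):
    """Evaluate poly (poly[k] multiplies x^k) at x, in GF(2^8)."""
    result, power = 0, 1
    for coeff in poly:
        result ^= gf_mul(coeff, power)
        power = gf_mul(power, x)
    return result

def chien_search(locator, n=255):
    """Return positions i for which the error locator polynomial vanishes at alpha^(-i)."""
    return [i for i in range(n) if poly_eval(locator, EXP[(255 - i) % 255]) == 0]

# Build a locator polynomial with known error positions and recover them.
error_positions = [3, 10, 100]
locator = [1]
for pos in error_positions:
    term = [1, EXP[pos]]                      # multiply by (1 + alpha^pos * x)
    new = [0] * (len(locator) + 1)
    for a, ca in enumerate(locator):
        for b, cb in enumerate(term):
            new[a + b] ^= gf_mul(ca, cb)
    locator = new
print(chien_search(locator))   # -> [3, 10, 100]
```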
Quantum metrology provides a fundamental limit on the precision of multi-parameter estimation, called the Heisenberg limit, which has been achieved in noiseless quantum systems. However, for systems subject to noise, it is hard to achieve this limit since noise tends to destroy quantum coherence and entanglement. In this paper, a combined control scheme with feedback and quantum error correction (QEC) is proposed to achieve the Heisenberg limit in the presence of spontaneous emission, where the feedback control is used to protect a stabilizer code space containing an optimal probe state and an additional control is applied to eliminate the measurement incompatibility among three parameters. Although an ancilla system is necessary for the preparation of the optimal probe state, our scheme does not require the ancilla system to be noiseless. In addition, the control scheme in this paper has a low-dimensional code space. For the three components of a magnetic field, it can achieve the highest estimation precision with only a 2-dimensional code space, while at least a 4-dimensional code space is required in the common optimal error correction protocols.
Quantum error correction technology is an important method for eliminating errors during the operation of quantum computers. To mitigate the influence of errors on physical qubits, we propose an approximate error correction scheme that performs dimension-mapping operations on surface codes. This scheme utilizes the topological properties of error correction codes to map the surface code to three dimensions. Compared to previous error correction schemes, the present three-dimensional surface code exhibits good scalability due to its higher redundancy and more efficient error correction capability. By reducing the number of ancilla qubits required for error correction, this approach saves measurement space and reduces resource consumption costs. To improve decoding efficiency and handle the correlation between the surface code stabilizers and the 3D space after dimension mapping, we employ a reinforcement learning (RL) decoder based on deep Q-learning, which enables faster identification of the optimal syndrome and achieves better thresholds through conditional optimization. Compared to minimum weight perfect matching decoding, the threshold of the trained RL model reaches 0.78%, which is 56% higher and enables large-scale fault-tolerant quantum computation.
Standard automatic dependent surveillance broadcast (ADS-B) reception algorithms offer considerable performance at high signal-to-noise ratios (SNRs). However, the performance of ADS-B algorithms in applications can be problematic at low SNRs and in high-interference situations, as detecting and decoding techniques may not perform correctly in such circumstances. In addition, conventional error correction algorithms have limitations in their ability to correct errors in ADS-B messages, as the bit and confidence values may be declared inaccurately in the event of low SNRs and high interference. The principal goal of this paper is to deploy a Long Short-Term Memory (LSTM) recurrent neural network model for error correction in conjunction with a conventional algorithm. The data of various flights are collected and cleaned in an initial stage. The clean data are divided randomly into training and test sets. Next, the LSTM model is trained on the training dataset and then evaluated on the test dataset. The proposed model not only improves the ADS-B In packet error correction rate (PECR), but also enhances ADS-B In sensitivity. The performance evaluation results reveal that the proposed scheme is achievable and efficient for the avionics industry. It is worth noting that the proposed algorithm does not depend on the prerequisites of conventional algorithms.
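As a sketch of the kind of sequence model the paper describes, the snippet below defines a small LSTM that maps a sequence of per-bit features (for example, soft confidence values) to corrected bit probabilities. The feature dimension, layer sizes, and training objective are assumptions for illustration, not the authors' configuration; only the 112-bit ADS-B message length is standard.

```python
import torch
import torch.nn as nn

class BitCorrectionLSTM(nn.Module):
    """Sequence-to-sequence model: per-bit input features -> probability that each bit is 1."""
    def __init__(self, n_features=4, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                       # x: (batch, message_len, n_features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out)).squeeze(-1)   # (batch, message_len)

# Tiny training-step example on random placeholder data (112-bit ADS-B extended squitter).
model = BitCorrectionLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCELoss()

features = torch.randn(32, 112, 4)              # placeholder received-signal features
true_bits = torch.randint(0, 2, (32, 112)).float()
loss = criterion(model(features), true_bits)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```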
Error correction has long been suggested to extend the sensitivity of quantum sensors to the Heisenberg Limit. However, operations on logical qubits are only performed through universal gate sets consisting of finite-sized gates such as Clifford + T. Although these logical gate sets allow for universal quantum computation, the finite gate sizes present a problem for quantum sensing, since in sensing protocols, such as the Ramsey measurement protocol, the signal must act continuously. The difficulty in constructing a continuous logical operator comes from the Eastin-Knill theorem, which prevents a continuous signal from being both fault-tolerant to local errors and transversal. Since error correction is needed to approach the Heisenberg Limit in a noisy environment, it is important to explore how to construct fault-tolerant continuous operators. In this paper, a protocol to design continuous logical z-rotations is proposed and applied to the Steane code. The fault tolerance of the designed operator is investigated using the Knill-Laflamme conditions. The Knill-Laflamme conditions indicate that the constructed diagonal unitary operator cannot be fault-tolerant solely due to the possibility of X errors on the middle qubit. The approach demonstrated throughout this paper may, however, find success in codes with more qubits, such as the Shor code, the distance-3 surface code, the [15, 1, 3] code, or codes with a larger distance such as the [11, 1, 5] code.
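For reference, the Knill-Laflamme conditions invoked here state that a code with projector P onto the code space corrects an error set {E_a} if and only if

$$P\, E_a^{\dagger} E_b\, P = c_{ab}\, P \quad \text{for all } a, b,$$

where the coefficients c_{ab} form a Hermitian matrix independent of the encoded state. Checking whether a constructed logical operator preserves these conditions for the relevant error set is the fault-tolerance test applied in the paper.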
This study presents a proposed method for assessing the condition and predicting the future status of condensers operating in seawater over an extended period. The aim is to address the problems of scaling and corrosion, which lead to increased loss of cooling capacity. The method uses a set of multivariate feature parameters associated with the condenser as input for evaluation and trend prediction, and thereby offers a precise means of determining the optimal timing for condenser cleaning, with the ultimate goal of improving overall performance. The proposed approach integrates the analytic network process, which incorporates subjective expert experience, with the entropy weight method, which draws on objective big-data analysis, to develop a fusion health degree model. The mathematical model is constructed quantitatively using an improved Mahalanobis distance. Furthermore, a comprehensive prediction model is developed by integrating an improved Informer model with Markov error correction. This model takes into account the health status of the equipment and several influencing factors, including multivariate feature characteristics, and facilitates the objective examination and prediction of the progression of equipment deterioration trends. Field time series data are computed and verified to demonstrate the accuracy of the condenser health-related models proposed in this research. These models effectively depict the real condition and temporal variations of the equipment, thus offering a valuable method for determining the precise cleaning time required for the condenser.
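A compact sketch of the entropy weight step used in this kind of fusion health model: each monitored feature is normalized, its information entropy over the samples is computed, and features with lower entropy (more divergence, hence more information) receive larger weights. The feature matrix here is random placeholder data, and the combination with expert ANP-style weights is only indicated schematically.

```python
import numpy as np

def entropy_weights(X, eps=1e-12):
    """Objective weights for the columns (features) of X via the entropy weight method."""
    X = np.asarray(X, dtype=float)
    # Min-max normalize each feature to [0, 1] (larger = "better" assumed for all features).
    norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + eps)
    p = norm / (norm.sum(axis=0) + eps)                 # sample proportions per feature
    n = X.shape[0]
    entropy = -(p * np.log(p + eps)).sum(axis=0) / np.log(n)
    divergence = 1.0 - entropy
    return divergence / divergence.sum()

rng = np.random.default_rng(1)
features = rng.normal(size=(200, 5))                    # placeholder condenser feature history
w_entropy = entropy_weights(features)
w_expert = np.array([0.3, 0.25, 0.2, 0.15, 0.1])        # assumed ANP-style subjective weights
w_fused = 0.5 * w_entropy + 0.5 * w_expert              # one simple way to fuse the two sets
print(w_fused, w_fused.sum())
```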
Quantum error correction, a technique that relies on the principle of redundancy to encode logical information into additional qubits to better protect the system from noise, is necessary to design a viable quantum computer. The XYZ^(2) code, a new topological stabilizer code defined on a cellular lattice, is implemented on a hexagonal lattice of qubits and encodes the logical qubits with the help of stabilizer measurements of weight six and weight two. However, topological stabilizer codes in cellular-lattice quantum systems suffer from the detrimental effects of noise due to interaction with the environment, and several decoding approaches have been proposed to address this problem. Here, we propose a state-attention based reinforcement learning decoder for XYZ^(2) codes, which enables the decoder to focus more accurately on the information related to the current decoding position. The error correction accuracy of our reinforcement learning decoder under the optimization conditions reaches 83.27% under the depolarizing noise model, and we measure thresholds of 0.18856 and 0.19043 for XYZ^(2) codes at code distances of 3-7 and 7-11, respectively. Our study provides directions and ideas for applying decoding schemes that combine reinforcement learning attention mechanisms to other topological quantum error-correcting codes.
Quantum error correction is a crucial technology for realizing quantum computers. These computers achieve fault-tolerant quantum computing by detecting and correcting errors using decoding algorithms. Quantum error correction using neural network-based machine learning methods is a promising approach that adapts to physical systems without the need to build noise models. In this paper, we use a distributed decoding strategy, which effectively alleviates the exponential growth of the training set required by neural networks as the code distance of quantum error-correcting codes increases. Our decoding algorithm is based on renormalization group decoding and a recurrent neural network decoder. The recurrent neural network is trained through the ResNet architecture to improve its decoding accuracy. We then test the decoding performance of our distributed strategy decoder, the recurrent neural network decoder, and the classic minimum weight perfect matching (MWPM) decoder for rotated surface codes with different code distances under the circuit noise model; the thresholds of these three decoders are about 0.0052, 0.0051, and 0.0049, respectively. Our results demonstrate that the distributed strategy decoder outperforms the other two decoders, achieving approximately a 5% improvement in decoding efficiency compared to the MWPM decoder and approximately a 2% improvement compared to the recurrent neural network decoder.
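To make the MWPM baseline concrete, the sketch below decodes a toy repetition code on a ring: syndrome defects are paired by minimum-weight perfect matching (implemented here with networkx's maximum-weight matching on negated distances), and the correction flips the qubits along the shorter arc between each matched pair. This is only the standard textbook baseline, not the distributed or neural decoders of the paper.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n = 21                       # qubits of a repetition code arranged on a ring
p = 0.08                     # assumed physical bit-flip probability

errors = rng.random(n) < p
# Parity check j compares qubits j and j+1 (mod n); a defect is a violated check.
syndrome = errors ^ np.roll(errors, -1)
defects = np.flatnonzero(syndrome)

# Minimum-weight perfect matching of defects: maximize the negated ring distance.
graph = nx.Graph()
for i, a in enumerate(defects):
    for b in defects[i + 1:]:
        d = min((b - a) % n, (a - b) % n)
        graph.add_edge(int(a), int(b), weight=-d)
matching = nx.max_weight_matching(graph, maxcardinality=True)

# Apply the correction along the shorter arc between each matched defect pair.
correction = np.zeros(n, dtype=bool)
for a, b in matching:
    a, b = sorted((a, b))
    inner = np.arange(a + 1, b + 1)                                   # qubits a+1 .. b
    outer = np.concatenate([np.arange(b + 1, n), np.arange(0, a + 1)])
    path = inner if len(inner) <= len(outer) else outer
    correction[path] ^= True

residual = errors ^ correction        # syndrome-free, so either empty or the full ring
print("defects:", defects, "logical error after decoding:", bool(residual.any()))
```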
With the development of ultra-wide coverage technology, the multibeam echo-sounder (MBES) system places higher requirements on the localization accuracy and computational efficiency of ray tracing methods. The classical equivalent sound speed profile (ESSP) method replaces the measured sound velocity profile (SVP) with a simple constant-gradient SVP, reducing the computational workload of beam positioning. However, in the deep-sea environment, the depth measurement error of this method increases rapidly from the central beam to the edge beams. By analyzing the positioning error of the ESSP method at the edge beams, it is discovered that the positioning error increases monotonically with the incident angle, and the relationship between them can be expressed by a polynomial function. Therefore, an error correction algorithm based on polynomial fitting is obtained. A simulation experiment conducted on an inclined seafloor shows that the proposed algorithm is comparable in efficiency to the original ESSP method while improving bathymetry accuracy by nearly eight times at the edge beams.
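A minimal sketch of the fitting step described above: given (incident angle, depth error) samples from a reference ray trace, a low-order polynomial is fitted and then used to subtract the predicted error from ESSP depth estimates. The synthetic error curve and the polynomial order are assumptions for illustration.

```python
import numpy as np

# Synthetic calibration data: depth error (m) growing monotonically with incident angle (deg).
angles = np.linspace(0.0, 70.0, 36)
rng = np.random.default_rng(3)
depth_error = 1.5e-4 * angles**2 + 2.0e-6 * angles**3 + rng.normal(0.0, 0.02, angles.size)

# Fit a cubic polynomial to the error-versus-angle relationship.
coeffs = np.polyfit(angles, depth_error, deg=3)
error_model = np.poly1d(coeffs)

def correct_depth(essp_depth, incident_angle_deg):
    """Subtract the fitted angle-dependent error from an ESSP depth estimate."""
    return essp_depth - error_model(incident_angle_deg)

print(correct_depth(3000.0, 65.0))   # corrected depth for an edge beam (values are illustrative)
```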
This study investigated the impact of China’s monetary policy on both the money market and stock markets, assuming that non-policy variables would not respond contemporaneously to changes in policy variables. Monetary policy adjustments are swiftly observed in money markets and gradually extend to the stock market. The study examined the effects of monetary policy shocks using three primary instruments: interest rate policy, reserve requirement ratio, and open market operations. Monthly data from 2007 to 2013 were analyzed using vector error correction (VEC) models. The findings suggest a likely presence of long-lasting and stable relationships among monetary policy, the money market, and stock markets. This research holds practical implications for Chinese policymakers, particularly in managing the challenges associated with fluctuation risks linked to high foreign exchange reserves, aiming to achieve autonomy in monetary policy and formulate effective monetary strategies to stimulate economic growth.
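A sketch of the vector error correction estimation referred to above, using statsmodels; the series names, lag order, deterministic terms, and cointegration rank are placeholders, not the study's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Placeholder monthly series sharing one common stochastic trend.
rng = np.random.default_rng(4)
n = 84                                            # e.g., 2007-2013 monthly observations
common_trend = np.cumsum(rng.normal(0, 1, n))
data = pd.DataFrame({
    "policy_rate": common_trend + rng.normal(0, 0.5, n),
    "interbank_rate": common_trend + rng.normal(0, 0.5, n),
    "log_stock_index": 0.8 * common_trend + rng.normal(0, 0.7, n),
    "log_m2": 0.5 * common_trend + rng.normal(0, 0.4, n),
})

# Choose the cointegration rank with Johansen's trace test, then fit the VECM.
rank = select_coint_rank(data, det_order=0, k_ar_diff=2, method="trace", signif=0.05)
model = VECM(data, k_ar_diff=2, coint_rank=max(rank.rank, 1), deterministic="ci")
results = model.fit()
print(results.summary())
```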
In this study, a method of analogue-based correction of errors (ACE) was introduced to improve El Niño-Southern Oscillation (ENSO) prediction produced by climate models. The ACE method is based on the hypothesis that the flow-dependent model prediction errors are to some degree similar under analogous historical climate states, and so the historical errors can be used to effectively reduce such flow-dependent errors. With this method, the unknown errors in current ENSO predictions can be empirically estimated by using the known prediction errors which are diagnosed by the same model based on historical analogue states. The authors first propose the basic idea for applying the ACE method to ENSO prediction and then establish an analogue-dynamical ENSO prediction system based on an operational climate prediction model. The authors present some experimental results which clearly show the possibility of correcting the flow-dependent errors in ENSO prediction, and thus the potential of applying the ACE method to operational ENSO prediction based on climate models.
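The core of analogue-based error correction can be sketched as follows: find the historical states most similar to the current initial state, take the model's known prediction errors for those analogues, and subtract a similarity-weighted combination of them from the current prediction. The similarity metric, number of analogues, and toy data are assumptions for illustration, not the operational system's choices.

```python
import numpy as np

def ace_correct(current_state, current_prediction, hist_states, hist_errors, k=4):
    """Analogue-based correction of errors: estimate the unknown prediction error
    from the known errors of the k most similar historical states."""
    distances = np.linalg.norm(hist_states - current_state, axis=1)
    nearest = np.argsort(distances)[:k]
    weights = 1.0 / (distances[nearest] + 1e-9)     # closer analogues get larger weights
    weights /= weights.sum()
    estimated_error = weights @ hist_errors[nearest]
    return current_prediction - estimated_error

# Toy example: states and prediction errors are low-dimensional placeholder vectors.
rng = np.random.default_rng(5)
hist_states = rng.normal(size=(500, 6))              # archive of analysed historical states
hist_errors = rng.normal(scale=0.3, size=(500, 6))   # diagnosed prediction errors for each state
current_state = rng.normal(size=6)
current_prediction = current_state + 0.2             # an uncorrected model forecast
print(ace_correct(current_state, current_prediction, hist_states, hist_errors))
```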
Measurement-based quantum computation with continuous variables, which realizes computation by performing measurement and feedforward of measurement results on a large-scale Gaussian cluster state, provides a feasible way to implement quantum computation. Quantum error correction is an essential procedure to protect quantum information in quantum computation and quantum communication. In this review, we briefly introduce the progress of measurement-based quantum computation and quantum error correction with continuous variables based on Gaussian cluster states. We also discuss the challenges in fault-tolerant measurement-based quantum computation with continuous variables.
Magnetic field gradient tensor measurement is an important technique to obtain position information of magnetic objects. When magnetic field sensors are used to measure the magnetic field gradient as the coefficients of the tensor, field differentiation is generally approximated by field differences. As a result, magnetic object positioning by magnetic field gradient tensor measurement always involves an inherent error caused by the sensor sizes, leading to a reduction in detectable distance and detectable angle. In this paper, the inherent positioning error caused by magnetic field gradient tensor measurement is calculated and corrected by iterations based on the systematic position error distribution patterns. The results show that the detectable distance range and angle range of an AC magnetic object (2.44 A·m^2 at 1 kHz) can be increased from (0.45 m, 0.75 m), (0°, 25°) to (0.30 m, 0.80 m), (0°, 80°), respectively.
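For context, the sketch below uses the standard closed-form single-dipole localization relation r = -3 G^(-1) B (with r the displacement of the observation point from the dipole and G the gradient tensor) and shows how approximating G by finite differences over a sensor baseline introduces exactly the kind of inherent position error the paper corrects. The dipole moment, geometry, and baseline are assumed values, and the paper's iterative correction itself is not reproduced.

```python
import numpy as np

MU0_4PI = 1e-7   # mu_0 / (4*pi) in SI units

def dipole_field(r, m):
    """Flux density of a point dipole with moment m at displacement r (observation - source)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0_4PI * (3.0 * rhat * np.dot(m, rhat) - m) / rn**3

def gradient_tensor(obs, source, m, baseline):
    """G_ij = dB_i/dx_j approximated by central differences over the given sensor baseline."""
    G = np.zeros((3, 3))
    for j in range(3):
        step = np.zeros(3)
        step[j] = baseline / 2.0
        G[:, j] = (dipole_field(obs + step - source, m) -
                   dipole_field(obs - step - source, m)) / baseline
    return G

source = np.array([0.0, 0.0, 0.0])
moment = np.array([0.0, 0.0, 2.44])          # A*m^2, matching the magnitude quoted above
obs = np.array([0.35, 0.20, 0.45])           # assumed observation point (m)

B = dipole_field(obs - source, moment)
for baseline in (1e-4, 0.2):                 # near-ideal differentiation vs. a realistic sensor size
    G = gradient_tensor(obs, source, moment, baseline)
    r_est = -3.0 * np.linalg.solve(G, B)     # displacement of observation point from the dipole
    estimated_source = obs - r_est
    print(baseline, estimated_source, np.linalg.norm(estimated_source - source))
```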
The Longley-Rice channel model accounts for atmospheric refraction by the equivalent earth radius method, which is simple to compute but not accurate. Because it uses only the horizontal distance and does not make use of the vertical profile information, it does not agree with the actual propagation path. The atmospheric refraction error correction method of the Longley-Rice channel model is improved here. The improved method makes full use of the vertical profile information and maps the distance between the receiver and transmitter to the radio wave propagation distance, so it can accurately reflect the influence of propagation distance on the radio wave propagation loss. Simulations show that the predictions agree more closely with the measured data, demonstrating the effectiveness of the improved method.
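For reference, the conventional equivalent earth radius method mentioned here scales the true earth radius by a factor k derived from the vertical refractivity gradient (k ≈ 4/3 for a standard atmosphere); the snippet below computes this factor from an assumed gradient and is unrelated to the paper's improved vertical-profile mapping.

```python
EARTH_RADIUS_KM = 6371.0

def equivalent_earth_radius(dn_dh=-40.0):
    """Effective earth radius (km) from the refractivity gradient dN/dh in N-units per km.
    k = 157 / (157 + dN/dh); dN/dh = -40 N/km gives the familiar k ~ 4/3."""
    k = 157.0 / (157.0 + dn_dh)
    return k, k * EARTH_RADIUS_KM

print(equivalent_earth_radius())   # -> (~1.34, ~8550 km)
```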
In this paper, an analogue correction method of errors (ACE) based on a complicated atmospheric model is further developed and applied to numerical weather prediction (NWP). The analysis shows that the ACE can effectively reduce model errors by combining the statistical analogue method with the dynamical model, so that the information contained in a large amount of historical data is utilized in the current complicated NWP model. Furthermore, in the ACE, the differences of the similarities between different historical analogues and the current initial state are used as the weights for estimating model errors. The results of daily, ten-day, and monthly prediction experiments on a complicated T63 atmospheric model show that the scheme that corrects model errors based on the estimated errors of four historical analogue predictions performs better not only than the scheme that introduces the error correction of each single analogue prediction separately, but also than the T63 model itself.
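The weighting described above can be written compactly: if ε_i denotes the known prediction error of the i-th historical analogue and s_i its similarity to the current initial state, the current model error is estimated as a similarity-weighted mean (a schematic form only; the paper's exact weighting is not specified in the abstract),

$$\hat{\varepsilon} = \sum_{i=1}^{N} w_i\, \varepsilon_i, \qquad w_i = \frac{s_i}{\sum_{j=1}^{N} s_j},$$

and the corrected forecast is obtained by subtracting the estimated error from the model forecast.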