This study explores the application of single photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing. Furthermore, it compares the error correction performance of low-density parity check (LDPC) and Reed-Solomon (RS) codes across different error correction coding types. The effects of unscattered photon ratio and depolarization ratio on BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization OOK multiplexing modulation and 4.37 Mbps with polarization 2-PPM multiplexing modulation using LDPC code error correction.
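As a rough illustration of how OOK and 2-PPM compare on a photon-counting link, the sketch below runs a Monte Carlo BER estimate for an idealized SPD receiver with Poisson photon statistics. The photon numbers, background level, and threshold rule are illustrative assumptions, not the parameters of the system described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_ook_poisson(mean_signal_photons, mean_background_photons,
                    n_bits=100_000, threshold=1):
    """Monte Carlo BER for OOK with a photon-counting (SPD) receiver:
    a slot is declared '1' when its photon count reaches the threshold."""
    bits = rng.integers(0, 2, n_bits)
    counts = rng.poisson(mean_background_photons + bits * mean_signal_photons)
    decisions = (counts >= threshold).astype(int)
    return np.mean(decisions != bits)

def ber_2ppm_poisson(mean_signal_photons, mean_background_photons, n_bits=100_000):
    """Monte Carlo BER for 2-PPM: the bit selects which of two slots carries the
    pulse; the receiver picks the slot with the larger count (ties broken randomly)."""
    bits = rng.integers(0, 2, n_bits)
    slot0 = rng.poisson(mean_background_photons + (bits == 0) * mean_signal_photons)
    slot1 = rng.poisson(mean_background_photons + (bits == 1) * mean_signal_photons)
    tie_break = rng.integers(0, 2, n_bits)
    decisions = np.where(slot0 == slot1, tie_break, (slot1 > slot0).astype(int))
    return np.mean(decisions != bits)

# Hypothetical link: ~3 signal photons per pulse over ~0.2 background photons per slot.
print("OOK BER  :", ber_ook_poisson(3.0, 0.2))
print("2-PPM BER:", ber_2ppm_poisson(3.0, 0.2))
```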
In the original publication, the third author's name was published incorrectly as “Hayatdavoodi Masoud”. The correct author name should read “Masoud Hayatdavoodi” and is available in this correction.
With the widespread use of Chinese globally, the number of Chinese learners has been increasing, leading to various grammatical errors among beginners. Additionally, as domestic industrial informatization efforts grow, electronic documents have also proliferated. When dealing with large numbers of electronic documents and texts written by Chinese beginners, manually written texts often contain hidden grammatical errors, posing a significant challenge to traditional manual proofreading. Correcting these grammatical errors is crucial to ensure fluency and readability. However, certain types of grammatical or logical errors can have a large impact, and manually proofreading a large number of texts individually is clearly impractical. Consequently, research on text error correction techniques has garnered significant attention in recent years. The advent and advancement of deep learning have paved the way for sequence-to-sequence learning methods to be extensively applied to the task of text error correction. This paper presents a comprehensive analysis of Chinese text grammar error correction technology, elaborates on its current research status, discusses existing problems, proposes preliminary solutions, and conducts experiments using judicial documents as an example. The aim is to provide a feasible research approach for Chinese text error correction technology.
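Sequence-to-sequence correction systems are typically evaluated by aligning the model output against a reference correction and extracting edit operations. The sketch below uses Python's difflib to derive such edits for a toy sentence pair; the example sentences and the opcode-based diff are illustrative assumptions, not the evaluation protocol of the paper above.

```python
import difflib

def extract_edits(source, corrected):
    """Align two character sequences and return the edit operations
    (replace/insert/delete) that turn `source` into `corrected`."""
    matcher = difflib.SequenceMatcher(None, source, corrected)
    return [(op, source[i1:i2], corrected[j1:j2])
            for op, i1, i2, j1, j2 in matcher.get_opcodes() if op != "equal"]

# Toy example: a learner sentence and its reference correction (hypothetical pair).
src = "他昨天去了图书馆看书了三个小时。"
ref = "他昨天去图书馆看了三个小时的书。"
for op, before, after in extract_edits(src, ref):
    print(op, repr(before), "->", repr(after))
```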
·AIM: To review existing data for the prevalence of corrected, uncorrected, and inadequately corrected refractive errors and spectacle wear in Hungary. ·METHODS: Data from two nationwide cross-sectional studies were analysed. The Rapid Assessment of Avoidable Blindness study collected population-based representative national data on the prevalence of visual impairment due to uncorrected refractive errors and spectacle coverage in 3523 people aged ≥50y (Group I). The Comprehensive Health Test Program of Hungary provided data on the use of spectacles in 80 290 people aged ≥18y (Group II). ·RESULTS: In Group I, almost half of the survey population showed refractive errors for distant vision, about 10% of which were uncorrected (3.2% of all male participants and 5.0% of females). The distance spectacle coverage was 90.7% (91.9% in males; 90.2% in females). The proportion of inadequate distance spectacles was found to be 33.1%. Uncorrected presbyopia was found in 15.7% of participants. In all age groups (Group II), 65.4% of females and 56.0% of males used distance spectacles, and approximately 28.9% of these spectacles were found to be inappropriate for dioptric power (off by 0.5 dioptres or more). The prevalence of inaccurate distance spectacles was significantly higher in older age groups (71y and above) in both sexes. ·CONCLUSION: According to these population-based data, uncorrected refractive errors are not rare in Hungary. Despite recent national initiatives, further steps are required to reduce uncorrected refractive errors and associated negative effects on vision, such as avoidable visual impairment.
Quantum error correction technology is an important method for eliminating errors during the operation of quantum computers. To address the influence of errors on physical qubits, we propose an approximate error correction scheme that performs dimension mapping operations on surface codes. This error correction scheme utilizes the topological properties of error correction codes to map the surface code dimension to three dimensions. Compared to previous error correction schemes, the present three-dimensional surface code exhibits good scalability due to its higher redundancy and more efficient error correction capabilities. By reducing the number of ancilla qubits required for error correction, this approach achieves savings in measurement space and reduces resource consumption costs. To improve the decoding efficiency and address the correlation between the surface code stabilizers and the 3D space after dimension mapping, we employ a reinforcement learning (RL) decoder based on deep Q-learning, which enables faster identification of the optimal syndrome and achieves better thresholds through conditional optimization. Compared to minimum weight perfect matching decoding, the threshold of the RL-trained model reaches 0.78%, which is 56% higher and enables large-scale fault-tolerant quantum computation.
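The decoder described above is based on deep Q-learning. The sketch below shows only the generic deep Q-learning ingredients such a decoder needs: a Q-network over syndrome bit-vectors and the one-step temporal-difference loss. The network size, the action encoding, and the PyTorch framing are assumptions for illustration, not the architecture used in the study.

```python
import torch
import torch.nn as nn

class SyndromeQNet(nn.Module):
    """Feed-forward Q-network mapping a binary syndrome vector to Q-values over a
    discrete set of candidate correction operations (layer sizes are placeholders)."""
    def __init__(self, n_syndrome_bits, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_syndrome_bits, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, syndrome):
        return self.net(syndrome)

def dqn_loss(q_net, target_net, batch, gamma=0.95):
    """One-step temporal-difference loss used in deep Q-learning."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values
        target = r + gamma * (1 - done) * q_next
    return nn.functional.mse_loss(q_sa, target)

# Tiny synthetic batch: 4 transitions with 8 syndrome bits and 6 candidate corrections.
q_net, target_net = SyndromeQNet(8, 6), SyndromeQNet(8, 6)
batch = (torch.randint(0, 2, (4, 8)).float(),   # syndromes s
         torch.randint(0, 6, (4,)),             # chosen corrections a
         torch.randn(4),                        # rewards r
         torch.randint(0, 2, (4, 8)).float(),   # next syndromes s'
         torch.zeros(4))                        # done flags
print(dqn_loss(q_net, target_net, batch).item())
```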
Quantum metrology provides a fundamental limit on the precision of multi-parameter estimation, called the Heisenberg limit, which has been achieved in noiseless quantum systems. However, for systems subject to noise, it is hard to achieve this limit, since noise tends to destroy quantum coherence and entanglement. In this paper, a combined control scheme with feedback and quantum error correction (QEC) is proposed to achieve the Heisenberg limit in the presence of spontaneous emission, where the feedback control is used to protect a stabilizer code space containing an optimal probe state and an additional control is applied to eliminate the measurement incompatibility among three parameters. Although an ancilla system is necessary for the preparation of the optimal probe state, our scheme does not require the ancilla system to be noiseless. In addition, the control scheme in this paper has a low-dimensional code space. For the three components of a magnetic field, it can achieve the highest estimation precision with only a 2-dimensional code space, while at least a 4-dimensional code space is required in common optimal error correction protocols.
Standard automatic dependent surveillance broadcast (ADS-B) reception algorithms offer considerable performance at high signal-to-noise ratios (SNRs). However, the performance of ADS-B algorithms in applications can be problematic at low SNRs and in high-interference situations, as detecting and decoding techniques may not perform correctly in such circumstances. In addition, conventional error correction algorithms have limitations in their ability to correct errors in ADS-B messages, as the bit and confidence values may be declared inaccurately in the event of low SNRs and high interference. The principal goal of this paper is to deploy a Long Short-Term Memory (LSTM) recurrent neural network model for error correction in conjunction with a conventional algorithm. The data of various flights are collected and cleaned in an initial stage. The clean data is divided randomly into training and test sets. Next, the LSTM model is trained on the training dataset, and the model is then evaluated on the test dataset. The proposed model not only improves the ADS-B In packet error correction rate (PECR), but also enhances ADS-B In sensitivity. The performance evaluation results reveal that the proposed scheme is achievable and efficient for the avionics industry. It is worth noting that the proposed algorithm does not depend on the prerequisites of conventional algorithms.
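To make the LSTM-based correction idea concrete, the sketch below defines a small bidirectional LSTM in PyTorch that reads, for each of the 112 bit positions of an ADS-B extended squitter message, the demodulated hard decision together with its confidence value, and outputs a corrected bit probability. The layer sizes, the input encoding, and the bidirectionality are assumptions for illustration rather than the model reported in the paper.

```python
import torch
import torch.nn as nn

class ADSBBitCorrector(nn.Module):
    """Bidirectional LSTM that reads, per bit position of a 112-bit ADS-B message,
    the hard bit decision and its confidence value, and predicts the probability
    that the transmitted bit was 1 (architecture sizes are placeholders)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                            # x: (batch, 112, 2) = [bit, confidence]
        h, _ = self.lstm(x)
        return torch.sigmoid(self.head(h)).squeeze(-1)   # (batch, 112)

# Toy forward pass on random inputs; real training would pair demodulated flight data
# with CRC-verified messages as labels.
model = ADSBBitCorrector()
demodulated = torch.rand(8, 112, 2)
bit_probabilities = model(demodulated)
print(bit_probabilities.shape)                       # torch.Size([8, 112])
```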
Error correction has long been suggested to extend the sensitivity of quantum sensors into the Heisenberg Limit. However, operations on logical qubits are only performed through universal gate sets consisting of finite-sized gates such as Clifford + T. Although these logical gate sets allow for universal quantum computation, the finite gate sizes present a problem for quantum sensing, since in sensing protocols, such as the Ramsey measurement protocol, the signal must act continuously. The difficulty in constructing a continuous logical operator comes from the Eastin-Knill theorem, which prevents a continuous signal from being both fault-tolerant to local errors and transverse. Since error correction is needed to approach the Heisenberg Limit in a noisy environment, it is important to explore how to construct fault-tolerant continuous operators. In this paper, a protocol to design continuous logical z-rotations is proposed and applied to the Steane Code. The fault tolerance of the designed operator is investigated using the Knill-Laflamme conditions. The Knill-Laflamme conditions indicate that the diagonal unitary operator constructed cannot be fault tolerant solely due to the possibilities of X errors on the middle qubit. The approach demonstrated throughout this paper may, however, find success in codes with more qubits such as the Shor code, distance 3 surface code, [15, 1, 3] code, or codes with a larger distance such as the [11, 1, 5] code.
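The Knill-Laflamme conditions mentioned above require that, for the code-space projector P and an error set {E_i}, P E_i† E_j P = c_ij P for scalars c_ij. The sketch below checks this numerically; the three-qubit repetition code and the single bit-flip error set are a toy stand-in chosen to keep the matrices small, not the Steane code analysis carried out in the paper.

```python
import numpy as np

def satisfies_knill_laflamme(code_basis, errors, tol=1e-9):
    """Check P E_i^† E_j P = c_ij P, where P projects onto the code space spanned by
    the orthonormal columns of `code_basis` (shape: dim x k)."""
    P = code_basis @ code_basis.conj().T
    k = code_basis.shape[1]
    for Ei in errors:
        for Ej in errors:
            M = P @ Ei.conj().T @ Ej @ P
            c = np.trace(M) / k                 # candidate scalar c_ij
            if not np.allclose(M, c * P, atol=tol):
                return False
    return True

# Toy example: the 3-qubit repetition code against single bit-flip (X) errors.
zero_L = np.zeros(8); zero_L[0b000] = 1.0
one_L = np.zeros(8); one_L[0b111] = 1.0
basis = np.stack([zero_L, one_L], axis=1)

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
kron3 = lambda a, b, c: np.kron(np.kron(a, b), c)
errors = [kron3(I2, I2, I2), kron3(X, I2, I2), kron3(I2, X, I2), kron3(I2, I2, X)]

print(satisfies_knill_laflamme(basis, errors))   # True: single bit flips are correctable
```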
AIM: To investigate the prevalence of visual impairment (VI) and provide an estimation of uncorrected refractive errors in school-aged children, conducted by optometry students as a community service. METHODS: The study was cross-sectional. Totally 3343 participants were included in the study. The initial examination involved assessing the uncorrected distance visual acuity (UDVA) and visual acuity (VA) while using a +2.00 D lens. The inclusion criteria for a subsequent comprehensive cycloplegic eye examination, performed by an optometrist, were as follows: a UDVA <0.6 decimal (0.20 logMAR) and/or a VA with +2.00 D ≥0.8 decimal (0.96 logMAR). RESULTS: The sample had a mean age of 10.92±2.13y (range 4 to 17y), and 51.3% of the children were female (n=1715). The majority of the children (89.7%) fell within the age range of 8 to 14y. Among the ethnic groups, the highest representation was from the Luhya group (60.6%) followed by Luo (20.4%). Mean logMAR UDVA, choosing the best eye for each student, was 0.29±0.17 (range 1.70 to 0.22). Out of the total, 246 participants (7.4%) had a full eye examination. The estimated prevalence of myopia (defined as spherical equivalent ≤-0.5 D) was 1.45% of the total sample, while around 0.18% of the total sample had hyperopia exceeding +1.75 D. Refractive astigmatism (cylinder <-0.75 D) was found in 0.21% (7/3343) of the children. The VI prevalence was 1.26% of the total sample. Among our cases of VI, 76.2% could be attributed to uncorrected refractive error. Amblyopia was detected in 0.66% (22/3343) of the screened children. There was no statistically significant correlation observed between age or gender and refractive values. CONCLUSION: The primary cause of VI is determined to be uncorrected refractive errors, with myopia being the most prevalent refractive error observed. These findings underscore the significance of early identification and correction of refractive errors in school-aged children as a means to alleviate the impact of VI.
In the version of this Article originally published online, there was an error in the schematics of Figures 2b and 2c. These errors have now been corrected in the original article.
Following publication of the original article [1], the authors reported an error in the last author's name: it was mistakenly written as “Jun Den”. The correct author's name, “Jun Deng”, has been updated in this Correction.
In order to obtain more accurate precipitation data and better simulate the precipitation on the Tibetan Plateau, the simulation capability of 14 Coupled Model Intercomparison Project Phase 6 (CMIP6) models for historical precipitation (1982-2014) on the Qinghai-Tibetan Plateau was evaluated in this study. Results indicate that all models overestimate precipitation, as shown by analysis of the Taylor index and temporal and spatial statistical parameters. To correct the overestimation, a fusion correction method combining Backpropagation Neural Network correction (BP) and Quantum Mapping (QM) correction, named the BQ method, was proposed. With this method, the historical precipitation of each model was corrected in space and time, respectively. The correction results were then compared, in time, in space, and by analysis of variance (ANOVA), with those corrected by the BP and QM methods, respectively. Finally, the fusion-corrected results for each model were compared with the Climatic Research Unit (CRU) data for significance analysis to obtain the trends of precipitation increase and decrease for each model. The results show that the IPSL-CM6A-LR model is relatively good at simulating historical precipitation on the Qinghai-Tibetan Plateau (R=0.7, RMSE=0.15) among the uncorrected data. In terms of time, the total precipitation corrected by the fusion method has the same interannual trend as, and the closest precipitation values to, the CRU data. In terms of space, the annual average precipitation corrected by the fusion method has the smallest difference from the CRU data, and the total historical annual average precipitation is not significantly different from the CRU data, which is better than BP and QM. Therefore, the correction effect of the fusion method on the historical precipitation of each model is better than that of the QM and BP methods. The precipitation in the central and northeastern parts of the plateau shows a significant increasing trend. The correlation coefficients between monthly precipitation and site-detected precipitation for all models after BQ correction exceed 0.8.
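The QM component is read here as the standard empirical quantile-mapping bias correction commonly used for precipitation (an assumption on our part, since the abstract expands QM as “Quantum Mapping”); the BP neural network stage and the fusion step are not shown. The sketch below maps model values through the historical model CDF and back through the observed quantile function.

```python
import numpy as np

def quantile_mapping(model_hist, obs_hist, model_values, n_quantiles=100):
    """Empirical quantile mapping: send each model value through the historical model
    CDF, then back through the observed quantile function."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    cdf_vals = np.interp(model_values, model_q, q)   # approximate model CDF
    return np.interp(cdf_vals, q, obs_q)             # invert with observed quantiles

# Toy example: a model that overestimates precipitation by roughly 40%.
rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=2.0, size=5000)           # "observed" monthly totals
model = 1.4 * rng.gamma(shape=2.0, scale=2.0, size=5000)   # biased "model" totals
corrected = quantile_mapping(model, obs, model)
print(model.mean().round(2), corrected.mean().round(2), obs.mean().round(2))
```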
Laser tracers are three-dimensional coordinate measurement systems that are widely used in industrial measurement. We propose a geometric error identification method based on multi-station synchronized laser tracers to enable the rapid and high-precision measurement of geometric errors for gantry-type computer numerical control (CNC) machine tools. This method also addresses the measurement efficiency issues of the existing single-base-station measurement method and multi-base-station time-sharing measurement method. We consider a three-axis gantry-type CNC machine tool, and the geometric error mathematical model is derived and established based on the combination of screw theory and a topological analysis of the machine kinematic chain. The positions of the four laser tracer stations and the measurement points are determined based on the multi-point positioning principle. A self-calibration algorithm is proposed for the coordinate calibration process of the laser tracers using the Levenberg-Marquardt nonlinear least squares method, and the geometric error is solved using Taylor's first-order linearization iteration. The experimental results show that the geometric error calculated based on this modeling method is comparable to the results from the Etalon laser tracer. For a volume of 800 mm×1000 mm×350 mm, the maximum differences of the linear, angular, and spatial position errors were 2.0 μm, 2.7 μrad, and 12.0 μm, respectively, which verifies the accuracy of the proposed algorithm. This research proposes a modeling method for the precise measurement of errors in machine tools, and the applied nature of this study also makes it relevant both to researchers and those in the industrial sector.
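At the heart of the multi-point positioning step is a multilateration problem: each measured point is recovered from its distances to the four tracer stations by nonlinear least squares. The sketch below solves that sub-problem with a Levenberg-Marquardt refinement; the station layout, noise level, and use of scipy are illustrative assumptions, and the self-calibration of the station coordinates and the separation of the machine's geometric error components are not shown.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical station layout (metres); the real setup uses calibrated tracer positions.
stations = np.array([[0.0, 0.0, 0.00],
                     [0.8, 0.0, 0.10],
                     [0.0, 1.0, 0.10],
                     [0.8, 1.0, 0.35]])
true_point = np.array([0.4, 0.5, 0.2])

rng = np.random.default_rng(0)
measured = np.linalg.norm(stations - true_point, axis=1) + rng.normal(0.0, 1e-6, 4)

def residuals(p):
    """Predicted minus measured station-to-point distances."""
    return np.linalg.norm(stations - p, axis=1) - measured

# Levenberg-Marquardt refinement from a rough initial guess.
sol = least_squares(residuals, x0=np.array([0.5, 0.5, 0.3]), method="lm")
print(sol.x, np.linalg.norm(sol.x - true_point))
```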
Correction to: Nano-Micro Letters (2024) 16:112, https://doi.org/10.1007/s40820-024-01327-2. In the supplementary information the following corrections have been carried out: 1. Institute of Energy and Climate Research, Materials Synthesis and Processing, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany. Corrected: Institute of Energy and Climate Research: Materials Synthesis and Processing (IEK-1), Forschungszentrum Jülich GmbH, 52425 Jülich, Germany.
In the era of exponential growth of data availability, the architecture of systems has a trend toward high dimensionality, and directly exploiting holistic information for state inference is not always computationally affordable. This paper proposes a novel Bayesian filtering algorithm that considers algorithmic computational cost and estimation accuracy for high-dimensional linear systems. The high-dimensional state vector is divided into several blocks to save computation resources by avoiding the calculation of error covariance with immense dimensions. After that, two sequential states are estimated simultaneously by introducing an auxiliary variable in the new probability space, mitigating the performance degradation caused by state segmentation. Moreover, the computational cost and error covariance of the proposed algorithm are analyzed analytically to show its distinct features compared with several existing methods. Simulation results illustrate that the proposed Bayesian filtering can maintain a higher estimation accuracy with reasonable computational cost when applied to high-dimensional linear systems.
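To illustrate the computational motivation for splitting the state vector, the sketch below runs a Kalman predict/update step independently on each block, so no full-dimension covariance matrix is ever formed. This shows only the block-partitioning idea; the cross-block correlations that the paper recovers via an auxiliary variable and the joint estimation of two sequential states are deliberately omitted, and all matrices here are hypothetical placeholders.

```python
import numpy as np

def block_kalman_step(blocks, z_blocks, A_blocks, H_blocks, Q_blocks, R_blocks):
    """One predict/update step of a Kalman filter applied block by block.
    Cross-block covariance is deliberately dropped, which is the computational
    saving alluded to above, at the price of some accuracy."""
    new_blocks = []
    for (x, P), z, A, H, Q, R in zip(blocks, z_blocks, A_blocks,
                                     H_blocks, Q_blocks, R_blocks):
        x_pred = A @ x                                   # predict
        P_pred = A @ P @ A.T + Q
        S = H @ P_pred @ H.T + R                         # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)            # update
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        new_blocks.append((x_new, P_new))
    return new_blocks

# Toy usage: two hypothetical 2-D blocks, random-walk dynamics, direct observation.
rng = np.random.default_rng(0)
I = np.eye(2)
blocks = [(np.zeros(2), I.copy()), (np.zeros(2), I.copy())]
z_blocks = [rng.normal(size=2), rng.normal(size=2)]
blocks = block_kalman_step(blocks, z_blocks, A_blocks=[I, I], H_blocks=[I, I],
                           Q_blocks=[0.01 * I, 0.01 * I], R_blocks=[0.1 * I, 0.1 * I])
print(blocks[0][0])
```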
In the article ‘MicroRNA-329-3p inhibits the Wnt/β-catenin pathway and proliferation of osteosarcoma cells by targeting transcription factor 7-like 1’ (Oncology Research, 2024, Vol. 32, No. 3, pp. 463−476. doi: 10.32604/or.2023.044085), there was an error in the compilation of Fig. 8D. We have revised Fig. 8D to correct this error. A corrected version of Fig. 8 is provided. This correction does not change any results or conclusions of the article. We apologize for any inconvenience caused.
In this paper, an antenna array composed of a circular array and orthogonal linear arrays is proposed, combining a long- and short-baseline “orthogonal linear array” design with a circular-array ambiguity resolution design based on multi-group baseline clustering. The effectiveness of the antenna array is verified by extensive simulations and experiments. After system deviation correction, it is found that in the L/S/C/X frequency bands the ambiguity resolution probability is high, and the phase-difference system error between channels is essentially the same. The angle measurement error is less than 0.5°, and the positioning error is less than 2.5 km. Notably, as the center frequency increases, calibration consistency improves, and the calibration frequency points become applicable over a wider frequency range. At a center frequency of 11.5 GHz, the calibration frequency point bandwidth extends to 1200 MHz. This combined antenna array deployment holds significant promise for a wide range of applications in contemporary wireless communication systems.
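The long/short-baseline idea can be illustrated with a two-element phase interferometer: the short baseline gives a coarse but unambiguous angle estimate, which is then used to pick the correct 2π ambiguity of the long, more precise baseline. The frequency, baseline lengths, and incidence angle below are hypothetical, and the circular-array, multi-group baseline-clustering resolution described above is not reproduced.

```python
import numpy as np

# Hypothetical two-element interferometer at S band; not the paper's array geometry.
c = 3.0e8
f = 3.0e9
lam = c / f
d_short = 0.45 * lam        # short baseline: unambiguous over a wide field of view
d_long = 5.0 * lam          # long baseline: precise but phase-ambiguous
theta_true = np.deg2rad(23.0)

def wrapped_phase(d, theta):
    """Inter-element phase difference for a plane wave from angle theta, wrapped to [-pi, pi)."""
    phi = 2.0 * np.pi * d * np.sin(theta) / lam
    return (phi + np.pi) % (2.0 * np.pi) - np.pi

phi_s = wrapped_phase(d_short, theta_true)
phi_l = wrapped_phase(d_long, theta_true)

# Coarse but unambiguous estimate from the short baseline.
sin_coarse = phi_s * lam / (2.0 * np.pi * d_short)

# Use the coarse estimate to resolve the integer 2*pi ambiguity of the long baseline.
k = np.round((2.0 * np.pi * d_long * sin_coarse / lam - phi_l) / (2.0 * np.pi))
sin_fine = (phi_l + 2.0 * np.pi * k) * lam / (2.0 * np.pi * d_long)

print(np.rad2deg(np.arcsin(sin_coarse)), np.rad2deg(np.arcsin(sin_fine)))
```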
Raman spectroscopy has found extensive use in monitoring and controlling cell culture processes. In this context, the prediction accuracy of Raman-based models is of paramount importance. However, models established with data from manually fed-batch cultures often exhibit poor performance in Raman-controlled cultures. Thus, there is a need for effective methods to rectify these models. The objective of this paper is to investigate the efficacy of the Kalman filter (KF) algorithm in correcting Raman-based models during cell culture. Initially, partial least squares (PLS) models for different components were constructed using data from manually fed-batch cultures, and the predictive performance of these models was compared. Subsequently, various correction methods, including the PLS-KF-KF method proposed in this study, were employed to refine the PLS models. Finally, a case study involving the auto-control of glucose concentration demonstrated the application of the optimal model correction method. The results indicated that the original PLS models exhibited differential performance between manually fed-batch cultures and Raman-controlled cultures. For glucose, the root mean square error of prediction (RMSEP) of the manually fed-batch culture and the Raman-controlled culture was 0.23 and 0.40 g·L^(-1), respectively. With the implementation of model correction methods, there was a significant improvement in model performance within Raman-controlled cultures. The RMSEP for glucose from updating-PLS, KF-PLS, and PLS-KF-KF was 0.38, 0.36 and 0.17 g·L^(-1), respectively. Notably, the proposed PLS-KF-KF model correction method was found to be more effective and stable, playing a vital role in the automated nutrient feeding of cell cultures.
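As a minimal picture of Kalman-filter-based model correction (not the paper's PLS-KF-KF scheme), the sketch below tracks a slowly drifting bias between PLS predictions and occasional offline reference measurements with a one-dimensional Kalman filter, and subtracts it from subsequent predictions. The glucose trajectory, drift magnitude, noise levels, and sampling times are all assumed for illustration.

```python
import numpy as np

def kf_bias_correction(pls_predictions, ref_times, ref_values, q=0.01, r=0.04):
    """Track a slowly drifting bias between Raman/PLS predictions and occasional
    offline reference measurements with a 1-D Kalman filter, and subtract it."""
    bias, P = 0.0, 1.0
    refs = dict(zip(ref_times, ref_values))
    corrected = []
    for t, y_pls in enumerate(pls_predictions):
        P += q                                   # random-walk prediction of the bias
        if t in refs:                            # update whenever an offline sample exists
            innovation = (y_pls - refs[t]) - bias
            K = P / (P + r)
            bias += K * innovation
            P *= (1.0 - K)
        corrected.append(y_pls - bias)
    return np.array(corrected)

# Toy usage: PLS readings drift ~0.3 g/L above a declining "true" glucose trajectory.
true_glucose = np.linspace(6.0, 2.0, 50)
noise = np.random.default_rng(2).normal(0.0, 0.05, 50)
pls = true_glucose + 0.3 + noise
ref_times = [0, 10, 20, 30, 40]
corrected = kf_bias_correction(pls, ref_times, true_glucose[ref_times])
print(round(float(np.mean(np.abs(corrected - true_glucose))), 3))
```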