AIM: To investigate the influence of postoperative intraocular lens (IOL) position on the accuracy of cataract surgery and to examine the predictive factors of postoperative biometry prediction errors using the Barrett Universal II (BUII) IOL formula for calculation.
METHODS: The prospective study included patients who had undergone cataract surgery performed by a single surgeon from June 2020 to April 2022. The collected data included the best-corrected visual acuity (BCVA), corneal curvature, preoperative and postoperative central anterior chamber depths (ACD), axial length (AXL), IOL power, and refractive error. The BUII formula was used to calculate the IOL power. The mean absolute error (MAE) was calculated, and all participants were divided into two groups accordingly. Independent t-tests were applied to compare variables between the groups. Logistic regression analysis was used to analyze the influence of age, AXL, corneal curvature, and preoperative and postoperative ACD on the MAE.
RESULTS: A total of 261 patients were enrolled; 243 (93.1%) and 18 (6.9%) had a postoperative MAE <1 D and >1 D, respectively. The proportion of females was higher among patients with MAE >1 D (χ²=3.833, P=0.039). The postoperative BCVA (logMAR) of patients with MAE >1 D was significantly worse (t=−2.448; P=0.025). After adjusting for gender in the logistic model, the risk of postoperative refractive error was higher in patients with a shallow postoperative anterior chamber [odds ratio=0.346; 95% confidence interval (CI): 0.164–0.730, P=0.005].
CONCLUSION: Risk factors for biometry prediction error after cataract surgery include the patient's sex and postoperative ACD. Patients with a shallow postoperative anterior chamber are prone to refractive error.
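The logistic-regression step above can be sketched on synthetic data. Everything below is invented for illustration (the ACD distribution and the true effect size are assumptions, not the study's data); it only shows how an odds ratio like the reported 0.346 per mm of postoperative ACD is read off a fitted logistic coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
acd = rng.normal(4.5, 0.5, n)            # hypothetical postoperative ACD (mm)
# assumed: shallower chamber -> higher risk of MAE > 1 D (coefficient -1.3 is made up)
logit = 4.0 - 1.3 * acd
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# fit logistic regression by Newton-Raphson iterations
X = np.column_stack([np.ones(n), acd])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))      # current predicted probabilities
    W = p * (1 - p)                      # IRLS weights
    grad = X.T @ (y - p)
    hess = (X * W[:, None]).T @ X
    beta += np.linalg.solve(hess, grad)

odds_ratio = float(np.exp(beta[1]))      # odds ratio per 1 mm increase in ACD
```

An odds ratio below 1 here means each additional millimeter of ACD lowers the odds of a large prediction error, matching the direction the abstract reports.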
Semantic communication (SemCom) aims to achieve high-fidelity information delivery under low communication consumption by guaranteeing only semantic accuracy. Nevertheless, semantic communication still suffers from unexpected channel volatility, and thus developing a re-transmission mechanism (e.g., hybrid automatic repeat request [HARQ]) becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can sufficiently utilize the information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. Therefore, there emerges a strong incentive to revolutionize the CRC mechanism so as to more effectively reap the benefits of both SemCom and HARQ. In this paper, built on top of Swin Transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, different from the conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which capably digs out the inner topological and geometric information of images, to capture semantic information and determine the necessity of re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited bandwidth conditions, and manifest the superiority of the TDA-based error detection method in image transmission.
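For contrast with the TDA-based detector, the conventional bit-level CRC baseline that the framework replaces can be shown in a few lines: any single bit flip fails the check and forces a re-transmission, even when the flipped bit would not change the semantic content of a reconstructed image. The payload below is an arbitrary stand-in, not the paper's data.

```python
import zlib

payload = b"reconstructed image bytes"      # stand-in for a transmitted bitstream
crc_tx = zlib.crc32(payload)                # checksum attached by the sender

# receiver side: flip one bit in the first byte
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
crc_ok = zlib.crc32(corrupted) == crc_tx    # bit-level check fails regardless of
                                            # the semantic impact of the flip
```

This all-or-nothing behavior is exactly why a semantic-level (here, TDA-based) check can cut unnecessary re-transmissions.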
In existing landslide susceptibility prediction (LSP) models, the influence of random errors in landslide conditioning factors on LSP is not considered; instead, the original conditioning factors are directly taken as model inputs, which brings uncertainties to the LSP results. This study aims to reveal how different proportions of random error in conditioning factors influence LSP uncertainty, and further to explore a method that can effectively reduce the random errors in conditioning factors. The original conditioning factors are first used to construct original-factor-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding error-based LSP models. Secondly, low-pass-filter-based LSP models are constructed by eliminating the random errors using the low-pass filter method. Thirdly, Ruijin County, China, with 370 landslides and 16 conditioning factors, is used as the study case, and three typical machine learning models, i.e., multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as the LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) the low-pass filter can effectively reduce the random errors in conditioning factors and thereby decrease the LSP uncertainties; (2) as the proportion of random error increases from 5% to 20%, the LSP uncertainty increases continuously; (3) the original-factor-based models are feasible for LSP in the absence of more accurate conditioning factors; (4) the influence of the two uncertainty sources, the machine learning models and the different proportions of random error, on LSP modeling is large and basically comparable; and (5) Shapley values effectively explain the internal mechanism by which a machine learning model predicts landslide susceptibility. In conclusion, a greater proportion of random error in the conditioning factors results in higher LSP uncertainty, and the low-pass filter can effectively reduce these random errors.
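The error-injection and low-pass-filtering steps can be sketched as follows. This is a simplified stand-in (a sine curve for a smooth conditioning factor, a moving average for the low-pass filter); the paper's actual factors and filter design may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 4 * np.pi, 500)
factor = np.sin(x)                                  # stand-in smooth conditioning factor
noisy = factor + 0.10 * rng.standard_normal(500)    # inject ~10% random error

# simple moving-average low-pass filter (window of 5 samples)
kernel = np.ones(5) / 5
filtered = np.convolve(noisy, kernel, mode="same")

rmse = lambda a: float(np.sqrt(np.mean((a - factor) ** 2)))
```

Averaging over a short window suppresses the high-frequency random error while barely distorting the slowly varying factor, which is the mechanism the study exploits.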
AIM: To investigate the prevalence of visual impairment (VI) and provide an estimation of uncorrected refractive errors in school-aged children, conducted by optometry students as a community service.
METHODS: The study was cross-sectional. A total of 3343 participants were included. The initial examination involved assessing the uncorrected distance visual acuity (UDVA) and the visual acuity (VA) while using a +2.00 D lens. The inclusion criteria for a subsequent comprehensive cycloplegic eye examination, performed by an optometrist, were a UDVA <0.6 decimal (0.20 logMAR) and/or a VA with +2.00 D ≥0.8 decimal (0.10 logMAR).
RESULTS: The sample had a mean age of 10.92±2.13 years (range 4 to 17), and 51.3% of the children were female (n=1715). The majority of the children (89.7%) fell within the age range of 8 to 14 years. Among the ethnic groups, the highest representation was the Luhya group (60.6%), followed by the Luo (20.4%). The mean logMAR UDVA, taking the better eye of each student, was 0.29±0.17 (range 1.70 to 0.22). In total, 246 participants (7.4%) had a full eye examination. The estimated prevalence of myopia (defined as spherical equivalent ≤−0.5 D) was 1.45% of the total sample, while around 0.18% of the total sample had hyperopia exceeding +1.75 D. Refractive astigmatism (cylinder <−0.75 D) was found in 0.21% (7/3343) of the children. The VI prevalence was 1.26% of the total sample. Among our cases of VI, 76.2% could be attributed to uncorrected refractive error. Amblyopia was detected in 0.66% (22/3343) of the screened children. No statistically significant correlation was observed between age or gender and refractive values.
CONCLUSION: The primary cause of VI is uncorrected refractive error, with myopia being the most prevalent refractive error observed. These findings underscore the significance of early identification and correction of refractive errors in school-aged children as a means to alleviate the impact of VI.
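The two acuity scales used in the screening criteria are related by a fixed conversion, logMAR = −log10(decimal VA), which a one-line helper makes explicit:

```python
import math

def to_logmar(decimal_va: float) -> float:
    """Convert decimal visual acuity to logMAR: logMAR = -log10(decimal VA)."""
    return -math.log10(decimal_va)
```

So decimal 1.0 maps to logMAR 0.0 (normal acuity), and each 10-fold drop in decimal acuity adds 1.0 logMAR; worse vision means a larger logMAR value.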
The laser tracer is a three-dimensional coordinate measurement system widely used in industrial measurement. We propose a geometric error identification method based on multi-station synchronization of laser tracers to enable the rapid and high-precision measurement of geometric errors of gantry-type computer numerical control (CNC) machine tools. This method also addresses the measurement efficiency issues of the existing single-base-station measurement method and the multi-base-station time-sharing measurement method. We consider a three-axis gantry-type CNC machine tool, and the geometric error mathematical model is derived and established by combining screw theory with a topological analysis of the machine kinematic chain. The positions of the four laser tracer stations and the measurement points are determined based on the multi-point positioning principle. A self-calibration algorithm is proposed for the coordinate calibration of the laser tracers using the Levenberg-Marquardt nonlinear least-squares method, and the geometric error is solved using Taylor first-order linearization iteration. The experimental results show that the geometric error calculated with this modeling method is comparable to the results from the Etalon laser tracer. For a volume of 800 mm×1000 mm×350 mm, the maximum differences of the linear, angular, and spatial position errors were 2.0 μm, 2.7 μrad, and 12.0 μm, respectively, which verifies the accuracy of the proposed algorithm. This research proposes a modeling method for the precise measurement of machine tool errors, and its applied nature makes it relevant both to researchers and to the industrial sector.
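The multi-point positioning principle can be sketched for the forward case: locating one point from ranges to four known stations by iterative linearization. This is a simplified illustration (station coordinates are invented, stations are assumed already calibrated, and plain Gauss-Newton is used where the paper's self-calibration adds Levenberg-Marquardt damping and also solves for the stations).

```python
import numpy as np

# four assumed tracer-station positions (m) and ideal ranges to an unknown point
stations = np.array([[0.0, 0.0, 0.0],
                     [0.9, 0.1, 0.0],
                     [0.1, 1.0, 0.05],
                     [0.5, 0.5, 0.8]])
target = np.array([0.4, 0.3, 0.2])
d = np.linalg.norm(stations - target, axis=1)   # noise-free range measurements

p = np.array([0.0, 0.0, 0.5])                   # rough initial guess
for _ in range(20):                             # Gauss-Newton iterations
    diff = p - stations
    dist = np.linalg.norm(diff, axis=1)
    r = dist - d                                # range residuals
    J = diff / dist[:, None]                    # Jacobian of |p - s_i| w.r.t. p
    p = p - np.linalg.solve(J.T @ J, J.T @ r)
```

With noisy ranges and unknown station coordinates, the same residual/Jacobian structure is stacked over all measurement points, which is where the LM damping becomes important.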
In the era of exponential growth of data availability, system architectures trend toward high dimensionality, and directly exploiting holistic information for state inference is not always computationally affordable. This paper proposes a novel Bayesian filtering algorithm that accounts for both algorithmic computational cost and estimation accuracy in high-dimensional linear systems. The high-dimensional state vector is divided into several blocks to save computational resources by avoiding the calculation of error covariances of immense dimension. After that, two sequential states are estimated simultaneously by introducing an auxiliary variable in the new probability space, mitigating the performance degradation caused by state segmentation. Moreover, the computational cost and error covariance of the proposed algorithm are analyzed analytically to show its distinct features compared with several existing methods. Simulation results illustrate that the proposed Bayesian filter maintains higher estimation accuracy at reasonable computational cost when applied to high-dimensional linear systems.
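Why block segmentation saves computation, and when it is harmless, can be shown on the covariance update alone. The toy setup below (identity measurement matrix, block-diagonal prior) is an assumption for illustration: in that special case the block-wise update, costing O(k·(n/k)³) instead of O(n³), reproduces the full update exactly; with nonzero cross-covariances it would not, which is the degradation the paper's auxiliary-variable construction mitigates.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 3                        # state dimension, number of blocks
bs = n // k
P = np.zeros((n, n))
for i in range(k):                 # block-diagonal prior covariance (no cross terms)
    A = rng.standard_normal((bs, bs))
    P[i*bs:(i+1)*bs, i*bs:(i+1)*bs] = A @ A.T + bs * np.eye(bs)
R = np.eye(n)                      # measurement noise, with H = I

# full-dimension Kalman covariance update: P' = P - P (P + R)^-1 P
P_full = P - P @ np.linalg.solve(P + R, P)

# block-wise update: each block handled independently
P_block = np.zeros_like(P)
for i in range(k):
    s = slice(i*bs, (i+1)*bs)
    Pb = P[s, s]
    P_block[s, s] = Pb - Pb @ np.linalg.solve(Pb + np.eye(bs), Pb)
```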
In this paper, an antenna array composed of a circular array and an orthogonal linear array is proposed, using a long- and short-baseline orthogonal linear array design together with a circular-array ambiguity-resolution design based on multi-group baseline clustering. The effectiveness of the antenna array is verified by extensive simulations and experiments. After systematic deviation correction, it is found that in the L/S/C/X frequency bands the ambiguity-resolution probability is high and the phase-difference systematic error between channels is basically the same. The angle measurement error is less than 0.5°, and the positioning error is less than 2.5 km. Notably, as the center frequency increases, calibration consistency improves, and the calibration frequency points become applicable over a wider frequency range. At a center frequency of 11.5 GHz, the calibration frequency point bandwidth extends to 1200 MHz. This combined antenna array deployment holds significant promise for a wide range of applications in contemporary wireless communication systems.
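The long/short-baseline idea can be sketched in its textbook one-dimensional form (the wavelength, baseline lengths, and angle below are illustrative, not the paper's parameters): the short baseline gives a coarse but unambiguous angle, which then selects the integer phase ambiguity of the long, precise baseline.

```python
import math

LAM = 0.03                               # assumed wavelength (m), ~10 GHz
D_SHORT, D_LONG = 0.4 * LAM, 4.0 * LAM   # short baseline < lambda/2: unambiguous
theta_true = math.radians(20.0)

def wrap(phi):                           # wrap phase to (-pi, pi]
    return (phi + math.pi) % (2 * math.pi) - math.pi

phi_s = wrap(2 * math.pi * D_SHORT * math.sin(theta_true) / LAM)
phi_l = wrap(2 * math.pi * D_LONG * math.sin(theta_true) / LAM)

# coarse estimate from the short baseline
sin_coarse = phi_s * LAM / (2 * math.pi * D_SHORT)
# resolve the long baseline's 2*pi ambiguity using the coarse estimate
k = round((2 * math.pi * D_LONG * sin_coarse / LAM - phi_l) / (2 * math.pi))
sin_fine = (phi_l + 2 * math.pi * k) * LAM / (2 * math.pi * D_LONG)
theta_est = math.degrees(math.asin(sin_fine))
```

Multi-group baseline clustering on a circular array generalizes this one-dimensional resolution step to many baseline pairs and both angle coordinates.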
In visual measurement, high-precision camera calibration often employs circular targets. To address issues in mainstream methods, such as the eccentricity error introduced by using the circle's center for calibration, overfitting or local minima from full-parameter optimization, and calibration errors due to neglecting the center of distortion, a stepwise camera calibration method incorporating compensation for the eccentricity error is proposed to enhance monocular camera calibration precision. Initially, a multi-image distortion correction method calculates the common center of distortion and the distortion coefficients, improving precision, stability, and efficiency compared with single-image distortion correction methods. Subsequently, the projection of the circle's center is compared with the center of the projected contour to iteratively correct the eccentricity error, leading to more precise and stable calibration. Finally, nonlinear optimization refines the calibration parameters to minimize the reprojection error and boost precision. Together, these steps achieve a stepwise camera calibration with enhanced robustness. In addition, a module-comparison experiment showed that both the eccentricity error compensation and the camera parameter optimization improve calibration precision, with the latter having the greater impact; using the two together further improves precision and stability. Simulations and experiments confirmed that the proposed method achieves high precision, stability, and robustness, making it suitable for high-precision visual measurements.
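The distortion-correction ingredient can be illustrated with the simplest radial model. This is a deliberately reduced sketch (a single coefficient k1 about an origin-centered distortion center), not the paper's multi-image method: the forward model is easy, and undistortion inverts it by fixed-point iteration.

```python
def distort(xu, yu, k1):
    """Forward radial distortion in normalized coordinates, one coefficient k1."""
    r2 = xu * xu + yu * yu
    s = 1 + k1 * r2
    return xu * s, yu * s

def undistort(xd, yd, k1, iters=20):
    """Invert the radial model by fixed-point iteration on the radius."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        xu, yu = xd / (1 + k1 * r2), yd / (1 + k1 * r2)
    return xu, yu

# round trip: distort a point, then recover it
xd, yd = distort(0.3, -0.2, -0.1)
xu, yu = undistort(xd, yd, -0.1)
```

Estimating a *common* distortion center and coefficients jointly over many images, as the paper does, constrains this same model far better than fitting it per image.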
Timer error, as well as its convention, is very important for dose accuracy during irradiation. This paper determines the timer error of the irradiators at the Secondary Standard Dosimetry Laboratory (SSDL) in Nigeria: the Cs-137 OB6 irradiator and the X-ray irradiator at the protection-level SSDL, and the Co-60 irradiator at the therapy-level SSDL. A PTW UNIDOS electrometer and an LS01 ionization chamber were used at the protection level to obtain doses for both the Cs-137 OB6 and X-ray irradiators, while an IBA Farmer-type ionization chamber and an IBA DOSE 1 electrometer were used at the therapy-level SSDL. The single/multiple-exposure method and the graphical method were used to determine the timer error of the three irradiators. The timer error obtained for the Cs-137 OB6 irradiator was 0.48 ± 0.01 s, that for the X-ray irradiator was 0.09 ± 0.01 s, and that for the Co-60 GammaBeam X200 was 1.21 ± 0.04 s. Neither source-to-detector distance nor field size was found to affect the timer error. The timer error of the Co-60 GammaBeam X200 irradiator (the only one of the three with a pneumatic source-transfer system) increases with the age of the machine.
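The graphical method reduces to a straight-line fit: the collected charge follows M = Ṁ·(t_set + τ), so plotting charge against set time gives the timer error τ as intercept/slope. The dose rate and timer error below are illustrative values, not the paper's measurements.

```python
import numpy as np

MDOT, TAU = 2.0, 0.48                       # assumed dose rate (nC/s) and timer error (s)
t_set = np.array([10.0, 20.0, 30.0, 60.0, 120.0])   # set irradiation times (s)
charge = MDOT * (t_set + TAU)               # synthetic electrometer readings

slope, intercept = np.polyfit(t_set, charge, 1)
tau_est = intercept / slope                 # graphical-method timer error
```

A positive τ means the beam is effectively on slightly longer than the set time, which is why τ must be added to (or the convention otherwise fixed for) every timed exposure.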
Numerical weather prediction (NWP) models have always presented large forecasting errors of surface wind speeds over regions with complex terrain. In this study, surface wind forecasts from an operational NWP model, the SMS-WARR (Shanghai Meteorological Service-WRF ADAS Rapid Refresh System), are analyzed to quantitatively reveal the relationships between forecasted surface wind speed errors and terrain features, with the intent of providing clues for better applying the NWP model to complex terrain regions. The terrain features are described by three parameters: the standard deviation of the model grid-scale orography, the terrain height error of the model, and the slope angle. The results show that the forecast bias has a unimodal distribution as the standard deviation of orography changes. The minimum ME (the mean bias) is 1.2 m s⁻¹ when the standard deviation is between 60 and 70 m. A positive correlation exists between bias and terrain height error, with the ME increasing by 10%-30% for every 200 m increase in terrain height error. The ME decreases by 65.6% when the slope angle increases from (0.5°-1.5°) to larger than 3.5° for uphill winds, but increases by 35.4% when the absolute value of the slope angle increases from (0.5°-1.5°) to (2.5°-3.5°) for downhill winds. Several sensitivity experiments were carried out with a model output statistics (MOS) calibration model for surface wind speeds; the ME (RMSE) was reduced by 90% (30%) by introducing the terrain parameters, demonstrating the value of this study.
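A MOS calibration of the kind used in the sensitivity experiments can be sketched as a linear regression of observations on the raw forecast plus a terrain predictor. All numbers below are synthetic assumptions (a bias proportional to terrain height error is built in) purely to show the mechanism of the correction, not the SMS-WARR statistics.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
terrain_err = rng.normal(0.0, 200.0, n)      # model terrain height error (m)
truth = rng.normal(5.0, 2.0, n)              # "observed" wind speed (m/s)
# assumed raw forecast: bias grows with terrain height error, plus random error
raw = truth + 0.004 * terrain_err + rng.normal(0.0, 0.8, n)

# MOS: least-squares fit of observations on [1, forecast, terrain predictor]
X = np.column_stack([np.ones(n), raw, terrain_err])
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)
calibrated = X @ coef

rmse = lambda f: float(np.sqrt(np.mean((f - truth) ** 2)))
```

Because the terrain-dependent part of the error is systematic, the regression removes it almost entirely, which is the same reason adding terrain parameters cut the ME so sharply in the study.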
In this paper, let M_n denote the maximum of the logarithmic general error distribution with parameter v ≥ 1. Higher-order expansions for the distributions of the powered extremes M_n^p are derived under an optimal choice of normalizing constants. It is shown that, when v = 1, M_n^p converges to the Fréchet extreme value distribution at the rate of 1/n, and if v > 1 then M_n^p converges to the Gumbel extreme value distribution at the rate of (log log n)²/(log n)^(1−1/v).
AIM: To compare relative peripheral refractive errors (RPREs) in Chinese children with and without myopic anisometropia (MAI) and to explore the relationship between RPRE and myopia.
METHODS: This observational cross-sectional study included 160 children divided into two groups according to the interocular spherical equivalent refraction (SER) difference: ≥1.0 D in the MAI group (n=80) and <1.0 D in the non-MAI group (n=80). The MAI group was further divided into two subgroups: a ΔSER<2.0 D group and a ΔSER≥2.0 D group. Basic ocular biometric parameters of axial length (AL), average keratometry (Ave K), cylinder (CYL), surface regularity index (SRI), and surface asymmetry index (SAI) were recorded. In addition, multispectral refraction topography was performed to measure the RPRE, and the parameters were recorded as the total refraction difference value (TRDV), the refraction difference values (RDV) 0-10, RDV10-20, RDV20-30, RDV30-40, and RDV40-53, and RDV-superior (RDV-S), RDV-inferior (RDV-I), RDV-temporal (RDV-T), and RDV-nasal (RDV-N).
RESULTS: In the non-MAI group, the interocular differences of all RPRE parameters were not significant. In the MAI group, the interocular differences of TRDV, RDV10-53, RDV-S, RDV-I, RDV-T, and RDV-N were significant. In the subgroup analysis, the interocular differences of TRDV, RDV30-53, RDV-I, and RDV-T were significant in both the ΔSER<2.0 D and ΔSER≥2.0 D groups, but the interocular differences of RDV10-30, RDV-S, and RDV-N were significant only in the ΔSER≥2.0 D group. In the correlation analysis, ΔTRDV, ΔRDV10-53, ΔRDV-S, and ΔRDV-N were negatively correlated with ΔSER but positively correlated with ΔAL.
CONCLUSION: In Chinese children with MAI, the more myopic eyes have a larger hyperopic RPRE within a certain retinal range, and part of the ΔRPRE is closely associated with ΔSER and ΔAL.
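The grouping criterion rests on the spherical equivalent, SER = sphere + cylinder/2 (in diopters); the example refractions below are invented to show how an interocular ΔSER ≥ 1.0 D classifies a child into the MAI group.

```python
def spherical_equivalent(sphere: float, cylinder: float) -> float:
    """Spherical equivalent refraction: sphere plus half the cylinder (diopters)."""
    return sphere + cylinder / 2.0

# interocular SER difference used to define myopic anisometropia (>= 1.0 D)
delta_ser = abs(spherical_equivalent(-4.0, -0.5) - spherical_equivalent(-2.5, -0.5))
```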
In this paper, the fixed-time time-varying formation of heterogeneous multi-agent systems (MASs) based on a tracking error observer under denial-of-service (DoS) attacks is investigated. First, a dynamic pinning strategy is used to reconstruct the communication channel for a system suffering from DoS attacks, preventing the discontinuous transmission of information over the communication network from affecting MAS formation. Then, considering that the leader state is not available to every follower under DoS attacks, a fixed-time distributed observer without velocity information is constructed to estimate the tracking error between the followers and the leader. Finally, an adaptive radial basis function neural network (RBFNN) is used to approximate the unknown ensemble disturbances in the system, and the fixed-time time-varying formation scheme is designed with the constructed observer. The effectiveness of the proposed control algorithm is demonstrated by numerical simulation.
Readout errors caused by measurement noise are a significant source of errors in quantum circuits; they severely affect the output results and are an urgent problem to be solved in noisy intermediate-scale quantum (NISQ) computing. In this paper, we use the bit-flip averaging (BFA) method to mitigate frequent readout errors in quantum generative adversarial networks (QGAN) for image generation. The method simplifies the response matrix structure by averaging the qubits over random bit-flips applied in advance, avoiding the high measurement cost of traditional error-mitigation methods. Our experiments were simulated in Qiskit on a handwritten-digit image recognition dataset. Under the BFA-based method, the Kullback-Leibler (KL) divergence of the generated images converges to 0.04, 0.05, and 0.1 for readout error probabilities of p=0.01, p=0.05, and p=0.1, respectively. Additionally, by evaluating the fidelity of the quantum states representing the images, we observe average fidelity values of 0.97, 0.96, and 0.95 for the three readout error probabilities, respectively. These results demonstrate the robustness of the model in mitigating readout errors and provide a highly fault-tolerant mechanism for image generation models.
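The response-matrix picture that BFA simplifies can be shown for a single qubit with a symmetric flip probability p (an assumed toy noise model, not the paper's device data): the observed outcome distribution is the true one multiplied by the response matrix, and mitigation inverts that matrix.

```python
import numpy as np

p = 0.05                                  # assumed single-qubit readout flip probability
A = np.array([[1 - p, p],                 # response matrix: column = true outcome,
              [p, 1 - p]])                # row = observed outcome

ideal = np.array([0.7, 0.3])              # true outcome distribution of a circuit
noisy = A @ ideal                         # what the device actually reports

mitigated = np.linalg.solve(A, noisy)     # invert the response matrix
```

For n qubits the full response matrix is 2ⁿ×2ⁿ and costly to characterize, which is the measurement overhead the bit-flip averaging trick is designed to avoid.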
To solve the finite-time error-tracking problem in missile guidance, this paper presents a unified design approach based on error dynamics and free-time convergence theory. The proposed approach starts by establishing a desired model of free-time-convergent error dynamics, characterized by independence from initial conditions and guidance parameters and by an adjustable convergence time. This foundation facilitates the derivation of specific guidance laws that integrate constraints such as the leading angle, impact angle, and impact time. The theoretical framework of this study elucidates the nuances of, and the synergies between, the proposed guidance laws and existing methodologies. Empirical evaluations through simulation comparisons underscore the enhanced accuracy and adaptability of the proposed laws.
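The idea of prescribing error dynamics with bounded convergence time can be illustrated with a common finite-time-convergent model, ė = −k₁·sign(e)·|e|^½ − k₂·e (this is a generic textbook dynamic chosen for illustration, not the paper's free-time law); its settling time is bounded by (2/k₂)·ln(1 + (k₂/k₁)·√|e₀|), about 1.50 s for the gains below.

```python
import numpy as np

k1, k2, dt = 2.0, 1.0, 1e-4
e = 5.0                                   # arbitrary initial tracking error
t = 0.0
while abs(e) > 1e-6 and t < 10.0:         # forward-Euler integration
    e += dt * (-k1 * np.sign(e) * np.sqrt(abs(e)) - k2 * e)
    t += dt
# bound: (2/k2) * ln(1 + (k2/k1) * sqrt(5)) ~= 1.50 s, regardless of e0's sign
```

Designing a guidance law then amounts to choosing the commanded acceleration so that the actual tracking error obeys a dynamic of this kind.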
This paper investigates the effect of the phase angle error of a constant-amplitude voltage signal in determining the Total Vector Error (TVE) of a Phasor Measurement Unit (PMU) using MATLAB/Simulink. The phase angle error is measured as a function of time, in microseconds, at four points on the IEEE 14-bus system. When the 1 pps Global Positioning System (GPS) signal to the PMU is lost, sampling of the voltage signals on the power grid occurs at different rates, as sampling is a function of time. The relationship between the PMU-measured signal phase angle and the sampling rate is established by injecting a constant-amplitude signal at two different points on the grid. In the simulation, 64 cycles per second is used as the reference, while 24 cycles per second represents the fault condition. Results show that a change in the sampling rate from 64 to 24 cycles per second in the PMUs produced phase angle errors in the voltage signals measured by the PMU at the four VI Measurement points. The phase angle error, determined as a function of time, was used to determine the TVE, which exceeded 1% in all cases.
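The link between phase angle error and TVE is direct: for a phasor with correct magnitude, TVE = |X_meas − X_true|/|X_true| reduces to 2·sin(Δφ/2), so the 1% TVE limit corresponds to a phase error of about 0.573° (0.01 rad).

```python
import math

def tve_from_phase_error(delta_deg: float) -> float:
    """TVE of a phasor with correct magnitude but phase error delta_deg (degrees)."""
    # |e^{j*delta} - 1| = 2*sin(delta/2) for a unit-magnitude phasor
    return 2.0 * math.sin(math.radians(delta_deg) / 2.0)
```

This is why a loss of GPS timing, which shows up first as a phase (time) error, pushes the PMU over the 1% TVE threshold well before any amplitude error appears.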
In this paper, numerical experiments are carried out to investigate the impact of penalty parameters in the numerical traces on the resonance errors of high-order multiscale discontinuous Galerkin (DG) methods (Dong et al. in J Sci Comput 66:321-345, 2016; Dong and Wang in J Comput Appl Math 380:1-11, 2020) for a one-dimensional stationary Schrödinger equation. Previous work showed that the penalty parameters were required to be positive in the error analysis, yet methods with zero penalty parameters worked fine in numerical simulations on coarse meshes. In this work, by performing extensive numerical experiments, we discover that zero penalty parameters lead to resonance errors in the multiscale DG methods, and that taking positive penalty parameters can effectively reduce these resonance errors and improve the condition number of the matrix in the global linear system.
With the widespread use of Chinese globally, the number of Chinese learners has been increasing, and beginners make a variety of grammatical errors. Additionally, as domestic industrial informatization advances, electronic documents have proliferated. Texts written by Chinese beginners, and manually written electronic documents more generally, often contain hidden grammatical errors, and correcting them is crucial for fluency and readability. Certain grammatical or logical errors can have a large impact, and manually proofreading a large volume of texts individually is clearly impractical, so this poses a significant challenge to traditional manual proofreading. Consequently, research on text error correction techniques has garnered significant attention in recent years. The advent and advancement of deep learning have allowed sequence-to-sequence learning methods to be applied extensively to text error correction. This paper presents a comprehensive analysis of Chinese grammatical error correction technology, elaborates on its current research status, discusses existing problems, proposes preliminary solutions, and conducts experiments using judicial documents as an example. The aim is to provide a feasible research approach for Chinese text error correction.
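A basic building block for aligning an erroneous sentence with its correction, and for scoring correction systems, is character-level edit distance; a compact sketch (this is a generic utility, not the paper's method):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of character insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))        # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]                         # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,           # delete ca
                           cur[-1] + 1,           # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute / match
        prev = cur
    return prev[-1]
```

Working at the character level is natural for Chinese, where there is no whitespace tokenization, so edits between a learner's sentence and the reference correction are counted per character.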
Assessing the measurement error state of online capacitor voltage transformers (CVTs) in the power grid matters both for the fair trade of electric energy and for the secure operation of the grid. This paper advances an online CVT error-state evaluation method based on the in-phase relationship and outlier detection. First, the method leverages the in-phase relationship to remove the influence of primary-side fluctuations in the grid on assessment accuracy. Next, principal component analysis (PCA) is employed to separate the error-change information of each CVT from the measured values and to compute statistics that characterize the error state. Finally, the local outlier factor (LOF) is used to detect outliers in these statistics, with thresholds serving to appraise the CVT error state. Experimental results demonstrate the efficacy of the method: it tracks CVT error changes online and assesses the error state, with clear gains in reliability, accuracy, and sensitivity, and an assessment accuracy reaching 0.01%.
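The PCA step can be sketched on synthetic data: in-phase CVTs all see the same primary-side voltage, so the first principal component captures the shared fluctuation, and per-channel residual energy isolates a drifting unit. All signals below are invented, and a simple argmax stands in for the paper's LOF-plus-threshold decision.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(500)
primary = 1.0 + 0.02 * np.sin(2 * np.pi * t / 100)   # shared primary-side fluctuation

# three in-phase CVTs with small measurement noise; unit 2 develops a ratio drift
readings = np.stack([primary * (1 + 0.001 * rng.standard_normal(500))
                     for _ in range(3)])
readings[2] *= 1 + 2e-5 * t                          # injected slow error drift

X = readings - readings.mean(axis=1, keepdims=True)  # center each channel
U, S, Vt = np.linalg.svd(X, full_matrices=False)
common = S[0] * np.outer(U[:, 0], Vt[0])             # rank-1 common-mode component
resid = np.sum((X - common) ** 2, axis=1)            # per-channel error statistic
suspect = int(np.argmax(resid))
```

Because the primary fluctuation dominates the common component, it cancels out of the residuals, which is exactly how the in-phase relationship shields the assessment from grid-side variation.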
This study aimed to examine the performance of the Siegel-Tukey and Savage tests on data sets with heterogeneous variances. The analysis, considering Normal, Platykurtic, and Skewed distributions and a standard deviation ratio of 1, was conducted for both small and large sample sizes. For small sample sizes, two main categories were established: equal and unequal sample sizes. Analyses were performed using Monte Carlo simulations with 20,000 repetitions for each scenario, and the simulations were evaluated using SAS software. For small sample sizes, the Type I error rate of the Siegel-Tukey test generally ranged from 0.045 to 0.055, while that of the Savage test ranged from 0.016 to 0.041; similar trends were observed for the Platykurtic and Skewed distributions, and in scenarios with unequal sample sizes the Savage test generally exhibited lower Type I error rates. For large sample sizes, equal and unequal sample sizes were again considered: the Type I error rate of the Siegel-Tukey test ranged from 0.047 to 0.052, while that of the Savage test ranged from 0.043 to 0.051. With equal sample sizes, both tests generally had lower error rates, and the Savage test provided more consistent results for large sample sizes. In conclusion, the Savage test provides lower Type I error rates for small sample sizes, and both tests have similar error rates for large sample sizes. These findings suggest that the Savage test can be the more reliable option when analyzing variance differences.
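The Monte Carlo design used here, estimating a test's Type I error rate as its rejection frequency under the null, can be sketched with a simpler rank test (the Wilcoxon rank-sum with a normal approximation stands in for the Siegel-Tukey/Savage statistics, and 2,000 repetitions for the study's 20,000):

```python
import math
import numpy as np

rng = np.random.default_rng(6)

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    n, m = len(x), len(y)
    ranks = np.concatenate([x, y]).argsort().argsort() + 1.0  # ranks 1..n+m
    w = ranks[:n].sum()                                       # rank sum of sample x
    mu = n * (n + m + 1) / 2.0
    sigma = math.sqrt(n * m * (n + m + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))                   # 2 * (1 - Phi(|z|))

# Monte Carlo Type I error estimate under H0 (two identical normal populations)
reps, alpha = 2000, 0.05
rejections = sum(rank_sum_p(rng.standard_normal(15), rng.standard_normal(15)) < alpha
                 for _ in range(reps))
type1_rate = rejections / reps
```

An estimate near the nominal 0.05 indicates the test holds its level; rates well below it, as reported for the Savage test at small samples, indicate conservatism.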
Funding (cataract IOL biometry study): Supported by the Innovation & Transfer Fund of Peking University Third Hospital (No. BYSYZHKC2021108).
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2024YFE0200600; in part by the National Natural Science Foundation of China under Grant 62071425; in part by the Zhejiang Key Research and Development Plan under Grant 2022C01093; in part by the Zhejiang Provincial Natural Science Foundation of China under Grant LR23F010005; in part by the National Key Laboratory of Wireless Communications Foundation under Grant 2023KP01601; and in part by the Big Data and Intelligent Computing Key Lab of CQUPT under Grant BDIC-2023-B-001.
Abstract: Semantic communication (SemCom) aims to achieve high-fidelity information delivery with low communication consumption by guaranteeing only semantic accuracy. Nevertheless, semantic communication still suffers from unexpected channel volatility, so developing a re-transmission mechanism (e.g., hybrid automatic repeat request [HARQ]) becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can fully utilize the information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. Therefore, there is a strong incentive to rethink the CRC mechanism so as to more effectively reap the benefits of both SemCom and HARQ. In this paper, built on top of Swin Transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, unlike the conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which extracts the inner topological and geometric information of images, to capture semantic information and determine the necessity of re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited bandwidth, and demonstrate the superiority of the TDA-based error detection method in image transmission.
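For contrast, the conventional bit-level CRC that the paper argues is too blunt for semantic payloads can be sketched in a few lines: a single flipped bit fails the check outright, even when the semantic content might still be recoverable. This is a generic illustration using Python's `zlib.crc32`, not part of the SC-TDA-HARQ framework:

```python
import zlib

# Sender side: attach a CRC32 checksum to the (stand-in) payload
payload = b"compressed semantic features of an image"
checksum = zlib.crc32(payload)

# Receiver side: flip one bit of the payload to emulate channel noise
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]

ok_clean = zlib.crc32(payload) == checksum      # passes
ok_corrupt = zlib.crc32(corrupted) == checksum  # fails, forcing re-transmission
```

A TDA-based check, by contrast, would ask whether the reconstructed image's topological summary still matches, rather than whether every bit does.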
Funding: This work is funded by the National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062) and the National Science Fund for Distinguished Young Scholars of China (Grant No. 52222905).
Abstract: In existing landslide susceptibility prediction (LSP) models, the influence of random errors in landslide conditioning factors on LSP is not considered; instead, the original conditioning factors are taken directly as model inputs, which introduces uncertainty into LSP results. This study aims to reveal how different proportions of random error in conditioning factors affect LSP uncertainties, and further to explore a method that can effectively reduce the random errors in conditioning factors. The original conditioning factors are first used to construct original factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Second, low-pass filter-based LSP models are constructed by eliminating the random errors using a low-pass filter method. Third, Ruijin County, China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e., multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) the low-pass filter can effectively reduce the random errors in conditioning factors and thereby decrease the LSP uncertainties; (2) as the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously; (3) the original factors-based models are feasible for LSP in the absence of more accurate conditioning factors; (4) the influence of the two sources of uncertainty, the machine learning models and the different proportions of random errors, on LSP modeling is large and essentially comparable; and (5) Shapley values effectively explain the internal mechanism by which machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
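The core idea, that a low-pass filter suppresses random errors superimposed on a smooth conditioning factor, can be sketched with a simple moving-average filter. This is a generic illustration on synthetic data; the paper's actual filter design and conditioning factors are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
# A smooth synthetic conditioning factor (e.g., elevation along a transect)
x = np.linspace(0, 4 * np.pi, 500)
factor = np.sin(x)
noisy = factor + 0.10 * rng.standard_normal(x.size)  # ~10% random error added

# Moving-average low-pass filter (window width chosen ad hoc for the sketch)
window = 15
kernel = np.ones(window) / window
filtered = np.convolve(noisy, kernel, mode="same")

# The filtered factor is closer to the error-free original than the noisy one
rmse_noisy = np.sqrt(np.mean((noisy - factor) ** 2))
rmse_filtered = np.sqrt(np.mean((filtered - factor) ** 2))
```

Averaging over a window attenuates high-frequency noise by roughly the square root of the window length while leaving the slowly varying factor largely intact, which is why the filtered inputs yield lower LSP uncertainty.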
Abstract: AIM: To investigate the prevalence of visual impairment (VI) and provide an estimate of uncorrected refractive errors in school-aged children, conducted by optometry students as a community service. METHODS: This was a cross-sectional study of 3343 participants. The initial examination assessed uncorrected distance visual acuity (UDVA) and visual acuity (VA) with a +2.00 D lens. The inclusion criteria for a subsequent comprehensive cycloplegic eye examination, performed by an optometrist, were a UDVA < 0.6 decimal (0.20 logMAR) and/or a VA with +2.00 D ≥ 0.8 decimal (0.96 logMAR). RESULTS: The sample had a mean age of 10.92±2.13y (range 4 to 17y), and 51.3% of the children were female (n=1715). The majority of the children (89.7%) were within the age range of 8 to 14y. Among the ethnic groups, the highest representation was from the Luhya group (60.6%), followed by Luo (20.4%). The mean logMAR UDVA, choosing the best eye for each student, was 0.29±0.17 (range 1.70 to 0.22). In total, 246 participants (7.4%) had a full eye examination. The estimated prevalence of myopia (defined as spherical equivalent ≤ -0.5 D) was 1.45% of the total sample, while around 0.18% of the total sample had hyperopia exceeding +1.75 D. Refractive astigmatism (cylinder < -0.75 D) was found in 0.21% (7/3343) of the children. The VI prevalence was 1.26% of the total sample. Among our cases of VI, 76.2% could be attributed to uncorrected refractive error. Amblyopia was detected in 0.66% (22/3343) of the screened children. No statistically significant correlation was observed between age or gender and refractive values. CONCLUSION: The primary cause of VI is uncorrected refractive error, with myopia being the most prevalent refractive error observed. These findings underscore the importance of early identification and correction of refractive errors in school-aged children as a means to alleviate the impact of VI.
Funding: Supported by the Natural Science Foundation of Shaanxi Province of China (Grant No. 2021JM010) and the Suzhou Municipal Natural Science Foundation of China (Grant Nos. SYG202018 and SYG202134).
Abstract: The laser tracer is a three-dimensional coordinate measurement system widely used in industrial measurement. We propose a geometric error identification method based on multi-station synchronized laser tracers to enable rapid, high-precision measurement of geometric errors for gantry-type computer numerical control (CNC) machine tools. This method also addresses the measurement efficiency issues of the existing single-base-station method and the multi-base-station time-sharing method. Considering a three-axis gantry-type CNC machine tool, the geometric error mathematical model is derived and established by combining screw theory with a topological analysis of the machine kinematic chain. The positions of the four laser tracers and the measurement points are determined based on the multi-point positioning principle. A self-calibration algorithm using the Levenberg-Marquardt nonlinear least squares method is proposed for the coordinate calibration of the laser tracers, and the geometric error is solved using Taylor's first-order linearization iteration. The experimental results show that the geometric error calculated with this modeling method is comparable to the results from the Etalon laser tracer. For a volume of 800 mm × 1000 mm × 350 mm, the maximum differences of the linear, angular, and spatial position errors were 2.0 μm, 2.7 μrad, and 12.0 μm, respectively, which verifies the accuracy of the proposed algorithm. This research proposes a modeling method for the precise measurement of machine tool errors, and its applied nature makes it relevant both to researchers and to the industrial sector.
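The multi-point positioning step, recovering a point's coordinates from its distances to several known stations via Levenberg-Marquardt least squares, can be sketched as follows. This is a minimal toy version with four invented station positions and noise-free distances, not the paper's full self-calibration algorithm:

```python
import numpy as np

# Four tracer (base-station) positions, assumed known for this sketch
stations = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
p_true = np.array([0.3, 0.4, 0.2])
d_meas = np.linalg.norm(stations - p_true, axis=1)  # noise-free distances

def residuals(p):
    # Difference between modeled and measured station-to-point distances
    return np.linalg.norm(stations - p, axis=1) - d_meas

def jacobian(p):
    diff = p - stations
    return diff / np.linalg.norm(diff, axis=1, keepdims=True)

# Minimal Levenberg-Marquardt loop: damped Gauss-Newton steps
p = np.array([0.9, 0.9, 0.9])
lam = 1e-3
for _ in range(50):
    r, J = residuals(p), jacobian(p)
    step = np.linalg.solve(J.T @ J + lam * np.eye(3), J.T @ r)
    p_new = p - step
    if np.sum(residuals(p_new) ** 2) < np.sum(r ** 2):
        p, lam = p_new, lam * 0.5   # accept the step, trust the model more
    else:
        lam *= 2.0                  # reject the step, damp harder
```

With real, noisy distance data the same loop minimizes the sum of squared residuals rather than driving them to zero, and the station coordinates themselves become unknowns in the self-calibration stage.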
Funding: Supported in part by the National Key R&D Program of China (2022YFC3401303), the Natural Science Foundation of Jiangsu Province (BK20211528), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KFCX22_2300).
Abstract: In the era of exponential growth of data availability, system architectures trend toward high dimensionality, and directly exploiting holistic information for state inference is not always computationally affordable. This paper proposes a novel Bayesian filtering algorithm that balances algorithmic computational cost and estimation accuracy for high-dimensional linear systems. The high-dimensional state vector is divided into several blocks to save computational resources by avoiding the calculation of error covariance matrices of immense dimensions. Then, two sequential states are estimated simultaneously by introducing an auxiliary variable in the new probability space, mitigating the performance degradation caused by state segmentation. Moreover, the computational cost and error covariance of the proposed algorithm are analyzed analytically to show its distinct features compared with several existing methods. Simulation results illustrate that the proposed Bayesian filter maintains higher estimation accuracy with reasonable computational cost when applied to high-dimensional linear systems.
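The computational motivation for block partitioning can be made concrete with a back-of-envelope operation count: a dense n×n covariance-style matrix product costs on the order of n³ operations, while k independent blocks of size n/k cost k·(n/k)³, a k²-fold saving. The numbers below are illustrative, not the paper's analysis:

```python
# Rough flop-count comparison for a dense matrix-matrix product vs. a
# block-diagonal approximation (illustrative figures, not the paper's model)
n, k = 1200, 4
full_cost = n ** 3                 # one dense n x n product: ~n^3 operations
block = n // k
block_cost = k * block ** 3        # k independent (n/k) x (n/k) products
savings = full_cost / block_cost   # equals k**2 when k divides n
```

The price of this saving is that cross-block covariance terms are dropped, which is exactly the degradation the paper's auxiliary-variable construction is designed to mitigate.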
Abstract: In this paper, an antenna array composed of a circular array and an orthogonal linear array is proposed, using a long- and short-baseline "orthogonal linear array" design and a multi-group baseline clustering ambiguity-resolution design for the circular array. The effectiveness of the proposed antenna array is verified by extensive simulation and experiment. After correcting systematic deviations, it is found that in the L/S/C/X frequency bands the ambiguity resolution probability is high, and the phase-difference systematic error between channels is essentially the same. The angle measurement error is less than 0.5°, and the positioning error is less than 2.5 km. Notably, as the center frequency increases, calibration consistency improves, and the calibration frequency points become applicable over a wider frequency range. At a center frequency of 11.5 GHz, the calibration frequency point bandwidth extends to 1200 MHz. This combined antenna array holds significant promise for a wide range of applications in contemporary wireless communication systems.
Abstract: In visual measurement, high-precision camera calibration often employs circular targets. To address issues in mainstream methods, such as the eccentricity error introduced by using the circle's center for calibration, overfitting or local minima from full-parameter optimization, and calibration errors due to neglecting the center of distortion, a stepwise camera calibration method incorporating compensation for eccentricity error was proposed to enhance monocular camera calibration precision. Initially, a multi-image distortion correction method calculated the common center of distortion and the distortion coefficients, improving precision, stability, and efficiency compared with single-image methods. Subsequently, the projection of the circle's center was compared with the center of the contour's projection to iteratively correct the eccentricity error, leading to more precise and stable calibration. Finally, nonlinear optimization refined the calibration parameters to minimize reprojection error and boost precision. Together these steps achieve stepwise camera calibration with enhanced robustness. In addition, a module comparison experiment showed that both the eccentricity error compensation and the camera parameter optimization improve calibration precision, with the latter having the greater impact; the combined use of the two methods further improved precision and stability. Simulations and experiments confirmed that the proposed method achieves the high precision, stability, and robustness required for high-precision visual measurements.
Abstract: Timer error, as well as its convention, is very important for dose accuracy during irradiation. This paper determines the timer error of irradiators at the Secondary Standard Dosimetry Laboratory (SSDL) in Nigeria. The irradiators are the Cs-137 OB6 irradiator and X-ray irradiators at the Protection Level SSDL, and the Co-60 irradiator at the Therapy Level SSDL. A PTW UNIDOS electrometer and an LS01 ionization chamber were used at the Protection Level to obtain doses for both the Cs-137 OB6 and X-ray irradiators, while an IBA Farmer-type ionization chamber and an IBA DOSE 1 electrometer were used at the Therapy Level SSDL. The single/multiple exposure method and the graphical method were used to determine the timer error of the three irradiators. The timer error obtained for the Cs-137 OB6 irradiator was 0.48 ± 0.01 s, that for the X-ray irradiator was 0.09 ± 0.01 s, and that for the GammaBeam X200 was 1.21 ± 0.04 s. It was observed that neither source-to-detector distance nor field size contributes to the timer error of the irradiators. It was also observed that the timer error of the Co-60 GammaBeam X200 irradiator (the only irradiator among those tested with a pneumatic source transfer system) increases with the age of the machine.
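The graphical method referred to above fits accumulated dose (or charge) readings against the set irradiation time: since reading = rate × (t + τ), the timer error τ falls out of the fitted line's intercept divided by its slope. A minimal sketch with invented readings (the 0.48 s value merely mirrors the Cs-137 result; the rate and times are arbitrary):

```python
import numpy as np

timer_error_true = 0.48   # seconds (illustrative, echoing the Cs-137 figure)
rate = 2.0                # charge accumulated per second (arbitrary units)
set_times = np.array([10.0, 20.0, 30.0, 60.0, 120.0])
readings = rate * (set_times + timer_error_true)  # reading = rate * (t + tau)

# Linear fit: slope estimates the dose rate, intercept estimates rate * tau
slope, intercept = np.polyfit(set_times, readings, 1)
timer_error = intercept / slope
```

With real measurements the points scatter around the line, and the uncertainty of τ follows from the fit's covariance rather than being exact as in this noise-free toy.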
基金supported by the National Natural Science Foundation of China(No.U2142206).
Abstract: Numerical weather prediction (NWP) models have always exhibited large forecasting errors for surface wind speeds over regions with complex terrain. In this study, surface wind forecasts from an operational NWP model, the SMS-WARR (Shanghai Meteorological Service-WRF ADAS Rapid Refresh System), are analyzed to quantitatively reveal the relationships between forecasted surface wind speed errors and terrain features, with the intent of providing clues for better applying the NWP model to complex terrain regions. The terrain features are described by three parameters: the standard deviation of the model grid-scale orography, the terrain height error of the model, and the slope angle. The results show that the forecast bias has a unimodal distribution with respect to the standard deviation of orography. The minimum ME (the mean value of bias) is 1.2 m s^(-1) when the standard deviation is between 60 and 70 m. A positive correlation exists between bias and terrain height error, with the ME increasing by 10%-30% for every 200 m increase in terrain height error. The ME decreases by 65.6% when the slope angle increases from (0.5°-1.5°) to larger than 3.5° for uphill winds, but increases by 35.4% when the absolute value of the slope angle increases from (0.5°-1.5°) to (2.5°-3.5°) for downhill winds. Several sensitivity experiments were carried out with a model output statistics (MOS) calibration model for surface wind speeds, and the ME (RMSE) was reduced by 90% (30%) by introducing terrain parameters, demonstrating the value of this study.
Abstract: In this paper, let M_n denote the maximum of the logarithmic general error distribution with parameter v ≥ 1. Higher-order expansions for the distributions of the powered extremes M_n^p are derived under an optimal choice of normalizing constants. It is shown that, when v = 1, M_n^p converges to the Fréchet extreme value distribution at the rate of 1/n, and if v > 1 then M_n^p converges to the Gumbel extreme value distribution at the rate of (log log n)^2/(log n)^(1-1/v).
Abstract: AIM: To compare relative peripheral refractive errors (RPRE) in Chinese children with and without myopic anisometropia (MAI) and to explore the relationship between RPRE and myopia. METHODS: This observational cross-sectional study included 160 children divided into two groups according to the interocular spherical equivalent refraction (SER) difference: ≥1.0 D in the MAI group (n=80) and <1.0 D in the non-MAI group (n=80). The MAI group was further divided into two subgroups: a ΔSER<2.0 D group and a ΔSER≥2.0 D group. Basic ocular biometric parameters, axial length (AL), average keratometry (Ave K), cylinder (CYL), surface regularity index (SRI), and surface asymmetry index (SAI), were recorded. In addition, multispectral refraction topography was performed to measure RPRE, and the parameters were recorded as total refraction difference value (TRDV), refraction difference value (RDV) 0-10, RDV 10-20, RDV 20-30, RDV 30-40, RDV 40-53, RDV-superior (RDV-S), RDV-inferior (RDV-I), RDV-temporal (RDV-T), and RDV-nasal (RDV-N). RESULTS: In the non-MAI group, the interocular differences of all RPRE parameters were not significant. In the MAI group, the interocular differences of TRDV, RDV 10-53, RDV-S, RDV-I, RDV-T, and RDV-N were significant. In the subgroup analysis, the interocular differences of TRDV, RDV 30-53, RDV-I, and RDV-T were significant in both the ΔSER<2.0 D and ΔSER≥2.0 D groups, but the interocular differences of RDV 10-30, RDV-S, and RDV-N were significant only in the ΔSER≥2.0 D group. In the correlation analysis, ΔTRDV, ΔRDV 10-53, ΔRDV-S, and ΔRDV-N were negatively correlated with ΔSER but positively correlated with ΔAL. CONCLUSION: In Chinese children with MAI, the more myopic eye has a larger hyperopic RPRE within a certain retinal range, and part of the ΔRPRE is closely associated with ΔSER and ΔAL.
Abstract: In this paper, the fixed-time time-varying formation of heterogeneous multi-agent systems (MASs) based on a tracking error observer under denial-of-service (DoS) attacks is investigated. First, a dynamic pinning strategy is used to reconstruct the communication channel for a system under DoS attacks, preventing the discontinuous transmission of information over the communication network from affecting MAS formation. Then, considering that the leader state is not available to every follower under DoS attacks, a fixed-time distributed observer that does not require velocity information is constructed to estimate the tracking error between the followers and the leader. Finally, an adaptive radial basis function neural network (RBFNN) is used to approximate the unknown lumped disturbances in the system, and the fixed-time time-varying formation scheme is designed with the constructed observer. The effectiveness of the proposed control algorithm is demonstrated by numerical simulation.
Funding: Project supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049) and the Joint Fund of the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001).
Abstract: Readout errors caused by measurement noise are a significant source of error in quantum circuits; they severely affect output results and are an urgent problem in noisy intermediate-scale quantum (NISQ) computing. In this paper, we use the bit-flip averaging (BFA) method to mitigate frequent readout errors in quantum generative adversarial networks (QGAN) for image generation. By averaging over random bit-flips applied to each qubit in advance, the method simplifies the structure of the response matrix, avoiding the high measurement cost of traditional error mitigation methods. Our experiments were simulated in Qiskit using the handwritten digit image recognition dataset. Under the BFA-based method, the Kullback-Leibler (KL) divergence of the generated images converges to 0.04, 0.05, and 0.1 for readout error probabilities of p=0.01, p=0.05, and p=0.1, respectively. Additionally, by evaluating the fidelity of the quantum states representing the images, we observe average fidelity values of 0.97, 0.96, and 0.95 for the three readout error probabilities, respectively. These results demonstrate the robustness of the model in mitigating readout errors and provide a highly fault-tolerant mechanism for image generation models.
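The KL divergence used above to judge convergence of the generated image distribution has a short closed form for discrete distributions. A generic sketch (the distributions below are invented; the paper's image histograms are not reproduced):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) for discrete distributions; eps guards against log(0)
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

target = np.array([0.25, 0.25, 0.25, 0.25])  # stand-in target distribution
close = np.array([0.26, 0.24, 0.25, 0.25])   # nearly matches the target
far = np.array([0.70, 0.10, 0.10, 0.10])     # clearly mismatched
```

Smaller values mean the generated distribution is closer to the target, which is why the reported convergence to 0.04-0.1 indicates effective readout-error mitigation.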
基金supported by the National Natural Science Foundation of China(12002370).
Abstract: To solve the finite-time error-tracking problem in missile guidance, this paper presents a unified design approach based on error dynamics and free-time convergence theory. The proposed approach begins by establishing a desired model for free-time convergent error dynamics, characterized by independence from initial conditions and guidance parameters, and by an adjustable convergence time. This foundation facilitates the derivation of specific guidance laws that incorporate constraints such as leading angle, impact angle, and impact time. The theoretical framework of this study elucidates the nuances of, and synergies between, the proposed guidance laws and existing methodologies. Empirical evaluations through simulation comparisons underscore the enhanced accuracy and adaptability of the proposed laws.
Abstract: This paper investigates the effect of the phase angle error of a constant-amplitude voltage signal in determining the Total Vector Error (TVE) of a Phasor Measurement Unit (PMU) using MATLAB/Simulink. The phase angle error is measured as a function of time in microseconds at four points on the IEEE 14-bus system. When the 1 pps Global Positioning System (GPS) signal to the PMU is lost, sampling of voltage signals on the power grid occurs at different rates as a function of time. The relationship between the PMU-measured signal phase angle and the sampling rate is established by injecting a constant-amplitude signal at two different points on the grid. In the simulation, 64 cycles per second is used as the reference, while 24 cycles per second represents the fault condition. Results show that a change in the sampling rate from 64 to 24 cycles per second in the PMUs produced a phase angle error in the voltage signals measured by the PMU at the four VI measurement points. The phase angle error, determined as a function of time, was used to determine the TVE. Results show that the TVE was more than 1% in all cases.
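TVE compares the estimated phasor with the reference phasor as complex vectors: TVE = |X_est - X_ref| / |X_ref|. A hedged sketch of the computation (the magnitudes and angles below are invented; note that a phase error of about 0.573°, i.e. 0.01 rad, on its own already produces a TVE near the 1% limit):

```python
import math

def tve(est_mag, est_angle_deg, ref_mag, ref_angle_deg):
    # Total Vector Error: normalized distance between phasors in the
    # complex plane, TVE = |X_est - X_ref| / |X_ref|
    er = est_mag * math.cos(math.radians(est_angle_deg))
    ei = est_mag * math.sin(math.radians(est_angle_deg))
    rr = ref_mag * math.cos(math.radians(ref_angle_deg))
    ri = ref_mag * math.sin(math.radians(ref_angle_deg))
    return math.hypot(er - rr, ei - ri) / math.hypot(rr, ri)

# Pure phase error of 0.573 degrees (~0.01 rad) with exact magnitude
example = tve(1.0, 0.573, 1.0, 0.0)
```

This makes concrete why sustained GPS loss, which lets the phase angle error grow with time, quickly pushes the TVE past 1%.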
基金supported by the National Science Foundation grant DMS-1818998.
Abstract: In this paper, numerical experiments are carried out to investigate the impact of penalty parameters in the numerical traces on the resonance errors of high-order multiscale discontinuous Galerkin (DG) methods (Dong et al. in J Sci Comput 66:321-345, 2016; Dong and Wang in J Comput Appl Math 380:1-11, 2020) for a one-dimensional stationary Schrödinger equation. Previous work showed that penalty parameters were required to be positive in the error analysis, yet methods with zero penalty parameters worked fine in numerical simulations on coarse meshes. In this work, by performing extensive numerical experiments, we discover that zero penalty parameters lead to resonance errors in the multiscale DG methods, and that taking positive penalty parameters can effectively reduce resonance errors and give the matrix in the global linear system a better condition number.
Abstract: With the widespread use of Chinese globally, the number of Chinese learners has been increasing, leading to various grammatical errors among beginners. Additionally, as industrial informatization advances domestically, electronic documents have proliferated. Manually written texts often contain hidden grammatical errors, posing a significant challenge to traditional manual proofreading, and correcting these errors is crucial to ensure fluency and readability. Certain types of grammatical or logical errors can have an outsized impact, and manually proofreading a large number of texts individually is clearly impractical. Consequently, research on text error correction techniques has garnered significant attention in recent years. The advent and advancement of deep learning have allowed sequence-to-sequence learning methods to be extensively applied to the task of text error correction. This paper presents a comprehensive analysis of Chinese text grammar error correction technology, elaborates on its current research status, discusses existing problems, proposes preliminary solutions, and conducts experiments using judicial documents as an example. The aim is to provide a feasible research approach for Chinese text error correction technology.
Abstract: Assessing the measurement error state of online capacitor voltage transformers (CVTs) in the power grid is important for the fair trading of electric energy and the secure operation of the grid. This paper proposes an online CVT error state evaluation method based on the in-phase relationship and outlier detection. First, the method leverages the in-phase relationship to remove the influence of primary-side fluctuations in the grid on assessment accuracy. Then, principal component analysis (PCA) is employed to separate the CVT error-change information from the measured values and to compute statistics that describe the error state. Finally, the local outlier factor (LOF) is used to detect outliers in these statistics, with thresholds serving to appraise the CVT error state. Experimental results demonstrate the efficacy of the method, showing that it can track CVT error changes online and assess the error state, with clear gains in reliability, accuracy, and sensitivity; the assessment accuracy reaches 0.01%.
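The PCA step above exploits the in-phase relationship: CVTs on the same bus see the same primary fluctuation, so the first principal component absorbs it and the residual isolates per-CVT error changes. A simplified numpy-only sketch on synthetic data (it replaces the paper's LOF step with a robust threshold on a residual statistic, and all signal levels are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated secondary readings from three in-phase CVTs (200 samples):
# a common primary-side fluctuation plus small independent measurement noise.
common = rng.normal(1.0, 0.01, size=(200, 1))
readings = common + rng.normal(0.0, 1e-4, size=(200, 3))
readings[150:, 2] += 2e-3  # CVT #3 develops an error drift after sample 150

# PCA via SVD: the first component captures the shared primary-side variation,
# so removing it leaves the per-CVT error-change information.
centered = readings - readings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
residual = centered - np.outer(centered @ vt[0], vt[0])

# Per-sample statistic describing the error state; a robust (median + MAD)
# threshold flags outliers, standing in for the paper's LOF detector.
stat = np.linalg.norm(residual, axis=1)
mad = np.median(np.abs(stat - np.median(stat)))
flags = stat > np.median(stat) + 5 * mad
```

Samples after the drift onset should be flagged far more often than samples before it, which is the behavior an online error-state tracker needs.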
Abstract: This study aimed to examine the performance of the Siegel-Tukey and Savage tests on data sets with heterogeneous variances. The analysis, considering Normal, Platykurtic, and Skewed distributions and a standard deviation ratio of 1, was conducted for both small and large sample sizes. For small sample sizes, two main categories were established: equal and different sample sizes. Analyses were performed using Monte Carlo simulations with 20,000 repetitions for each scenario, and the simulations were evaluated using SAS software. For small sample sizes, the Type I error rate of the Siegel-Tukey test generally ranged from 0.045 to 0.055, while the Type I error rate of the Savage test ranged from 0.016 to 0.041. Similar trends were observed for Platykurtic and Skewed distributions. In scenarios with different sample sizes, the Savage test generally exhibited lower Type I error rates. For large sample sizes, the same two categories, equal and different sample sizes, were used; the Type I error rate of the Siegel-Tukey test ranged from 0.047 to 0.052, while that of the Savage test ranged from 0.043 to 0.051. With equal sample sizes, both tests generally had lower error rates, with the Savage test providing more consistent results for large sample sizes. In conclusion, the Savage test provides lower Type I error rates for small sample sizes, and both tests have similar error rates for large sample sizes. These findings suggest that the Savage test may be the more reliable option when analyzing variance differences.
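The Monte Carlo procedure behind these Type I error estimates is straightforward: generate both samples under the null hypothesis, apply the test, and record how often it rejects at the nominal level. A hedged sketch using a generic rank test (a Wilcoxon rank-sum with normal approximation, since the Siegel-Tukey and Savage statistics themselves are not reproduced here) and fewer repetitions than the study's 20,000:

```python
import numpy as np
from math import erf, sqrt

def ranksum_pvalue(x, y):
    # Wilcoxon rank-sum test, normal approximation (continuous data, no ties)
    n1, n2 = len(x), len(y)
    ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1
    w = ranks[:n1].sum()                      # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2               # null mean of the rank sum
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value

rng = np.random.default_rng(0)
n_rep, alpha, n = 2000, 0.05, 20
rejections = sum(
    ranksum_pvalue(rng.normal(0, 1, n), rng.normal(0, 1, n)) < alpha
    for _ in range(n_rep)
)
type1_rate = rejections / n_rep  # should hover near the nominal 0.05
```

Swapping in the Siegel-Tukey or Savage statistic, the distributions under study, and unequal sample sizes reproduces the study's design; a well-calibrated test keeps `type1_rate` near the nominal α under every null scenario.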