Gravity/inertial integrated navigation is a leading issue in realizing passive navigation onboard a submarine. A new rotation-fitting gravity matching algorithm, based on the Terrain Contour Matching (TERCOM) algorithm, is proposed in this paper. The algorithm uses the least mean-square-error criterion to search a gravity base map for a matched trajectory that runs parallel to the trace indicated by the inertial navigation system. The matched trajectory is then rotated clockwise or counterclockwise within a certain angle span to find an optimal matched trajectory, and through weighted fitting with another eight suboptimal matched trajectories, the endpoint of the fitted trajectory is taken as the optimal matched position. Analysis of the algorithm's reliability and matching error, together with simulation results, indicates that the optimal position can be obtained effectively in real time, and that the positioning accuracy improves by 35%, to 1.05 nautical miles, compared with the widely employed TERCOM and SITAN methods. Current gravity-aided navigation can benefit from this new algorithm in terms of better reliability and positioning accuracy.
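A minimal sketch of the core least-MSE search over parallel candidate trajectories described above; the rotation step and the weighted fitting of the nine best trajectories are omitted, and all function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def tercom_offset_search(gravity_map, ins_track, measured_profile, offsets):
    """Search lateral offsets of an INS-indicated track for the minimum-MSE match.

    gravity_map      : 2-D array of gravity anomalies on a (row, col) grid
    ins_track        : (N, 2) integer grid indices of the INS-indicated trace
    measured_profile : (N,) gravimeter readings along the real trajectory
    offsets          : iterable of (drow, dcol) candidate parallel shifts
    """
    best_offset, best_mse = None, np.inf
    for drow, dcol in offsets:
        shifted = ins_track + np.array([drow, dcol])
        # Skip candidates that fall outside the base map.
        if (shifted < 0).any() or (shifted[:, 0] >= gravity_map.shape[0]).any() \
                or (shifted[:, 1] >= gravity_map.shape[1]).any():
            continue
        map_profile = gravity_map[shifted[:, 0], shifted[:, 1]]
        mse = np.mean((map_profile - measured_profile) ** 2)
        if mse < best_mse:
            best_offset, best_mse = (drow, dcol), mse
    return best_offset, best_mse
```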
As an extension of linear regression in functional data analysis, functional linear regression has been studied by many researchers and applied in various fields. However, in many cases data are collected sequentially over time, for example financial series, so it is necessary to consider an autocorrelated error structure in the functional regression setting. To this end, this paper considers a multiple functional linear model with autoregressive errors. Based on functional principal component analysis, we apply the least squares procedure to estimate the functional coefficients and the autoregression coefficients. Under some regularity conditions, we establish the asymptotic properties of the proposed estimators. A simulation study is conducted to investigate the finite-sample performance of our estimators. A real example on China's weather data illustrates the validity of our model.
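The sketch below assumes the common FPCA-scores-plus-least-squares route that the abstract mentions, followed by a crude autoregressive fit on the residuals; it is not the paper's exact estimator, and names are illustrative.

```python
import numpy as np

def fpca_ar_regression(X, y, n_pc=3, ar_order=1):
    """Minimal sketch: FPCA scores + least squares, then AR fit on the residuals.

    X : (n, m) functional predictors observed on a common grid of m points
    y : (n,)   scalar responses
    """
    Xc = X - X.mean(axis=0)
    # Eigenfunctions of the empirical covariance via SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_pc].T                 # FPCA scores used as regressors
    design = np.column_stack([np.ones(len(y)), scores])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    # Least-squares AR(p) fit on the residual series.
    lagged = np.column_stack([resid[ar_order - k - 1: len(resid) - k - 1]
                              for k in range(ar_order)])
    rho, *_ = np.linalg.lstsq(lagged, resid[ar_order:], rcond=None)
    return beta, rho
```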
AIM: To investigate the influence of postoperative intraocular lens (IOL) position on the accuracy of cataract surgery and examine the predictive factors of postoperative biometry prediction errors, using the Barrett Universal II (BUII) IOL formula for calculation. METHODS: The prospective study included patients who had undergone cataract surgery performed by a single surgeon from June 2020 to April 2022. The collected data included the best-corrected visual acuity (BCVA), corneal curvature, preoperative and postoperative central anterior chamber depths (ACD), axial length (AXL), IOL power, and refractive error. The BUII formula was used to calculate the IOL power. The mean absolute error (MAE) was calculated, and the participants were divided into two groups accordingly. Independent t-tests were applied to compare the variables between groups. Logistic regression analysis was used to analyze the influence of age, AXL, corneal curvature, and preoperative and postoperative ACD on MAE. RESULTS: A total of 261 patients were enrolled; 243 (93.1%) had a postoperative MAE <1 D and 18 (6.9%) had an MAE >1 D. The proportion of females was higher among patients with MAE >1 D (χ²=3.833, P=0.039). The postoperative BCVA (logMAR) of patients with MAE >1 D was significantly worse (t=-2.448; P=0.025). After adjusting for gender in the logistic model, the risk of postoperative refractive errors was higher in patients with a shallow postoperative anterior chamber [odds ratio=0.346; 95% confidence interval (CI): 0.164, 0.730; P=0.005]. CONCLUSION: Risk factors for biometry prediction error after cataract surgery include the patient's sex and postoperative ACD. Patients with a shallow postoperative anterior chamber are prone to refractive errors.
Semantic communication (SemCom) aims to achieve high-fidelity information delivery under low communication consumption by guaranteeing only semantic accuracy. Nevertheless, semantic communication still suffers from unexpected channel volatility, and thus developing a re-transmission mechanism (e.g., hybrid automatic repeat request [HARQ]) becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can fully utilize the information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. Therefore, there emerges a strong incentive to revolutionize the CRC mechanism and thus more effectively reap the benefits of both SemCom and HARQ. In this paper, built on top of Swin Transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, different from the conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which digs out the inner topological and geometric information of images, to capture semantic information and determine the necessity for re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited bandwidth conditions, and manifest the superiority of the TDA-based error detection method in image transmission.
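One way such a topological check could stand in for a bit-level CRC is sketched below: the transmitter sends a compact topological signature alongside the payload, and the receiver compares it with the signature of its own reconstruction. The `persistence_summary` callable is a stand-in for a persistence-diagram computation (e.g., from a TDA library) and is an assumption, not the paper's SC-TDA-HARQ detector.

```python
import numpy as np

def needs_retransmission(sent_signature, recon_img, persistence_summary, tol=0.1):
    """Decide re-transmission from a topological signature instead of a bit-level CRC.

    sent_signature      : feature vector derived from the transmitted image's
                          persistence diagram, delivered with the payload
    persistence_summary : callable mapping an image to the same kind of feature
                          vector; supplied by the caller, hypothetical here
    """
    f_sent = np.asarray(sent_signature, dtype=float)
    f_recon = np.asarray(persistence_summary(recon_img), dtype=float)
    # Relative topological discrepancy between the intended and decoded images.
    discrepancy = np.linalg.norm(f_sent - f_recon) / (np.linalg.norm(f_sent) + 1e-12)
    return discrepancy > tol   # True -> request an IK-HARQ re-transmission
```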
Obtaining long-term observed data for the sea areas of interest is usually very hard, or even impossible, in practical offshore and ocean engineering situations. In this paper, a new way to extend short-term data to long-term data is developed by means of the linear mean-square estimation method. Long-term data for the sea area of interest can be constructed from long-term data series obtained at neighboring oceanographic stations, through relevance analysis of the different data series. This helps overcome the time-series prediction method's overdependence on the length of the data series, as well as the limitation on the number of variables that can be adopted in a multiple linear regression model. Storm surge data collected from three oceanographic stations located on the Shandong Peninsula are taken as examples to analyze the effect of the number of reference oceanographic stations (adjacent to the sea area of interest) and of the correlation coefficients between the sites selected for reference and those of the engineering project. By comparing the N-year return-period values calculated from the observed raw data with those calculated from data extended from finite series by the linear mean-square estimation method, one can conclude that this method gives considerably good estimates in practical ocean engineering, in spite of the different extreme value distributions of the raw and processed data.
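A minimal sketch of the linear mean-square estimation step, assuming the usual form x̂ = μx + C_xy C_yy⁻¹ (y − μy); the station layout and all names are illustrative.

```python
import numpy as np

def lms_extend(target_short, neighbors_short, neighbors_long):
    """Extend a short target-site series using long series from neighbor stations.

    target_short    : (n,)   observations at the site of interest
    neighbors_short : (n, k) simultaneous observations at k reference stations
    neighbors_long  : (m, k) full-length reference series (m >> n)
    Returns an (m,) linear mean-square estimate of the target series.
    """
    mu_x = target_short.mean()
    mu_y = neighbors_short.mean(axis=0)
    Yc = neighbors_short - mu_y
    xc = target_short - mu_x
    C_yy = Yc.T @ Yc / len(xc)          # covariance among the reference stations
    C_xy = Yc.T @ xc / len(xc)          # cross-covariance with the target site
    w = np.linalg.solve(C_yy, C_xy)     # linear mean-square weights
    return mu_x + (neighbors_long - mu_y) @ w
```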
A nonlinear problem of mean-square approximation of a real nonnegative continuous function of two variables by the modulus of a double Fourier integral depending on two real parameters, with the use of a smoothing functional, is studied. Finding the optimal solutions of this problem is reduced to the solution of a two-dimensional nonlinear integral equation of Hammerstein type. Numerical algorithms to find the branching lines and branching-off solutions of this equation are constructed and justified. Numerical examples are presented.
The mean-square radius of gyration <S^2>, the mean-square dipole moment <D^2>, the mean-square end-to-end distance <R^2>, and their temperature coefficients of unsymmetrical disubstituted poly(methylphenylsiloxane) (PMPS) chains, as functions of stereochemical structure, conformational energies, and chain length, were studied by using an improved configurational-conformational statistical method based on the rotational-isomeric-state theory. It is found that the increase in isotacticity of P...
We study the mean-square composite-rotating consensus problem of second-order multi-agent systems with communication noises, where all agents rotate around a common center while the center of rotation simultaneously spins around a fixed point. Firstly, a time-varying consensus gain is introduced to attenuate the effect of communication noises. Secondly, sufficient conditions are obtained for achieving mean-square composite-rotating consensus. Finally, simulations are provided to demonstrate the effectiveness of the proposed algorithm.
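A schematic form of such a protocol, written only to illustrate the role of the time-varying gain; this is a generic stochastic-approximation-type law, not the paper's exact controller:

$$u_i(t)=a(t)\sum_{j\in\mathcal N_i}a_{ij}\bigl[(y_{ji}(t)-x_i(t))+(w_{ji}(t)-v_i(t))\bigr]+\text{rotation terms},\qquad y_{ji}(t)=x_j(t)+\sigma_{ji}\,\xi_{ji}(t),$$

where $y_{ji}$, $w_{ji}$ are noisy position and velocity measurements and the gain is typically required to satisfy $a(t)>0$, $\int_0^\infty a(t)\,\mathrm dt=\infty$ and $\int_0^\infty a^2(t)\,\mathrm dt<\infty$, so that the noise is averaged out while consensus remains reachable.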
In this paper, we present a basic theory of mean-square almost periodicity, apply the theory to random differential equations, and obtain mean-square almost periodic solutions of some types of stochastic differential equations.
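For reference, the standard definition underlying this theory: a second-order stochastic process $x(t)$ is mean-square almost periodic if for every $\varepsilon>0$ there exists $l(\varepsilon)>0$ such that every interval of length $l(\varepsilon)$ contains a $\tau$ with

$$\sup_{t\in\mathbb R}\,\mathbb E\,\|x(t+\tau)-x(t)\|^2<\varepsilon.$$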
In existing landslide susceptibility prediction (LSP) models, the influence of random errors in landslide conditioning factors on LSP is not considered; instead, the original conditioning factors are directly taken as the model inputs, which brings uncertainties to LSP results. This study aims to reveal how different proportions of random error in the conditioning factors influence LSP uncertainties, and further to explore a method that can effectively reduce the random errors in the conditioning factors. The original conditioning factors are first used to construct original-factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Secondly, low-pass-filter-based LSP models are constructed by eliminating the random errors using the low-pass filter method. Thirdly, Ruijin County of China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e., multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) The low-pass filter can effectively reduce the random errors in the conditioning factors and thereby decrease the LSP uncertainties. (2) As the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously. (3) The original-factors-based models are feasible for LSP in the absence of more accurate conditioning factors. (4) The influence of the two uncertainty issues, the machine learning models and the different proportions of random errors, on LSP modeling is large and basically the same. (5) Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in the conditioning factors results in higher LSP uncertainty, and the low-pass filter can effectively reduce these random errors.
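A small sketch of the error-injection and low-pass-filtering steps, assuming a Butterworth filter from SciPy as one concrete choice of low-pass filter; the study's actual filter design and factor rasters are not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def add_random_error(factor, proportion, rng=None):
    """Perturb a conditioning-factor series by a given proportional random error."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(0.0, proportion * np.std(factor), size=factor.shape)
    return factor + noise

def lowpass_denoise(factor_1d, cutoff=0.1, order=4):
    """Attenuate high-frequency random errors with a Butterworth low-pass filter."""
    b, a = butter(order, cutoff)          # normalized cutoff frequency in (0, 1)
    return filtfilt(b, a, factor_1d)

# Example: 10% random error added to a stand-in factor profile, then filtered out.
x = np.linspace(0, 1, 500)
factor = np.sin(2 * np.pi * 3 * x)
noisy = add_random_error(factor, proportion=0.10)
cleaned = lowpass_denoise(noisy)
```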
AIM: To investigate the prevalence of visual impairment (VI) and provide an estimation of uncorrected refractive errors in school-aged children, conducted by optometry students as a community service. METHODS: The study was cross-sectional. A total of 3343 participants were included. The initial examination involved assessing the uncorrected distance visual acuity (UDVA) and the visual acuity (VA) while using a +2.00 D lens. The inclusion criteria for a subsequent comprehensive cycloplegic eye examination, performed by an optometrist, were as follows: a UDVA <0.6 decimal (0.20 logMAR) and/or a VA with +2.00 D ≥0.8 decimal (0.96 logMAR). RESULTS: The sample had a mean age of 10.92±2.13 years (range 4 to 17y), and 51.3% of the children were female (n=1715). The majority of the children (89.7%) fell within the age range of 8 to 14 years. Among the ethnic groups, the highest representation was from the Luhya group (60.6%), followed by Luo (20.4%). Mean logMAR UDVA, choosing the best eye for each student, was 0.29±0.17 (range 1.70 to 0.22). In total, 246 participants (7.4%) had a full eye examination. The estimated prevalence of myopia (defined as spherical equivalent ≤-0.5 D) was 1.45% of the total sample, while around 0.18% of the total sample had hyperopia exceeding +1.75 D. Refractive astigmatism (cylinder <-0.75 D) was found in 0.21% (7/3343) of the children. The VI prevalence was 1.26% of the total sample. Among our cases of VI, 76.2% could be attributed to uncorrected refractive error. Amblyopia was detected in 0.66% (22/3343) of the screened children. There was no statistically significant correlation observed between age or gender and refractive values. CONCLUSION: The primary cause of VI is uncorrected refractive error, with myopia being the most prevalent refractive error observed. These findings underscore the significance of early identification and correction of refractive errors in school-aged children as a means to alleviate the impact of VI.
The accuracy of landslide susceptibility prediction (LSP) mainly depends on the precision of the landslide spatial positions. However, spatial position errors in landslide surveys are inevitable, resulting in considerable uncertainties in LSP modeling. To overcome this drawback, this study explores the influence of positional errors in the landslide spatial positions on LSP uncertainties, and then innovatively proposes a semi-supervised machine learning model to reduce the landslide spatial position error. This paper collected 16 environmental factors and 337 landslides with accurate spatial positions, taking Shangyou County of China as an example. The 30–110 m error-based multilayer perceptron (MLP) and random forest (RF) models for LSP are established by randomly offsetting the original landslides by 30, 50, 70, 90 and 110 m. The LSP uncertainties are analyzed through the LSP accuracy and distribution characteristics. Finally, a semi-supervised model is proposed to relieve the LSP uncertainties. Results show that: (1) The LSP accuracies of the error-based RF/MLP models decrease with increasing landslide position error, and are lower than those of the original-data-based models; (2) The 70 m error-based models can still reflect the overall distribution characteristics of the landslide susceptibility indices, so original landslides with certain position errors are acceptable for LSP; (3) The semi-supervised machine learning model can efficiently reduce the landslide position errors and thus improve the LSP accuracies.
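A sketch of how the 30–110 m error-based samples could be generated by randomly offsetting the surveyed landslide coordinates; the semi-supervised correction model itself is not reproduced, and the coordinate handling shown is illustrative.

```python
import numpy as np

def offset_landslides(coords, radius_m, rng=None):
    """Offset landslide points by a fixed radius in a random direction.

    coords   : (n, 2) projected easting/northing coordinates in metres
    radius_m : positional error to impose (e.g., 30, 50, 70, 90 or 110)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    theta = rng.uniform(0.0, 2.0 * np.pi, size=len(coords))
    shift = radius_m * np.column_stack([np.cos(theta), np.sin(theta)])
    return coords + shift
```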
In this paper, an efficient unequal error protection (UEP) scheme for online fountain codes is proposed. In the build-up phase, a traversing-selection strategy is proposed to select the most important symbols (MIS). Then, in the completion phase, a weighted-selection strategy is applied to provide low overhead. The performance of the proposed scheme is analyzed and compared with the existing UEP online fountain scheme. Simulation results show that, in terms of the MIS and the least important symbols (LIS), when the bit error ratio is 10^-4, the proposed scheme achieves 85% and 31.58% overhead reduction, respectively.
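A toy illustration of biasing the symbol selection toward the MIS when forming a coded packet; the paper's traversing-selection and weighted-selection rules are more specific, so the weight and degree values below are placeholders.

```python
import numpy as np

def weighted_symbol_selection(n_symbols, mis_indices, mis_weight=3.0, degree=4, rng=None):
    """Pick the source symbols of one coded packet, favouring the MIS.

    Symbols listed in mis_indices are made mis_weight times more likely to be
    chosen than the remaining (LIS) symbols.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    w = np.ones(n_symbols)
    w[list(mis_indices)] = mis_weight
    p = w / w.sum()
    return rng.choice(n_symbols, size=degree, replace=False, p=p)
```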
The laser tracer is a three-dimensional coordinate measurement system that is widely used in industrial measurement. We propose a geometric error identification method based on multi-station synchronized laser tracers to enable the rapid and high-precision measurement of geometric errors for gantry-type computer numerical control (CNC) machine tools. This method also improves on the measurement efficiency issues of the existing single-base-station method and the multi-base-station time-sharing method. We consider a three-axis gantry-type CNC machine tool, and the geometric error mathematical model is derived and established based on the combination of screw theory and a topological analysis of the machine kinematic chain. The positions of the four laser tracer stations and the measurement points are determined based on the multi-point positioning principle. A self-calibration algorithm is proposed for the coordinate calibration of the laser tracers using the Levenberg-Marquardt nonlinear least squares method, and the geometric error is solved using Taylor's first-order linearization iteration. The experimental results show that the geometric error calculated with this modeling method is comparable to the results from the Etalon laser tracer. For a volume of 800 mm×1000 mm×350 mm, the maximum differences of the linear, angular, and spatial position errors were 2.0 μm, 2.7 μrad, and 12.0 μm, respectively, which verifies the accuracy of the proposed algorithm. This research proposes a modeling method for the precise measurement of errors in machine tools, and the applied nature of this study also makes it relevant both to researchers and to those in the industrial sector.
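The multi-point positioning step can be pictured as a small multilateration problem; the sketch below, using SciPy's Levenberg-Marquardt solver, is a simplified stand-in for the paper's self-calibration, which additionally estimates the station coordinates themselves.

```python
import numpy as np
from scipy.optimize import least_squares

def multilaterate(stations, ranges, x0=np.zeros(3)):
    """Locate one measurement point from its ranges to four (or more) tracer stations.

    stations : (k, 3) known station coordinates (k >= 4)
    ranges   : (k,)   interferometric distances to the point
    """
    def residual(p):
        # Difference between modelled and measured station-to-point distances.
        return np.linalg.norm(stations - p, axis=1) - ranges

    sol = least_squares(residual, x0, method="lm")   # Levenberg-Marquardt
    return sol.x
```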
In the era of exponential growth of data availability, system architectures trend toward high dimensionality, and directly exploiting holistic information for state inference is not always computationally affordable. This paper proposes a novel Bayesian filtering algorithm that balances computational cost and estimation accuracy for high-dimensional linear systems. The high-dimensional state vector is divided into several blocks to save computational resources by avoiding the calculation of error covariance matrices of immense dimensions. After that, two sequential states are estimated simultaneously by introducing an auxiliary variable in the new probability space, mitigating the performance degradation caused by state segmentation. Moreover, the computational cost and error covariance of the proposed algorithm are analyzed analytically to show its distinct features compared with several existing methods. Simulation results illustrate that the proposed Bayesian filter maintains a higher estimation accuracy with reasonable computational cost when applied to high-dimensional linear systems.
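To illustrate the computational saving that motivates the state partitioning, the sketch below runs an ordinary Kalman predict/update independently on each block and drops the cross-block covariance; the paper's auxiliary-variable construction, which compensates for this segmentation, is not reproduced.

```python
import numpy as np

def blockwise_kalman_update(x_blocks, P_blocks, F_blocks, H_blocks,
                            Q_blocks, R_blocks, z_blocks):
    """One predict/update step run independently per state block.

    Cross-block covariance is dropped, so each block only manipulates small
    matrices; this is the computational saving behind the partitioning.
    """
    new_x, new_P = [], []
    for x, P, F, H, Q, R, z in zip(x_blocks, P_blocks, F_blocks, H_blocks,
                                   Q_blocks, R_blocks, z_blocks):
        # Predict within the block.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update with the block's own measurement.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        new_x.append(x_pred + K @ (z - H @ x_pred))
        new_P.append((np.eye(len(x)) - K @ H) @ P_pred)
    return new_x, new_P
```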
The theoretical lower bounds on mean squared channel estimation errors for typical fading channels are derived using the infinite-length, non-causal Wiener filter, and exact closed-form expressions of the lower bounds are given for different channel Doppler spectra. Based on the obtained lower bounds on mean squared channel estimation errors, the limits on the bit error rate (BER) of maximal ratio combining (MRC) with Gaussian-distributed weighting errors on independent and identically distributed (i.i.d.) fading channels are presented. Numerical results show that the BER performance of ideal MRC is a lower bound on that of non-ideal MRC, which deteriorates as the maximum Doppler frequency increases or the SNR of the channel estimate decreases.
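For orientation, the non-causal Wiener smoother that underlies such bounds attains a mean squared error of the schematic form

$$\sigma_{\min}^2=\int_{-f_d}^{f_d}\frac{S_h(f)\,N_0}{S_h(f)+N_0}\,\mathrm df,$$

where $S_h(f)$ is the channel Doppler power spectrum (band-limited by the maximum Doppler frequency $f_d$) and $N_0$ is the observation-noise power spectral density; the paper's exact closed-form expressions for specific Doppler spectra are not reproduced here.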
In this paper, an antenna array composed of a circular array and orthogonal linear arrays is proposed, using a long- and short-baseline "orthogonal linear array" design together with a circular-array ambiguity-resolution design based on multi-group baseline clustering. The effectiveness of the antenna array is verified by extensive simulation and experiment. After correcting the systematic deviations, it is found that in the L/S/C/X frequency bands the ambiguity-resolution probability is high, and the phase-difference systematic error between channels is basically the same. The angle measurement error is less than 0.5°, and the positioning error is less than 2.5 km. Notably, as the center frequency increases, calibration consistency improves, and the calibration frequency points become applicable over a wider frequency range. At a center frequency of 11.5 GHz, the calibration frequency point bandwidth extends to 1200 MHz. This combined antenna array holds significant promise for a wide range of applications in contemporary wireless communication systems.
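The long/short-baseline idea can be summarized by the standard phase-interferometry relation: the estimated direction is

$$\hat\theta=\arcsin\!\Bigl(\frac{\lambda\,(\Delta\varphi+2\pi k)}{2\pi d}\Bigr),$$

where $\Delta\varphi$ is the measured phase difference on a baseline of length $d$, $\lambda$ is the wavelength, and $k$ is the integer ambiguity; a short baseline ($d<\lambda/2$) gives a coarse but unambiguous angle that fixes $k$ for the longer, more precise baseline. This is the generic relation, not the paper's specific array geometry or baseline-clustering algorithm.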
In visual measurement, high-precision camera calibration often employs circular targets. To address issues in mainstream methods, such as the eccentricity error introduced by using the circle's center for calibration, overfitting or local minima from full-parameter optimization, and calibration errors due to neglecting the center of distortion, a stepwise camera calibration method incorporating compensation for the eccentricity error is proposed to enhance monocular camera calibration precision. Initially, a multi-image distortion correction method calculates the common center of distortion and the distortion coefficients, improving precision, stability, and efficiency compared with single-image distortion correction methods. Subsequently, the projection point of the circle's center is compared with the center of the contour's projection to iteratively correct the eccentricity error, leading to more precise and stable calibration. Finally, nonlinear optimization refines the calibration parameters to minimize the reprojection error and boost precision. These steps constitute a stepwise camera calibration with enhanced robustness. In addition, a module comparison experiment showed that both the eccentricity error compensation and the camera parameter optimization improve calibration precision, but the latter has a greater impact; the combined use of the two further improves precision and stability. Simulations and experiments confirmed that the proposed method achieves high precision, stability, and robustness, making it suitable for high-precision visual measurements.
AIM: To describe the distribution of refractive errors by age and sex among schoolchildren in Soacha, Colombia. METHODS: This was an observational cross-sectional study conducted in five urban public schools in the municipality of Soacha. A total of 1161 school-aged and pre-adolescent children, aged 5-12y, were examined during the school year 2021-2022. Examinations included visual acuity and static refraction. Spherical equivalent (SE) was analysed as follows: myopia, SE ≤-0.50 D and uncorrected visual acuity of 20/25 or worse; high myopia, SE ≤-6.00 D; hyperopia, SE ≥+1.00 D (≥7y) or SE ≥+2.00 D (5-6y); significant hyperopia, SE ≥+3.00 D. Astigmatism was defined as a cylinder in at least one eye ≥1.00 D (≥7y) or ≥1.75 D (5-6y). If at least one eye was ametropic, the child was classified according to the refractive error found. RESULTS: Of the 1139 schoolchildren included, 50.6% were male, 58.8% were aged between 5 and 9y, and 12.1% were already using optical correction. The most common refractive error was astigmatism (31.1%), followed by myopia (20.8%) and hyperopia (13.1%). There was no significant relationship between refractive error and sex. There was a significant increase in astigmatism (P<0.001) and myopia (P<0.0001) with age. CONCLUSION: Astigmatism is the most common refractive error in children in an urban area of Colombia. Emmetropia decreased and myopia increased with age.
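As a worked illustration of the classification rules quoted in the METHODS (thresholds copied from the abstract; the helper function itself is not from the paper and is only a sketch for one eye):

```python
def classify_refraction(se_diopters, cylinder_d, age_years, uncorrected_va_ok):
    """Classify one eye using the cut-offs quoted in the abstract (illustrative only).

    se_diopters       : spherical equivalent in dioptres
    cylinder_d        : cylinder power in dioptres (negative notation)
    age_years         : child's age in years
    uncorrected_va_ok : True if uncorrected visual acuity is better than 20/25
    """
    labels = []
    if se_diopters <= -0.50 and not uncorrected_va_ok:
        labels.append("high myopia" if se_diopters <= -6.00 else "myopia")
    hyperopia_cut = 1.00 if age_years >= 7 else 2.00
    if se_diopters >= hyperopia_cut:
        labels.append("significant hyperopia" if se_diopters >= 3.00 else "hyperopia")
    astig_cut = 1.00 if age_years >= 7 else 1.75
    if abs(cylinder_d) >= astig_cut:
        labels.append("astigmatism")
    return labels or ["none of the above"]
```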
Introduction: Undetected refractive errors constitute a health problem among school children, who cannot take full advantage of educational opportunities. The authors studied the prevalence of refractive errors in school children aged 5 to 15 at CHU-IOTA. Patients and Method: This is a prospective, descriptive cross-sectional study carried out in the ophthalmic pediatrics department of CHU-IOTA from October to November 2023. Results: We received 340 school children aged 5 to 15, among whom 111 presented ametropia, i.e., a prevalence of 32.65%. The average age was 11.42 ± 2.75 years, with a sex ratio of 0.59. The average visual acuity was 4/10 (range 1/10 to 10/10). The refractive defects found were astigmatism in 73.87% of cases, hyperopia in 23.87%, and myopia in 2.25%. A decline in distance visual acuity was the most common functional sign. Ocular abnormalities associated with ametropia were dominated by allergic conjunctivitis (26.13%) and papillary excavation (6.31%) in astigmatic patients; allergic conjunctivitis (9.01%) and papillary excavation (7.20%) in hyperopic patients; and turbid vitreous (0.90%), myopic choroidosis (0.45%) and allergic conjunctivitis (0.45%) in myopic patients. Conclusion: Refractive errors are a reality and a major public health problem among school children.