This paper presents a novel approach to identifying and correcting gross errors in the microelectromechanical system (MEMS) gyroscope used in ground vehicles by means of time series analysis. According to the characteristics of the autocorrelation function (ACF) and the partial autocorrelation function (PACF), an autoregressive integrated moving average (ARIMA) model is roughly constructed. The rough model is optimized using Akaike's information criterion (AIC), and the parameters are estimated based on the least squares algorithm. After validation testing, the model is used to forecast the next output on the basis of the previous measurements. When the difference between a measurement and its prediction exceeds the defined threshold, the measurement is identified as a gross error and remedied by its prediction. A case study on the yaw rate illustrates the developed algorithm. Experimental results demonstrate that the proposed approach can effectively distinguish gross errors and apply reasonable remedies.
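As a concrete illustration of the detect-and-remedy loop described above, the following Python sketch fits an ARIMA model, issues one-step-ahead forecasts, and replaces any measurement whose innovation exceeds a threshold. The model order, the threshold, and the synthetic gyro data are assumptions; the paper selects the order from the ACF/PACF and AIC.

```python
# A minimal sketch, assuming a fixed ARIMA order and threshold.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
yaw = np.cumsum(rng.normal(0.0, 0.01, 400))      # stand-in for MEMS yaw-rate output
yaw[300] += 0.5                                  # injected gross error

train, stream = yaw[:200], yaw[200:]
res = ARIMA(train, order=(2, 1, 1)).fit()        # order assumed, not from the paper

threshold = 0.1                                  # assumed detection threshold
cleaned = []
for y in stream:
    pred = res.forecast(1)[0]                    # one-step-ahead prediction
    if abs(y - pred) > threshold:                # gross error detected
        y = pred                                 # remedy with the prediction
    cleaned.append(y)
    res = res.append([y])                        # roll the model forward
```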
An NT-MT combined method based on the nodal test (NT) and the measurement test (MT) is developed for gross error detection and data reconciliation in industrial applications. The NT-MT combined method makes use of both the NT and MT tests, and this combination helps to overcome the defects of the respective methods. It also avoids artificial manipulation and eliminates the huge combinatorial problem that arises in the nodal-test-based combined method when there is more than one gross error in a large process system. A serial compensation strategy is also used to avoid a decrease in the rank of the coefficient matrix during the computation of the proposed method. Simulation results show that the proposed method is very effective and possesses good performance.
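For reference, the sketch below computes the two classical statistics that the NT-MT method combines, for a linear balance A x = 0 with measurement covariance V: the nodal test on the constraint residuals and the measurement test on the least-squares adjustments. The two-node flow network and the numbers are assumptions, not the paper's example.

```python
# A minimal sketch of the NT and MT statistics for a toy two-node network.
import numpy as np

A = np.array([[1.0, -1.0, 0.0],        # node 1: stream 1 in, stream 2 out
              [0.0, 1.0, -1.0]])       # node 2: stream 2 in, stream 3 out
V = np.diag([0.1, 0.1, 0.1]) ** 2      # measurement error covariance
y = np.array([10.0, 10.1, 11.2])       # raw flows; stream 3 carries a gross error

r = A @ y                              # nodal imbalances
H = A @ V @ A.T                        # covariance of the imbalances
z_nt = np.abs(r) / np.sqrt(np.diag(H)) # nodal test (NT) statistics

K = V @ A.T @ np.linalg.inv(H)
a = K @ r                              # least-squares measurement adjustments
W = K @ A @ V                          # covariance of the adjustments
z_mt = np.abs(a) / np.sqrt(np.diag(W)) # measurement test (MT) statistics

print(z_nt, z_mt)                      # compare with a critical value, e.g. 1.96
```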
The mixed integer linear programming (MILP) approach to simultaneous gross error detection and data reconciliation has been proved an efficient way to adjust process data subject to material, energy, and other balance constraints. But its efficiency decreases significantly when the method is applied to a large-scale problem, because too many binary variables are involved. In this article, an improved method is proposed that generates gross error candidates with reliability factors before data rectification. The candidates are used in the MILP objective function to improve efficiency and accuracy by reducing the number of binary variables and giving accurate weights to the suspected gross error candidates. The performance of this improved method is compared and discussed by applying the algorithm to a widely used industrial example.
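The following PuLP sketch shows the basic shape of such an MILP: binary variables flag biased measurements, big-M constraints couple each flag to a bias term, and the objective trades weighted adjustments against a penalty per flagged gross error. The small network, weights, and big-M value are assumptions for illustration only, not the paper's formulation.

```python
# A minimal MILP sketch for simultaneous reconciliation and gross error flagging.
import numpy as np
import pulp

A = np.array([[1.0, -1.0, 0.0],                    # node 1: stream 1 -> stream 2
              [0.0, 1.0, -1.0]])                   # node 2: stream 2 -> stream 3
y = np.array([10.0, 10.1, 12.0])                   # stream 3 carries a gross error
sigma = np.array([0.1, 0.1, 0.1])                  # measurement standard deviations
M, C = 5.0, 4.0                                    # big-M bound, per-bias penalty

n = len(y)
prob = pulp.LpProblem("gross_error_milp", pulp.LpMinimize)
p = [pulp.LpVariable(f"p{i}", lowBound=0) for i in range(n)]      # + adjustment
q = [pulp.LpVariable(f"q{i}", lowBound=0) for i in range(n)]      # - adjustment
beta = [pulp.LpVariable(f"beta{i}", lowBound=-M, upBound=M) for i in range(n)]
b = [pulp.LpVariable(f"b{i}", cat="Binary") for i in range(n)]    # 1 = gross error

# Weighted absolute adjustments plus a fixed penalty for each flagged bias.
prob += pulp.lpSum((p[i] + q[i]) / sigma[i] for i in range(n)) + C * pulp.lpSum(b)
for i in range(n):                                 # bias active only when flagged
    prob += beta[i] <= M * b[i]
    prob += beta[i] >= -M * b[i]
for j in range(A.shape[0]):                        # balance on corrected values
    prob += pulp.lpSum(A[j, i] * (y[i] + p[i] - q[i] + beta[i]) for i in range(n)) == 0

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(pulp.value(v)) for v in b])             # expected: stream 3 flagged
```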
Wavelet theory is an efficient and adequate tool for analyzing single-epoch GPS deformation signals. A wavelet analysis technique for gross error detection and recovery is advanced. Criteria for choosing the wavelet function and for deciding the number of Mallat decomposition levels are discussed. An effective deformation signal extraction method is proposed, namely a wavelet noise reduction technique that takes gross error recovery into account by combining wavelet multi-resolution gross error detection results. Recognition of the time positions of gross errors and their repair are realized. In the experiment, a compactly supported orthogonal wavelet with a short support block was more efficient at discerning gross errors than a longer one and yielded a finer analysis. The shape of a gross error discerned with a short-support wavelet is simpler than that obtained with a longer one, and its time scale is easier to identify.
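A minimal PyWavelets sketch of the multi-resolution idea follows: decompose the signal, flag unusually large fine-scale detail coefficients as gross errors, and reconstruct after suppressing them. The short-support wavelet 'db2', the level count, and the robust 3-sigma rule are assumptions standing in for the selection criteria discussed above.

```python
# A minimal sketch of wavelet-based gross error detection and repair.
import numpy as np
import pywt

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + rng.normal(0, 0.05, 1024)
signal[600] += 1.0                                  # injected gross error

coeffs = pywt.wavedec(signal, "db2", level=4)       # Mallat decomposition
d1 = coeffs[-1]                                     # finest detail coefficients
spikes = np.abs(d1) > 3 * np.median(np.abs(d1)) / 0.6745  # robust 3-sigma test
print(np.flatnonzero(spikes) * 2)                   # approx. time positions (stride 2)

# Repair: zero the flagged detail coefficients and reconstruct the signal.
coeffs[-1] = np.where(spikes, 0.0, d1)
repaired = pywt.waverec(coeffs, "db2")[: len(signal)]
```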
This paper describes a broad perspective on the application of graph theory to the establishment of GPS control networks, whereby the GPS network is considered as a connected and directed graph with three components. In this algorithm, gross error detection is undertaken through loops of different spanning trees using the "Loop Law", in which the individual components ΔX, ΔY and ΔZ sum to zero. If the sums of the respective vector components ∑X, ∑Y and ∑Z in a loop are not zero and the error is beyond the tolerable limit (ε > w), this indicates the existence of a gross error in one of the baselines in the loop, and that baseline must therefore be removed or re-observed. After successful screening of errors by graph theory, network adjustment can be carried out. In this paper, GPS data from the control network established as the reference system for the HP Dam at Baishan county in Liaoning province are presented to illustrate the algorithm.
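The closure check itself is simple; the sketch below sums the baseline vector components around one loop and compares the misclosure against a tolerance. The baselines, traversal signs, and the tolerance w are assumed for illustration.

```python
# A minimal sketch of the "Loop Law" closure check on one baseline loop.
import numpy as np

# Baselines (dX, dY, dZ) forming loop A->B->C->A; +1/-1 gives traversal direction.
baselines = {
    ("A", "B"): np.array([120.031, -45.012, 10.004]),
    ("B", "C"): np.array([-60.020, 80.007, -5.001]),
    ("C", "A"): np.array([-60.008, -34.998, -5.000]),
}
loop = [(("A", "B"), +1), (("B", "C"), +1), (("C", "A"), +1)]

tolerance = 0.05                                  # w, in metres (assumed)
misclosure = sum(sign * baselines[edge] for edge, sign in loop)
epsilon = np.linalg.norm(misclosure)
if epsilon > tolerance:
    print(f"gross error suspected in loop, misclosure {misclosure}")
else:
    print("loop closes within tolerance")
```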
The detection and identification of gross errors, especially measurement bias, plays a vital role in data reconciliation for nonlinear dynamic systems. Although the parameter estimation method has been proved a powerful tool for bias identification, without a reliable and efficient bias detection strategy it is limited in efficiency and cannot be applied widely. In this paper, a new bias detection strategy is constructed to detect the presence of a measurement bias and its occurrence time. With the help of this strategy, the number of parameters to be estimated is greatly reduced, and sequential detections and iterations are avoided. In addition, the number of decision variables of the optimization model is reduced, which lessens the influence of the estimated parameters. By incorporating the strategy into the parameter estimation model, a new methodology named IPEBD (Improved Parameter Estimation method with Bias Detection strategy) is constructed. Simulation studies on a continuous stirred tank reactor (CSTR) and the Tennessee Eastman (TE) problem show that IPEBD is efficient at eliminating random errors, measurement biases and outliers contained in dynamic process data.
The dominant and recessive effects produced by an exceptional interferer in a measurement system are analyzed on the basis of its response characteristics, and a gross error model of fuzzy clustering based on fuzzy relations and fuzzy equivalence relations is built. The concept and calculation formula of fuzzy eccentricity are defined to derive the evaluation rule and function for gross errors. On this basis, a fuzzy clustering method for separating and discriminating gross errors is established. Applied in a dynamic circular division measurement system, the method can identify and eliminate gross errors in the measured data and reduce their dispersion. Experimental results indicate that the method and model improve the repetitive precision of the system by 80% over the foregoing system, to reach 3.5 arcseconds, with an angle measurement error of less than 7 arcseconds.
Due to some shortcomings of the current multiple hypothesis solution separation advanced receiver autonomous integrity monitoring (MHSS ARAIM) algorithm, such as weak robustness and a large number of computational subsets with a heavy computational load, a method combining MHSS ARAIM with gross error detection is proposed in this paper. The gross error detection method is first used to identify and eliminate gross errors in the original data; the MHSS ARAIM algorithm then processes the data after gross error detection, which makes up for the weakness of the MHSS ARAIM algorithm. With data processing and analysis from several international GNSS service (IGS) and international GNSS monitoring and assessment system (iGMAS) stations, the results show that the new algorithm is superior to MHSS ARAIM for the localizer performance with vertical guidance down to 200 feet (LPV-200) service when using GPS and BDS measurement data. Under the assumption of a single faulty satellite, the effective monitoring threshold (EMT) is improved by about 22.47% and 9.63%, and the vertical protection level (VPL) is improved by about 32.28% and 12.98% for GPS and BDS observations, respectively. Moreover, under the assumption of two faulty satellites, the EMT is improved by about 80.85% and 29.88%, and the VPL is improved by about 49.66% and 18.24% for GPS and BDS observations, respectively.
Principal component analysis (PCA) based chi-square testing is more sensitive to subtle gross errors and has greater power to correctly detect gross errors than the classical chi-square test. However, the classical principal component test (PCT) is non-robust and can be very sensitive to one or more outliers. In this paper, a Huber-function-like robust weight factor is added to the collective chi-square test to eliminate the influence of gross errors on the PCT. Meanwhile, the robust chi-square test is applied to the modified simultaneous estimation of gross error (MSEGE) strategy to detect and identify multiple gross errors. Simulation results show that the proposed robust test can effectively reduce the possibility of type II errors. Although adding the robust chi-square test to MSEGE does not obviously improve the power of multiple gross error identification, the proposed approach accounts for the influence of outliers on the hypothesis test statistic and is more reasonable.
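The sketch below illustrates the robust statistic: the constraint residuals are rotated into standardized principal components, each component is damped by a Huber-type weight, and the weighted sum of squares is compared against a chi-square critical value. The balance matrix, covariance, and the tuning constant c are assumptions.

```python
# A minimal sketch of a Huber-weighted PCA chi-square test.
import numpy as np
from scipy import stats

A = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])  # linear balance A x = 0
V = np.diag([0.1, 0.1, 0.1]) ** 2
y = np.array([10.0, 10.1, 11.0])                     # one suspicious stream

r = A @ y                                            # constraint residuals
H = A @ V @ A.T                                      # their covariance
lam, U = np.linalg.eigh(H)
pc = (U.T @ r) / np.sqrt(lam)                        # standardized principal components

c = 1.345                                            # Huber tuning constant (assumed)
w = np.where(np.abs(pc) <= c, 1.0, c / np.abs(pc))   # Huber-like weight factor
T = np.sum((w * pc) ** 2)                            # robust collective statistic

crit = stats.chi2.ppf(0.95, df=len(pc))
print("gross error detected" if T > crit else "no gross error", T, crit)
```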
A novel mixed integer linear programming (NMILP) model for the detection of gross errors is presented in this paper. Yamamura et al. (1988) designed a model for gross error detection and data reconciliation based on the Akaike information criterion (AIC), but much computational cost is required due to its combinatorial nature. A mixed integer linear programming (MILP) approach was later introduced to reduce the computational cost and enhance robustness, but it loses the superior performance of maximum likelihood estimation. To reduce the computational cost while retaining the merit of maximum likelihood estimation, the simultaneous data reconciliation method in an MILP framework is decomposed into an NMILP subproblem and a quadratic programming (QP) or least squares estimation (LSE) subproblem. Simulation results for an industrial case show the high efficiency of the method.
In existing landslide susceptibility prediction (LSP) models, the influence of random errors in landslide conditioning factors on LSP is not considered; instead, the original conditioning factors are directly taken as the model inputs, which brings uncertainties to the LSP results. This study aims to reveal how different proportions of random error in the conditioning factors affect the LSP uncertainties, and further explores a method that can effectively reduce the random errors in the conditioning factors. The original conditioning factors are first used to construct original factors-based LSP models, and random errors of 5%, 10%, 15% and 20% are then added to these original factors to construct the corresponding errors-based LSP models. Secondly, low-pass filter-based LSP models are constructed by eliminating the random errors using the low-pass filter method. Thirdly, Ruijin County of China, with 370 landslides and 16 conditioning factors, is used as the study case, and three typical machine learning models, i.e. multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as the LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) the low-pass filter can effectively reduce the random errors in the conditioning factors and thus decrease the LSP uncertainties; (2) as the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously; (3) the original factors-based models are feasible for LSP in the absence of more accurate conditioning factors; (4) the influence of the two uncertainty issues, machine learning models and different proportions of random errors, on LSP modeling is large and basically the same; (5) Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in the conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
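A minimal sketch of the error reduction step follows: a zero-phase Butterworth low-pass filter, applied here to a synthetic conditioning-factor profile carrying 10% random error. The filter order, cutoff, and the synthetic factor are assumptions.

```python
# A minimal sketch, assuming a 4th-order Butterworth filter and a toy factor.
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(42)
factor = np.sin(np.linspace(0, 4 * np.pi, 512))         # stand-in conditioning factor
noisy = factor * (1 + 0.10 * rng.standard_normal(512))  # 10% random error added

b, a = butter(4, 0.1)                                   # assumed order and cutoff
filtered = filtfilt(b, a, noisy)                        # zero-phase low-pass filtering

rmse = lambda x: np.sqrt(np.mean((x - factor) ** 2))
print(f"RMSE noisy={rmse(noisy):.4f}, filtered={rmse(filtered):.4f}")
```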
AIM: To investigate the prevalence of visual impairment (VI) and provide an estimation of uncorrected refractive errors in school-aged children, conducted by optometry students as a community service. METHODS: The study was cross-sectional. Totally 3343 participants were included in the study. The initial examination involved assessing the uncorrected distance visual acuity (UDVA) and the visual acuity (VA) while using a +2.00 D lens. The inclusion criteria for a subsequent comprehensive cycloplegic eye examination, performed by an optometrist, were as follows: a UDVA <0.6 decimal (0.20 logMAR) and/or a VA with +2.00 D ≥0.8 decimal (0.96 logMAR). RESULTS: The sample had a mean age of 10.92±2.13y (range 4 to 17y), and 51.3% of the children were female (n=1715). The majority of the children (89.7%) fell within the age range of 8 to 14y. Among the ethnic groups, the highest representation was from the Luhya group (60.6%), followed by Luo (20.4%). The mean logMAR UDVA, choosing the best eye for each student, was 0.29±0.17 (range 1.70 to 0.22). Out of the total, 246 participants (7.4%) had a full eye examination. The estimated prevalence of myopia (defined as spherical equivalent ≤-0.5 D) was 1.45% of the total sample, while around 0.18% of the total sample had hyperopia exceeding +1.75 D. Refractive astigmatism (cylinder <-0.75 D) was found in 0.21% (7/3343) of the children. The VI prevalence was 1.26% of the total sample. Among our cases of VI, 76.2% could be attributed to uncorrected refractive error. Amblyopia was detected in 0.66% (22/3343) of the screened children. There was no statistically significant correlation observed between age or gender and refractive values. CONCLUSION: The primary cause of VI is uncorrected refractive error, with myopia being the most prevalent refractive error observed. These findings underscore the significance of early identification and correction of refractive errors in school-aged children as a means to alleviate the impact of VI.
The accuracy of landslide susceptibility prediction (LSP) mainly depends on the precision of the landslide spatial positions. However, spatial position errors in landslide surveys are inevitable, resulting in considerable uncertainties in LSP modeling. To overcome this drawback, this study explores the influence of landslide spatial position errors on LSP uncertainties, and then innovatively proposes a semi-supervised machine learning model to reduce the landslide spatial position error. This paper collected 16 environmental factors and 337 landslides with accurate spatial positions, taking Shangyou County of China as an example. The 30–110 m error-based multilayer perceptron (MLP) and random forest (RF) models for LSP are established by randomly offsetting the original landslides by 30, 50, 70, 90 and 110 m. The LSP uncertainties are analyzed through the LSP accuracy and distribution characteristics. Finally, a semi-supervised model is proposed to relieve the LSP uncertainties. Results show that: (1) the LSP accuracies of the error-based RF/MLP models decrease as the landslide position errors increase, and are lower than those of the original data-based models; (2) the 70 m error-based models can still reflect the overall distribution characteristics of the landslide susceptibility indices, so original landslides with certain position errors are acceptable for LSP; (3) the semi-supervised machine learning model can efficiently reduce the landslide position errors and thus improve the LSP accuracies.
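A toy sketch of the semi-supervised idea: a classifier trained on the accurately positioned samples pseudo-labels high-confidence unlabeled cells, which are then folded back into the training set. The synthetic data, the random forest choice, and the confidence threshold are assumptions, not the paper's exact model.

```python
# A toy self-training sketch for landslide susceptibility classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
X_lab = rng.normal(size=(300, 16))                   # 16 conditioning factors
y_lab = (X_lab[:, 0] + X_lab[:, 1] > 0).astype(int)  # toy landslide / non-landslide
X_unlab = rng.normal(size=(1000, 16))                # cells with uncertain positions

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_lab, y_lab)
proba = clf.predict_proba(X_unlab)
confident = proba.max(axis=1) > 0.9                  # assumed confidence threshold

# Fold high-confidence pseudo-labeled samples back into the training set.
X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
clf_semi = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_aug, y_aug)
```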
Gross primary productivity (GPP) of vegetation is an important constituent of the terrestrial carbon sink and is significantly influenced by drought. Understanding the impact of droughts on different types of vegetation GPP provides insight into the spatiotemporal variation of terrestrial carbon sinks, aiding efforts to mitigate the detrimental effects of climate change. In this study, we utilized precipitation and temperature data from the Climatic Research Unit, the standardized precipitation evapotranspiration index (SPEI), the standardized precipitation index (SPI), and vegetation GPP simulated with the eddy covariance-light use efficiency (EC-LUE) model to analyze the spatiotemporal change of GPP and its response to different drought indices in the Mongolian Plateau during 1982-2018. The main findings indicated that vegetation GPP decreased in 50.53% of the plateau, mainly in its northern and northeastern parts, while it increased in the remaining 49.47%. Specifically, meadow steppe (78.92%) and deciduous forest (79.46%) witnessed a significant decrease in vegetation GPP, while alpine steppe (75.08%), cropland (76.27%), and sandy vegetation (87.88%) recovered well. Warming aridification areas accounted for 71.39% of the affected areas, while 28.53% of the areas underwent severe aridification, mainly located in the south and central regions. Notably, the warming aridification areas of desert steppe (92.68%) and sandy vegetation (90.24%) were significant. Climate warming was found to amplify the sensitivity of coniferous forest, deciduous forest, meadow steppe, and alpine steppe GPP to drought. Additionally, the drought sensitivity of vegetation GPP in the Mongolian Plateau gradually decreased as altitude increased. The cumulative effect of drought on vegetation GPP persisted for 3.00-8.00 months. The findings of this study will improve the understanding of how drought influences vegetation in arid and semi-arid areas.
In this paper, an efficient unequal error protection (UEP) scheme for online fountain codes is proposed. In the buildup phase, a traversing-selection strategy is proposed to select the most important symbols (MIS). Then, in the completion phase, a weighted-selection strategy is applied to provide low overhead. The performance of the proposed scheme is analyzed and compared with the existing UEP online fountain scheme. Simulation results show that, in terms of the MIS and the least important symbols (LIS), when the bit error ratio is 10⁻⁴, the proposed scheme achieves 85% and 31.58% overhead reduction, respectively.
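As a toy illustration of weighted selection in fountain coding (not the paper's buildup and completion strategies), the sketch below biases the choice of source symbols toward the MIS when forming each encoded symbol; the degree distribution and the weights are assumptions.

```python
# A toy sketch of UEP-style weighted symbol selection in an LT-like encoder.
import random

k, mis = 100, 20                                       # total symbols, |MIS|
weights = [3.0 if i < mis else 1.0 for i in range(k)]  # assumed MIS weighting

def encode_symbol(source):
    degree = random.choice([1, 2, 3, 4])               # toy degree distribution
    idx = set(random.choices(range(k), weights=weights, k=degree))
    val = 0
    for i in idx:
        val ^= source[i]                               # XOR of the chosen symbols
    return sorted(idx), val

src = [random.getrandbits(8) for _ in range(k)]
neighbors, value = encode_symbol(src)                  # one encoded symbol
```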
Laser tracers are a three-dimensional coordinate measurement system that is widely used in industrial measurement. We propose a geometric error identification method based on multi-station synchronized laser tracers to enable the rapid and high-precision measurement of geometric errors for gantry-type computer numerical control (CNC) machine tools. This method also improves on the measurement efficiency issues of the existing single-base-station measurement method and multi-base-station time-sharing measurement method. We consider a three-axis gantry-type CNC machine tool, and the geometric error mathematical model is derived and established by combining screw theory with a topological analysis of the machine kinematic chain. The four laser tracer stations and the measurement points are positioned according to the multi-point positioning principle. A self-calibration algorithm is proposed for the coordinate calibration of a laser tracer using the Levenberg-Marquardt nonlinear least squares method, and the geometric error is solved using Taylor's first-order linearization iteration. The experimental results show that the geometric error calculated with this modeling method is comparable to the results from the Etalon laser tracer. For a volume of 800 mm×1000 mm×350 mm, the maximum differences of the linear, angular, and spatial position errors were 2.0 μm, 2.7 μrad, and 12.0 μm, respectively, which verifies the accuracy of the proposed algorithm. This research proposes a modeling method for the precise measurement of machine tool errors, and the applied nature of this study makes it relevant both to researchers and to the industrial sector.
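The core of the measurement step is multilateration. The sketch below recovers a single tool point from four station-to-point distances with the Levenberg-Marquardt method via SciPy; the station layout and noise level are assumptions, and the paper's full self-calibration of the station coordinates is not shown.

```python
# A minimal multilateration sketch: point from four distances, solved with LM.
import numpy as np
from scipy.optimize import least_squares

stations = np.array([[0.0, 0.0, 0.0],
                     [800.0, 0.0, 0.0],
                     [0.0, 1000.0, 0.0],
                     [800.0, 1000.0, 350.0]])        # mm, assumed station layout
p_true = np.array([400.0, 500.0, 175.0])

rng = np.random.default_rng(3)
d_meas = np.linalg.norm(stations - p_true, axis=1) + rng.normal(0, 1e-3, 4)

def residuals(p):
    return np.linalg.norm(stations - p, axis=1) - d_meas

sol = least_squares(residuals, x0=np.array([100.0, 100.0, 100.0]), method="lm")
print(sol.x, np.linalg.norm(sol.x - p_true))         # recovered point, error (mm)
```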
In the era of exponential growth of data availability, the architecture of systems has a trend toward high dimensionality, and directly exploiting holistic information for state inference is not always computationally affordable. This paper proposes a novel Bayesian filtering algorithm that considers algorithmic computational cost and estimation accuracy for high-dimensional linear systems. The high-dimensional state vector is divided into several blocks to save computation resources by avoiding the calculation of an error covariance of immense dimensions. After that, two sequential states are estimated simultaneously by introducing an auxiliary variable in the new probability space, mitigating the performance degradation caused by state segmentation. Moreover, the computational cost and error covariance of the proposed algorithm are analyzed analytically to show its distinct features compared with several existing methods. Simulation results illustrate that the proposed Bayesian filtering can maintain higher estimation accuracy with reasonable computational cost when applied to high-dimensional linear systems.
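To make the block idea concrete, the toy sketch below runs a Kalman filter independently on each block of a high-dimensional random-walk state, so only small per-block covariances are ever stored. Block-independent dynamics and the noise levels are assumptions; the paper's auxiliary-variable coupling of two sequential states is not reproduced here.

```python
# A toy block-wise Kalman filter for a high-dimensional random-walk state.
import numpy as np

n, n_blocks = 1000, 10
blk = n // n_blocks
rng = np.random.default_rng(0)

x = np.zeros(n)                                     # true state (random walk)
xh = np.zeros(n)                                    # block-wise estimates
P = [np.eye(blk) for _ in range(n_blocks)]          # per-block covariances only
Q, R = 0.01, 0.04                                   # process/measurement noise vars

for t in range(50):
    x += rng.normal(0, np.sqrt(Q), n)
    z = x + rng.normal(0, np.sqrt(R), n)            # full-state measurement
    for i in range(n_blocks):
        s = slice(i * blk, (i + 1) * blk)
        Pi = P[i] + Q * np.eye(blk)                 # predict (F = I)
        K = Pi @ np.linalg.inv(Pi + R * np.eye(blk))  # update (H = I)
        xh[s] = xh[s] + K @ (z[s] - xh[s])
        P[i] = (np.eye(blk) - K) @ Pi

print(np.sqrt(np.mean((xh - x) ** 2)))              # RMSE of the block-wise filter
```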