Readout errors caused by measurement noise are a significant source of errors in quantum circuits, which severely affect the output results and are an urgent problem to be solved in noisy intermediate-scale quantum (NISQ) computing. In this paper, we use the bit-flip averaging (BFA) method to mitigate readout errors in quantum generative adversarial networks (QGAN) for image generation. BFA simplifies the structure of the response matrix by randomly bit-flipping each qubit in advance and averaging, which avoids the high measurement cost of traditional error mitigation methods. Our experiments were simulated in Qiskit on a handwritten-digit image dataset. Under the BFA-based method, the Kullback-Leibler (KL) divergence of the generated images converges to 0.04, 0.05, and 0.1 for readout error probabilities of p=0.01, p=0.05, and p=0.1, respectively. Additionally, by evaluating the fidelity of the quantum states representing the images, we observe average fidelity values of 0.97, 0.96, and 0.95 for the three readout error probabilities, respectively. These results demonstrate the robustness of the model in mitigating readout errors and provide a highly fault-tolerant mechanism for image generation models.
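The core idea of bit-flip averaging can be seen in a toy single-qubit sketch: randomly applying an X flip before measurement and undoing it classically symmetrizes an asymmetric readout channel, so the effective error rate becomes the average of the two flip probabilities. This is a minimal illustration, not the QGAN pipeline above; the rates `p01` and `p10` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical asymmetric single-qubit readout error rates:
p01 = 0.02  # P(read 1 | prepared 0)
p10 = 0.10  # P(read 0 | prepared 1)

def measure(bit, flip):
    """Measure one bit with asymmetric readout error.
    If `flip` is 1, an X flip is applied before measurement and the
    recorded outcome is flipped back classically afterwards."""
    b = bit ^ flip
    err = p01 if b == 0 else p10
    out = b ^ (rng.random() < err)
    return out ^ flip

shots = 100_000
# Without BFA: the error rate on a prepared |1> is p10.
plain = np.mean([measure(1, 0) == 0 for _ in range(shots)])
# With BFA: a random flip per shot averages the two rates to (p01+p10)/2.
bfa = np.mean([measure(1, int(rng.integers(2))) == 0 for _ in range(shots)])
```

After averaging, the channel behaves like a symmetric bit-flip channel, whose response matrix is diagonalizable by construction, which is what makes the correction cheap.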
In this paper, numerical experiments are carried out to investigate the impact of penalty parameters in the numerical traces on the resonance errors of high-order multiscale discontinuous Galerkin (DG) methods (Dong et al. in J Sci Comput 66:321–345, 2016; Dong and Wang in J Comput Appl Math 380:1–11, 2020) for a one-dimensional stationary Schrödinger equation. Previous work showed that penalty parameters were required to be positive in the error analysis, but the methods with zero penalty parameters worked fine in numerical simulations on coarse meshes. In this work, by performing extensive numerical experiments, we discover that zero penalty parameters lead to resonance errors in the multiscale DG methods, and that taking positive penalty parameters can effectively reduce resonance errors and give the matrix in the global linear system better condition numbers.
In existing landslide susceptibility prediction (LSP) models, the influence of random errors in landslide conditioning factors on LSP is not considered; instead, the original conditioning factors are taken directly as model inputs, which brings uncertainties to the LSP results. This study aims to reveal how different proportions of random error in the conditioning factors influence LSP uncertainties, and further to explore a method that can effectively reduce the random errors in the conditioning factors. The original conditioning factors are first used to construct original-factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Secondly, low-pass-filter-based LSP models are constructed by eliminating the random errors with a low-pass filter. Thirdly, Ruijin County of China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e. multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed and the results show that: (1) The low-pass filter can effectively reduce the random errors in the conditioning factors and thereby decrease the LSP uncertainties. (2) As the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously. (3) The original-factors-based models are feasible for LSP in the absence of more accurate conditioning factors. (4) The influence degrees of the two uncertainty issues, machine learning models and different proportions of random errors, on the LSP modeling are large and basically the same. (5) The Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in the conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
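The central claim — that low-pass filtering suppresses proportional random error added to a conditioning factor — can be sketched on synthetic data. This is an illustrative stand-in (a sine-shaped "factor" and a simple moving-average FIR filter), not the study's actual factors or filter design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical smooth spatial profile of one conditioning factor (e.g. slope).
x = np.linspace(0, 10, 500)
factor = np.sin(x) + 2.0

# Add ~10% proportional random error, one of the levels used in the study.
noisy = factor * (1 + 0.10 * rng.standard_normal(x.size))

# Simple FIR low-pass filter: moving average over a short window.
w = 21
kernel = np.ones(w) / w
filtered = np.convolve(noisy, kernel, mode="same")

rmse_noisy = np.sqrt(np.mean((noisy - factor) ** 2))
# Compare away from the boundary, where 'same' padding biases the filter.
core = slice(w, -w)
rmse_filt = np.sqrt(np.mean((filtered[core] - factor[core]) ** 2))
```

Because the random error is broadband while the underlying factor varies slowly, averaging over 21 samples cuts the noise roughly by a factor of sqrt(21) at the cost of a small smoothing bias.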
To solve the complex weight-matrix derivative problem that arises when the weighted least squares method is used to estimate the parameters of the mixed additive and multiplicative random error model (MAM error model), we use a derivative-free improved artificial bee colony algorithm together with the bootstrap method to estimate the parameters and evaluate the accuracy of the MAM error model. The improved artificial bee colony algorithm can update individuals in multiple dimensions and improve the cooperation among individuals by constructing a new search equation based on the idea of quasi-affine transformation. The experimental results show that, under the weighted least squares criterion, the algorithm obtains results consistent with the weighted least squares method without repeated formula derivation. The parameter estimation and accuracy evaluation based on the bootstrap method yield better parameter estimates and more reasonable accuracy information than existing methods, providing a new idea for the theory of parameter estimation and accuracy evaluation of the MAM error model.
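The bootstrap half of this recipe is easy to sketch: resample the data with replacement, refit, and read the accuracy information off the spread of the refitted parameters. The model below (a slope through the origin with mixed additive and multiplicative noise) and all its numbers are hypothetical, and a plain least-squares fit stands in for the improved artificial bee colony optimizer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical observations: y = a*x with mixed additive + multiplicative noise.
a_true = 2.0
x = np.linspace(1, 10, 60)
y = a_true * x * (1 + 0.02 * rng.standard_normal(x.size)) + 0.05 * rng.standard_normal(x.size)

def fit(xs, ys):
    # Least-squares slope through the origin (stand-in for the real optimizer).
    return np.sum(xs * ys) / np.sum(xs * xs)

a_hat = fit(x, y)

# Bootstrap: resample (x, y) pairs with replacement, refit each resample.
B = 2000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, x.size, x.size)
    boot[b] = fit(x[idx], y[idx])

se = boot.std(ddof=1)                     # bootstrap standard error
lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile interval
```

The appeal for the MAM model is exactly what the abstract states: no analytic derivative of the weight matrix is ever needed, because accuracy comes from resampling rather than from a covariance formula.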
A higher-order boundary element method (HOBEM) is presented for inviscid flow past cylinders in a bounded or unbounded domain. The traditional boundary integral equation is established with respect to the velocity potential and its normal derivative. In the present work, a new integral equation is derived for the tangential velocity. The boundary is discretized into higher-order elements to ensure the continuity of slope at the element nodes. The velocity potential is also expanded with higher-order shape functions, in which the unknown coefficients involve the tangential velocity. The expansion then ensures the continuity of the velocity and of the slope of the boundary at element nodes. Through extensive comparison with the analytical solution for cylinders, it is shown that the present HOBEM is much more accurate than the conventional BEM.
In this paper, we present a posteriori error estimates of two-grid mixed finite element methods by averaging techniques for semilinear elliptic equations. We first propose the two-grid algorithms to linearize the mixed-method equations. Then, the averaging technique is used to construct the a posteriori error estimates of the two-grid mixed finite element method, and theoretical analysis is given for the error estimators. Finally, we give some numerical examples to verify the reliability and efficiency of the a posteriori error estimator.
With the development of ultra-wide coverage technology, multibeam echo-sounder (MBES) systems place higher demands on the localization accuracy and computational efficiency of ray-tracing methods. The classical equivalent sound speed profile (ESSP) method replaces the measured sound velocity profile (SVP) with a simple constant-gradient SVP, reducing the computational workload of beam positioning. However, in deep-sea environments, the depth measurement error of this method increases rapidly from the central beam to the edge beams. By analyzing the positioning error of the ESSP method at the edge beams, it is found that the positioning error increases monotonically with the incident angle, and that the relationship between them can be expressed by a polynomial function. An error correction algorithm based on polynomial fitting is therefore obtained. A simulation experiment on an inclined seafloor shows that the proposed algorithm is comparable in efficiency to the original ESSP method while improving bathymetric accuracy in the edge beams by nearly a factor of eight.
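The correction step reduces to fitting a low-order polynomial to the angle-dependent error curve and subtracting the model at each beam's incident angle. The sketch below uses an assumed cubic error trend and invented depth numbers purely for illustration; the paper's actual calibration data and polynomial order may differ.

```python
import numpy as np

# Hypothetical calibration data: depth error of the ESSP method grows
# monotonically with the beam incident angle (degrees).
theta = np.linspace(0, 70, 15)
true_coeffs = [1.2e-5, -2.0e-4, 0.01, 0.0]   # assumed cubic trend (metres)
err = np.polyval(true_coeffs, theta)

# Fit a low-order polynomial to the observed error curve ...
fit = np.polyfit(theta, err, 3)

# ... then correct an edge-beam sounding by subtracting the modelled error.
theta_edge = 65.0
raw_depth = 4000.0 + np.polyval(true_coeffs, theta_edge)  # biased sounding
corrected = raw_depth - np.polyval(fit, theta_edge)
```

Since evaluating a cubic per beam is trivial next to ray tracing, this is consistent with the claim that the corrected method keeps essentially the original ESSP efficiency.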
Linear minimum mean square error (MMSE) detection has been shown to achieve near-optimal performance for massive multiple-input multiple-output (MIMO) systems, but it inevitably involves a complicated matrix inversion, which entails high complexity. To avoid the exact matrix inversion, a considerable number of implicit and explicit approximate-matrix-inversion-based detection methods have been proposed. By combining the advantages of both explicit and implicit matrix inversion, this paper introduces a new low-complexity signal detection algorithm. First, the relationship between implicit and explicit techniques is analyzed. Then, an enhanced Newton iteration method is introduced to realize approximate MMSE detection for massive MIMO uplink systems. The proposed improved Newton iteration significantly reduces the complexity of the conventional Newton iteration; however, its complexity is still high for later iterations, so it is applied only for the first two iterations. For subsequent iterations, we propose a novel trace iterative method (TIM) based low-complexity algorithm, which has significantly lower complexity than higher Newton iterations. Convergence guarantees of the proposed detector are also provided. Numerical simulations verify that the proposed detector exhibits significant performance enhancement over recently reported iterative detectors and achieves close-to-MMSE performance while retaining the low-complexity advantage for systems with hundreds of antennas.
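The classical Newton iteration that such detectors build on approximates the inverse of the MMSE filtering matrix A = H^H H + sigma^2 I via X_{k+1} = X_k (2I - A X_k), seeded from the diagonal of A, which is strongly dominant in massive MIMO. This sketch shows only that baseline iteration on a random channel; the paper's enhanced Newton variant and the TIM are not reproduced, and the dimensions and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 256-antenna, 16-user uplink channel, i.i.d. CN(0, 1) entries.
H = (rng.standard_normal((256, 16)) + 1j * rng.standard_normal((256, 16))) / np.sqrt(2)
A = H.conj().T @ H + 0.1 * np.eye(16)        # MMSE matrix, diagonally dominant

X = np.diag(1.0 / np.diag(A).real).astype(complex)  # X0 from the diagonal of A
for _ in range(6):
    X = X @ (2 * np.eye(16) - A @ X)         # Newton: X_{k+1} = X_k(2I - A X_k)

residual = np.linalg.norm(np.eye(16) - A @ X)
```

Each iteration squares the error matrix I - A X_k, so convergence is quadratic once the spectral radius of I - A X_0 is below one — which the diagonal initialisation guarantees with high probability when the antenna count far exceeds the user count.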
In this paper, we propose the nonconforming virtual element method (NCVEM) discretization for the pointwise control-constrained optimal control problem governed by elliptic equations. Based on the NCVEM approximation of the state equation and the variational discretization of the control variables, we construct a virtual element discrete scheme. For the state, adjoint state and control variable, we obtain the corresponding a priori estimates in the H<sup>1</sup> and L<sup>2</sup> norms. Finally, some numerical experiments are carried out to support the theoretical results.
The efficacy of error correction and of various correction approaches is one of the key issues in second-language writing faced by both teachers and researchers. The current paper reviews the definition of error correction and examines the different views on whether errors in L2 writing should be corrected. In particular, the paper discusses and analyses three common correction methods: direct correction, peer feedback and indirect correction. Teachers are encouraged to weigh the advantages and disadvantages of these methods against the current literature, employ the most beneficial error correction method in L2 writing, and adapt it to their teaching context.
Machining and misalignment errors play a very critical role in the performance of the anti-backlash double-roller enveloping hourglass worm gear (ADEHWG), yet an efficient method for eliminating or reducing the effect of these errors on the tooth profile of the ADEHWG is seldom reported. The gear engagement equation and tooth profile equation accounting for six different errors that can arise from machining and gear misalignment are derived from the theories of differential geometry and gear meshing. Tooth contact analysis (TCA) is then used to systematically investigate the influence of the machining and misalignment errors on the contact curves and the tooth profile by means of numerical analysis and three-dimensional solid modeling. The results show that the vertical angular misalignment of the worm wheel (Δβ) has the strongest influence, while the tooth angle error (Δα) has the weakest influence, on the contact curves and the tooth profile. A novel efficient approach is proposed to minimize the effect of the errors in manufacturing by changing the radius of the grinding wheel and the approaching point of contact. The results from the TCA and the experiment demonstrate that this tooth profile design modification method can indeed reduce the machining and misalignment errors. The modification design method is helpful in understanding the manufacturing technology of the ADEHWG.
An H^1-Galerkin mixed finite element method is discussed for a class of second-order Schrödinger equations. Optimal error estimates of semidiscrete schemes are derived for problems in one space dimension, and optimal error estimates are also derived for fully discrete schemes. It is shown that the H^1-Galerkin mixed finite element approximations have the same rate of convergence as the classical mixed finite element methods without requiring the LBB consistency condition.
The efficiency of an optimization method for acoustic emission/microseismic (AE/MS) source location is determined by the compatibility of its error definition with the errors contained in the input data. This compatibility can be examined in terms of the distribution of station residuals. In an ideal distribution, an input error is held at the station where it occurs as that station's residual, and the error is not permitted to spread to other stations. A comparison of two optimization methods, the least squares method and the absolute value method, shows that a distribution with this character constrains the input errors and minimizes their impact, which explains the much more robust performance of the absolute value method in dealing with large and isolated input errors. When the errors in the input data are systematic and/or so extreme that the basic data structure is altered by them, neither optimization method is able to function. The only means of resolving this problem is the early detection and correction of these errors through a data screening process. An efficient data screening process is of primary importance for AE/MS source location: in addition to its critical role in dealing with systematic and extreme errors, data screening creates a favorable environment for applying optimization methods.
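The robustness contrast between the two criteria is easy to demonstrate on synthetic data: fit a line to nine clean points plus one gross "station" error under both the least squares (L2) and absolute value (L1) criteria. The data are invented, and the L1 fit is computed here by iteratively reweighted least squares, one standard way (not necessarily the paper's) to minimise the sum of absolute residuals.

```python
import numpy as np

# Hypothetical arrival-time-style data: nine points on a line plus one
# gross, isolated error, mimicking a single bad station reading.
x = np.arange(10, dtype=float)
y = 3.0 + 0.5 * x
y[7] += 5.0                                  # gross error at one "station"

A = np.column_stack([np.ones_like(x), x])

# Least squares (L2) fit: the outlier drags the whole line.
p_l2, *_ = np.linalg.lstsq(A, y, rcond=None)

# Absolute-value (L1) criterion via iteratively reweighted least squares:
# weights 1/|r| turn the weighted L2 problem into an L1 approximation.
p_l1 = p_l2.copy()
for _ in range(100):
    r = y - A @ p_l1
    w = 1.0 / np.maximum(np.abs(r), 1e-8)    # guard against division by zero
    Aw = A * w[:, None]
    p_l1 = np.linalg.solve(A.T @ Aw, Aw.T @ y)

slope_l2, slope_l1 = p_l2[1], p_l1[1]
```

The L1 fit holds the error at the bad point as that point's residual and recovers the true slope, while the L2 fit spreads the error across all points — exactly the residual-distribution behaviour the abstract describes.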
Although there are multi-sensor methods for measuring the straightness and tilt errors of a linear slideway, they need further improvement in aspects such as suppressing measurement noise and reducing preconditions. In this paper, a new four-sensor method with an improved measurement system is proposed to separate, on-machine, the straightness and tilt errors of a linear slideway from the sensor outputs, considering the influences of the reference surface profile and the zero-adjustment values. The improved system is achieved by adjusting a single sensor to different positions. Based on the system, a system of linear equations is built by fusing the sensor outputs to cancel out the effects of the straightness and tilt errors. Three constraints are then derived and added to the linear system to make the coefficient matrix full rank. To restrain the sensitivity of the solution of the linear system to the measurement noise in the sensor outputs, the Tikhonov regularization method is utilized. After the surface profile is obtained from the solution, the straightness and tilt errors are identified from the sensor outputs. To analyze the effects of the measurement noise and the positioning errors of the sensor and the linear slideway, a series of computer simulations is carried out, and an experiment conducted for validation shows good consistency. The new four-sensor method with the improved measurement system provides a new way to measure the straightness and tilt errors of a linear slideway that guarantees favorable propagation of the residuals induced by the noise and the positioning errors.
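Why Tikhonov regularization tames noise sensitivity can be shown on a generic ill-conditioned linear system standing in for the one built from the sensor outputs: the naive solve amplifies noise through the small singular values, while the regularized normal equations damp exactly those directions. The matrix, singular-value spectrum, noise level and λ below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ill-conditioned system A x = b with measurement noise,
# standing in for the fused four-sensor linear system.
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -8, n)                    # singular values 1 .. 1e-8
A = U @ np.diag(s) @ V.T
x_true = np.sin(np.linspace(0, 3, n))        # a smooth "profile"
b = A @ x_true + 1e-6 * rng.standard_normal(n)

# Naive solve: noise divided by tiny singular values blows up.
x_naive = np.linalg.solve(A, b)

# Tikhonov: min ||Ax-b||^2 + lam*||x||^2  =>  (A^T A + lam I) x = A^T b.
lam = 1e-6
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
```

Choosing λ trades a small smoothing bias against the noise amplification; in the paper's setting this is what keeps the recovered surface profile stable against sensor noise.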
Straightness error is an important parameter in measuring high-precision shafts. The new-generation geometrical product specification (GPS) requires that the measurement uncertainty characterizing the reliability of the results be given together with the measurement result. Most research on straightness focuses on error calculation, and only a few projects evaluate the measurement uncertainty based on the Guide to the Expression of Uncertainty in Measurement (GUM). In order to compute spatial straightness error (SSE) accurately and rapidly and to overcome the limitations of the GUM, a quasi particle swarm optimization (QPSO) is proposed to solve the minimum zone SSE, and the Monte Carlo method (MCM) is developed to estimate the measurement uncertainty. The mathematical model of the minimum zone SSE is formulated. In QPSO, quasi-random sequences are applied to the generation of the initial positions and velocities of the particles, and the velocities are modified by the constriction factor approach. The flow of measurement uncertainty evaluation based on MCM is proposed, at whose heart is repeated sampling from the probability density function (PDF) of every input quantity and evaluation of the model in each case. The minimum zone SSE of a shaft measured on a coordinate measuring machine (CMM) is calculated by QPSO, and the measurement uncertainty is evaluated by MCM on the basis of an analysis of the uncertainty contributors. The results show that the uncertainty directly influences the product judgment result; it is therefore scientific and reasonable to consider the influence of the uncertainty in judging whether parts are accepted or rejected, especially for those located in the uncertainty zone. The proposed method is especially suitable when the PDF of the measurand cannot adequately be approximated by a Gaussian distribution or a scaled and shifted t-distribution, or when the measurement model is non-linear.
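The MCM workflow described here — draw from the input PDFs, evaluate the (possibly non-linear) measurement model per draw, and read the estimate, standard uncertainty and coverage interval off the output sample — can be sketched in a few lines. The model below (the range of three probed heights, a deliberately non-linear max-min form) and its input uncertainties are hypothetical, not the paper's CMM model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical non-linear measurement model: f = max(z) - min(z)
# of three probed heights (micrometres), each with Gaussian probe noise.
z_nom = np.array([0.0, 4.0, 1.0])     # nominal heights
u_z = 0.5                             # standard uncertainty per probe

M = 100_000                           # Monte Carlo trials
samples = z_nom + u_z * rng.standard_normal((M, 3))
f = samples.max(axis=1) - samples.min(axis=1)   # evaluate the model per draw

f_hat = f.mean()                        # Monte Carlo estimate of the measurand
u_f = f.std(ddof=1)                     # standard uncertainty
lo, hi = np.percentile(f, [2.5, 97.5])  # 95% coverage interval
```

Because the output PDF is built empirically, no linearization and no Gaussian assumption are needed — which is precisely the regime where the abstract says MCM outperforms the GUM framework.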
In this paper, an analogue correction method of errors (ACE) based on a complicated atmospheric model is further developed and applied to numerical weather prediction (NWP). The analysis shows that the ACE can effectively reduce model errors by combining the statistical analogue method with the dynamical model, so that the information in a wealth of historical data is utilized in the current complicated NWP model. Furthermore, in the ACE, the differing similarities between the historical analogues and the current initial state are used as the weights for estimating model errors. The results of daily, ten-day and monthly prediction experiments on a complicated T63 atmospheric model show that the ACE scheme that corrects model errors from the estimated errors of four historical analogue predictions performs better not only than the scheme that introduces the error correction of each single analogue prediction, but also than the T63 model itself.
For various reasons, inspection methods often need to be changed and detection reagents often need to be replaced. In this study, a comparative experiment was conducted between the ethanol-based and ether-based methods for determining the oil content of imported wool. The determination results obtained from the two methods were treated as abscissa and ordinate, respectively, and their linear relationship was analyzed. From the linear regression analysis, a conversion equation between the determination results of the two methods was obtained. In addition, the repeatability admissible error and reproducibility admissible error were established by analyzing the comparative experimental results with scientific software. This study brings new ideas for further research in this field and provides a reference for solving similar problems in actual inspection work.
In this work, synchronous cutting of the concave and convex surfaces of a hypoid gear was achieved using the duplex helical method, and the problem of tooth surface error correction was studied. First, the mathematical model of hypoid gears machined by the duplex helical method was established. Second, the coordinates of discrete points on the tooth surface were obtained by a measurement center, and the normal errors of the discrete points were calculated. Third, a tooth surface error correction model was established, and the tooth surface error was corrected using both the Levenberg-Marquardt algorithm with a trust region strategy and the least squares method. Finally, grinding experiments were carried out with the machining parameters obtained by the Levenberg-Marquardt algorithm with the trust region strategy, which corrected the tooth surface error better than the least squares method. After correction, the maximum absolute error was reduced from 30.9 μm to 6.8 μm, the root mean square of the concave-surface error was reduced from 15.1 to 2.1 μm, the root mean square of the convex-surface error was reduced from 10.8 to 1.8 μm, and the sum of squared errors of the concave and convex surfaces was reduced from 15471 to 358 μm². These results verify that the Levenberg-Marquardt algorithm with a trust region strategy corrects the tooth surface error of hypoid gears machined by the duplex helical method with good accuracy.
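At its core, this correction step is a damped non-linear least-squares problem: find the machine-setting deviations p whose simulated normal errors best explain the measured ones. The sketch below runs a bare-bones Levenberg-Marquardt loop on a toy two-parameter "sensitivity" model; the model, point count and noise level are invented stand-ins for the real gear-machining kinematics.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical toy model: normal error at surface point s_i is
# e_i(p) = p0*s_i + p1*s_i^2 for unknown setting deviations p (micrometres).
s = np.linspace(-1, 1, 45)                  # discrete tooth-surface points
p_true = np.array([8.0, -3.0])              # "true" setting deviations
measured = p_true[0] * s + p_true[1] * s**2 + 0.1 * rng.standard_normal(45)

def residual(p):
    return p[0] * s + p[1] * s**2 - measured

def jacobian(p):
    return np.column_stack([s, s**2])       # constant: the toy model is linear

# Basic Levenberg-Marquardt loop with an adaptive damping parameter,
# the simplest form of a trust-region-style safeguard.
p, lam = np.zeros(2), 1e-2
for _ in range(50):
    J, r = jacobian(p), residual(p)
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
        p, lam = p + step, lam * 0.5        # accept the step: relax damping
    else:
        lam *= 2.0                          # reject the step: increase damping

rms_before = np.sqrt(np.mean(measured ** 2))
rms_after = np.sqrt(np.mean(residual(p) ** 2))
```

The damping parameter plays the role of the trust region: large λ shrinks steps toward steepest descent when a step fails, small λ recovers fast Gauss-Newton steps near the solution — the property the paper exploits against the plain least squares fit.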
This paper deals with a-posteriori error estimates for piecewise linear finite element approximations of parabolic problems in two space dimensions. The analysis extends previous results for elliptic problems to the parabolic context.
The subject of this work is to propose adaptive finite element methods based on an optimal maximum norm error control estimate. Using estimators of the local regularity of the unknown exact solution derived from computed approximate solutions, the proposed procedures are analyzed in detail for a non-trivial class of corner problems and shown to be efficient in the sense that they generate the correct type of refinement and lead to the desired control under consideration.
基金Project supported by the Natural Science Foundation of Shandong Province,China (Grant No.ZR2021MF049)Joint Fund of Natural Science Foundation of Shandong Province (Grant Nos.ZR2022LLZ012 and ZR2021LLZ001)。
文摘Readout errors caused by measurement noise are a significant source of errors in quantum circuits,which severely affect the output results and are an urgent problem to be solved in noisy-intermediate scale quantum(NISQ)computing.In this paper,we use the bit-flip averaging(BFA)method to mitigate frequent readout errors in quantum generative adversarial networks(QGAN)for image generation,which simplifies the response matrix structure by averaging the qubits for each random bit-flip in advance,successfully solving problems with high cost of measurement for traditional error mitigation methods.Our experiments were simulated in Qiskit using the handwritten digit image recognition dataset under the BFA-based method,the Kullback-Leibler(KL)divergence of the generated images converges to 0.04,0.05,and 0.1 for readout error probabilities of p=0.01,p=0.05,and p=0.1,respectively.Additionally,by evaluating the fidelity of the quantum states representing the images,we observe average fidelity values of 0.97,0.96,and 0.95 for the three readout error probabilities,respectively.These results demonstrate the robustness of the model in mitigating readout errors and provide a highly fault tolerant mechanism for image generation models.
基金supported by the National Science Foundation grant DMS-1818998.
文摘In this paper,numerical experiments are carried out to investigate the impact of penalty parameters in the numerical traces on the resonance errors of high-order multiscale discontinuous Galerkin(DG)methods(Dong et al.in J Sci Comput 66:321–345,2016;Dong and Wang in J Comput Appl Math 380:1–11,2020)for a one-dimensional stationary Schrödinger equation.Previous work showed that penalty parameters were required to be positive in error analysis,but the methods with zero penalty parameters worked fine in numerical simulations on coarse meshes.In this work,by performing extensive numerical experiments,we discover that zero penalty parameters lead to resonance errors in the multiscale DG methods,and taking positive penalty parameters can effectively reduce resonance errors and make the matrix in the global linear system have better condition numbers.
基金This work is funded by the National Natural Science Foundation of China(Grant Nos.42377164 and 52079062)the National Science Fund for Distinguished Young Scholars of China(Grant No.52222905).
文摘In the existing landslide susceptibility prediction(LSP)models,the influences of random errors in landslide conditioning factors on LSP are not considered,instead the original conditioning factors are directly taken as the model inputs,which brings uncertainties to LSP results.This study aims to reveal the influence rules of the different proportional random errors in conditioning factors on the LSP un-certainties,and further explore a method which can effectively reduce the random errors in conditioning factors.The original conditioning factors are firstly used to construct original factors-based LSP models,and then different random errors of 5%,10%,15% and 20%are added to these original factors for con-structing relevant errors-based LSP models.Secondly,low-pass filter-based LSP models are constructed by eliminating the random errors using low-pass filter method.Thirdly,the Ruijin County of China with 370 landslides and 16 conditioning factors are used as study case.Three typical machine learning models,i.e.multilayer perceptron(MLP),support vector machine(SVM)and random forest(RF),are selected as LSP models.Finally,the LSP uncertainties are discussed and results show that:(1)The low-pass filter can effectively reduce the random errors in conditioning factors to decrease the LSP uncertainties.(2)With the proportions of random errors increasing from 5%to 20%,the LSP uncertainty increases continuously.(3)The original factors-based models are feasible for LSP in the absence of more accurate conditioning factors.(4)The influence degrees of two uncertainty issues,machine learning models and different proportions of random errors,on the LSP modeling are large and basically the same.(5)The Shapley values effectively explain the internal mechanism of machine learning model predicting landslide sus-ceptibility.In conclusion,greater proportion of random errors in conditioning factors results in higher LSP uncertainty,and low-pass filter can effectively reduce these 
random errors.
基金supported by the National Natural Science Foundation of China(No.42174011 and No.41874001).
文摘To solve the complex weight matrix derivative problem when using the weighted least squares method to estimate the parameters of the mixed additive and multiplicative random error model(MAM error model),we use an improved artificial bee colony algorithm without derivative and the bootstrap method to estimate the parameters and evaluate the accuracy of MAM error model.The improved artificial bee colony algorithm can update individuals in multiple dimensions and improve the cooperation ability between individuals by constructing a new search equation based on the idea of quasi-affine transformation.The experimental results show that based on the weighted least squares criterion,the algorithm can get the results consistent with the weighted least squares method without multiple formula derivation.The parameter estimation and accuracy evaluation method based on the bootstrap method can get better parameter estimation and more reasonable accuracy information than existing methods,which provides a new idea for the theory of parameter estimation and accuracy evaluation of the MAM error model.
基金financially supported by the National Natural Science Foundation of China (Grant Nos.52271276,52271319,and 52201364)the Natural Science Foundation of Jiangsu Province (Grant No.BK20201006)。
文摘A higher order boundary element method(HOBEM)is presented for inviscid flow passing cylinders in bounded or unbounded domain.The traditional boundary integral equation is established with respect to the velocity potential and its normal derivative.In present work,a new integral equation is derived for the tangential velocity.The boundary is discretized into higher order elements to ensure the continuity of slope at the element nodes.The velocity potential is also expanded with higher order shape functions,in which the unknown coefficients involve the tangential velocity.The expansion then ensures the continuities of the velocity and the slope of the boundary at element nodes.Through extensive comparison of the results for the analytical solution of cylinders,it is shown that the present HOBEM is much more accurate than the conventional BEM.
Abstract: In this paper, we present a posteriori error estimates of two-grid mixed finite element methods by averaging techniques for semilinear elliptic equations. We first propose two-grid algorithms to linearize the mixed-method equations. The averaging technique is then used to construct a posteriori error estimates of the two-grid mixed finite element method, and theoretical analysis is given for the error estimators. Finally, we give some numerical examples to verify the reliability and efficiency of the a posteriori error estimator.
Funding: The Natural Science Foundation of Shandong Province of China under contract Nos. ZR2022MA051 and ZR2020MA090; the National Natural Science Foundation of China under contract No. U22A2012; the China Postdoctoral Science Foundation under contract No. 2020M670891; the SDUST Research Fund under contract No. 2019TDJH103; the Talent Introduction Plan for Youth Innovation Teams in Universities of Shandong Province (innovation team of satellite positioning and navigation).
Abstract: With the development of ultra-wide-coverage technology, the multibeam echo-sounder (MBES) system places higher demands on the localization accuracy and computational efficiency of ray tracing methods. The classical equivalent sound speed profile (ESSP) method replaces the measured sound velocity profile (SVP) with a simple constant-gradient SVP, reducing the computational workload of beam positioning. However, in a deep-sea environment, the depth measurement error of this method increases rapidly from the central beam to the edge beams. By analyzing the positioning error of the ESSP method at the edge beams, it is found that the positioning error increases monotonically with the incident angle and that the relationship between them can be expressed by a polynomial function. An error correction algorithm based on polynomial fitting is therefore obtained. A simulation experiment on an inclined seafloor shows that the proposed algorithm has efficiency comparable to that of the original ESSP method while improving bathymetry accuracy at the edge beams by nearly a factor of eight.
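The core of the correction step, fitting a polynomial to the depth error as a function of incident angle and subtracting it, can be sketched in a few lines of numpy. The error curve below is illustrative, not taken from the paper's deep-sea data.

```python
import numpy as np

# Hypothetical calibration data: the ESSP depth error grows monotonically
# with the beam incident angle (values are illustrative only).
angle_deg = np.linspace(0.0, 70.0, 15)
depth_err = 0.002 * angle_deg**2 + 0.01 * angle_deg  # metres

# Fit a low-order polynomial to the error-vs-angle relationship ...
coeffs = np.polyfit(angle_deg, depth_err, deg=3)
correction = np.polyval(coeffs, angle_deg)

# ... and subtract it from the ESSP depths to correct the edge beams.
residual = depth_err - correction
```

Because the fit is computed once from calibration data and then evaluated per beam, the correction adds almost nothing to the per-ping cost, consistent with the abstract's claim of efficiency comparable to the original ESSP method.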
Funding: Supported by the National Natural Science Foundation of China (62371225, 62371227).
Abstract: Linear minimum mean square error (MMSE) detection has been shown to achieve near-optimal performance for massive multiple-input multiple-output (MIMO) systems but inevitably involves a complicated matrix inversion, which entails high complexity. To avoid the exact matrix inversion, a considerable number of detection methods based on implicit and explicit approximate matrix inversion have been proposed. By combining the advantages of both explicit and implicit matrix inversion, this paper introduces a new low-complexity signal detection algorithm. First, the relationship between implicit and explicit techniques is analyzed. Then, an enhanced Newton iteration method is introduced to realize approximate MMSE detection for massive MIMO uplink systems. The proposed improved Newton iteration significantly reduces the complexity of the conventional Newton iteration. However, its complexity remains high for later iterations, so it is applied only for the first two. For subsequent iterations, we propose a novel trace iterative method (TIM) based low-complexity algorithm, which has significantly lower complexity than higher Newton iterations. Convergence guarantees of the proposed detector are also provided. Numerical simulations verify that the proposed detector exhibits significant performance enhancement over recently reported iterative detectors and achieves close-to-MMSE performance while retaining its low-complexity advantage for systems with hundreds of antennas.
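The conventional Newton iteration the abstract builds on can be sketched as follows: for the MMSE filtering matrix, start from the inverse of its diagonal and refine with the quadratically convergent update X ← X(2I − AX). This is a generic sketch of the baseline iteration, not the paper's enhanced variant or the TIM; the dimensions and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the MMSE filtering matrix A = H^H H + sigma^2 I, which is
# Hermitian positive definite and diagonally dominant when the base
# station has many more antennas than there are users.
n_users, n_ant = 8, 64
H = (rng.normal(size=(n_ant, n_users))
     + 1j * rng.normal(size=(n_ant, n_users))) / np.sqrt(2 * n_ant)
A = H.conj().T @ H + 0.01 * np.eye(n_users)

# Initial guess: inverse of the diagonal, a standard choice for
# Newton-type iterations in MIMO detection.
X = np.diag(1.0 / np.diag(A).real).astype(complex)

# Newton iteration X_{k+1} = X_k (2I - A X_k); each step roughly doubles
# the number of accurate digits once it is in the convergence region.
for _ in range(6):
    X = X @ (2 * np.eye(n_users) - A @ X)

err = np.linalg.norm(X @ A - np.eye(n_users))
```

Each iteration costs two matrix-matrix products, which is why the abstract applies the (improved) Newton step only twice before switching to the cheaper trace iterative method.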
Abstract: In this paper, we propose a nonconforming virtual element method (NCVEM) discretization for the pointwise-control-constrained optimal control problem governed by elliptic equations. Based on the NCVEM approximation of the state equation and the variational discretization of the control variable, we construct a virtual element discrete scheme. For the state, adjoint state, and control variable, we obtain the corresponding a priori estimates in the H<sup>1</sup> and L<sup>2</sup> norms. Finally, some numerical experiments are carried out to support the theoretical results.
Abstract: The efficacy of error correction and of the various correction approaches is one of the key issues in second-language writing faced by both teachers and researchers. This paper reviews the definition of error correction and examines the different views on whether errors in L2 writing should be corrected. In particular, it discusses and analyses three common correction methods: direct correction, peer feedback, and indirect correction. Teachers are encouraged to weigh the advantages and disadvantages of these methods against the current literature, employ the most beneficial error correction method in L2 writing, and adapt it to their teaching context.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 50775190 and 51275425), the Spring Sunshine Plan of the Ministry of Education of China (Grant No. 10202258), and the Talent Introduction Program of Xihua University, China (Grant No. Z1220217).
Abstract: Machining and misalignment errors play a critical role in the performance of the anti-backlash double-roller enveloping hourglass worm gear (ADEHWG), yet an efficient method for eliminating or reducing the effect of these errors on the tooth profile of the ADEHWG is seldom reported. The gear engagement equation and tooth profile equation accounting for six different errors that can arise from machining and gear misalignment are derived from differential geometry and gear meshing theory. Tooth contact analysis (TCA) is then used to systematically investigate the influence of the machining and misalignment errors on the contact curves and the tooth profile by means of numerical analysis and three-dimensional solid modeling. The results show that the vertical angular misalignment of the worm wheel (Δβ) has the strongest influence, and the tooth angle error (Δα) the weakest, on the contact curves and the tooth profile. A novel efficient approach is proposed to minimize the effect of the errors in manufacturing by changing the radius of the grinding wheel and the approaching point of contact. The results from the TCA and the experiment demonstrate that this tooth-profile modification method can indeed reduce the machining and misalignment errors. The method is helpful in understanding the manufacturing technology of the ADEHWG.
Funding: Supported by the National Natural Science Foundation of China (10601022), the Natural Science Foundation of Inner Mongolia Autonomous Region (200607010106), and the Youth Science Foundation of Inner Mongolia University (ND0702).
Abstract: An H^1-Galerkin mixed finite element method is discussed for a class of second-order Schrödinger equations. Optimal error estimates of semidiscrete schemes are derived for problems in one space dimension, and optimal error estimates are also derived for fully discrete schemes. It is shown that the H^1-Galerkin mixed finite element approximations have the same rate of convergence as the classical mixed finite element methods, without requiring the LBB consistency condition.
Abstract: The efficiency of an optimization method for acoustic emission/microseismic (AE/MS) source location is determined by the compatibility of its error definition with the errors contained in the input data. This compatibility can be examined in terms of the distribution of station residuals. In an ideal distribution, an input error is held at the station where it occurs as that station's residual, and the error is not permitted to spread to other stations. A comparison of two optimization methods, the least squares method and the absolute value method, shows that a distribution with this character constrains the input errors and minimizes their impact, which explains the much more robust performance of the absolute value method in dealing with large, isolated input errors. When the errors in the input data are systematic and/or so extreme that the basic data structure is altered, no optimization method can function. The only way to resolve this problem is the early detection and correction of these errors through data screening, which is therefore of primary importance for AE/MS source location: in addition to its critical role in dealing with systematic and extreme errors, data screening creates a favorable environment for applying optimization methods.
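The contrast the abstract draws can be made concrete with the simplest possible location problem, estimating a single parameter from arrival data with one gross error. The L2 (least squares) minimizer is the mean, which spreads the outlier's influence over the estimate; the L1 (absolute value) minimizer is the median, which confines it. The numbers are illustrative only.

```python
import numpy as np

# Five consistent observations and one gross, isolated input error.
arrivals = np.array([10.0, 10.1, 9.9, 10.05, 9.95, 25.0])

# L2 criterion: the mean minimises the sum of squared residuals and is
# pulled strongly toward the outlier.
l2_estimate = arrivals.mean()

# L1 criterion: the median minimises the sum of absolute residuals and
# stays near the consistent data, leaving a single large residual at
# the offending "station".
l1_estimate = np.median(arrivals)
```

The residual pattern mirrors the abstract's point: under L1 the outlier keeps almost the entire error as its own residual, while under L2 every observation inherits part of it.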
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51435006).
Abstract: Although there are multi-sensor methods for measuring the straightness and tilt errors of a linear slideway, they need improvement in aspects such as suppressing measurement noise and reducing preconditions. In this paper, a new four-sensor method with an improved measurement system is proposed to separate, on-machine, the straightness and tilt errors of a linear slideway from the sensor outputs, considering the influences of the reference surface profile and the zero-adjustment values. The improved system is achieved by adjusting a single sensor to different positions. Based on this system, a system of linear equations is built by fusing the sensor outputs to cancel out the effects of the straightness and tilt errors. Three constraints are then derived and added to the linear system to make the coefficient matrix full rank. To restrain the sensitivity of the solution to measurement noise in the sensor outputs, the Tikhonov regularization method is utilized. After the surface profile is obtained from the solution, the straightness and tilt errors are identified from the sensor outputs. To analyze the effects of the measurement noise and of the positioning errors of the sensor and the linear slideway, a series of computer simulations is carried out, and a validation experiment shows good consistency. The new four-sensor method with the improved measurement system provides a new way to measure the straightness and tilt errors of a linear slideway that guarantees favorable propagation of the residuals induced by noise and positioning errors.
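The role Tikhonov regularization plays here, damping the noise-amplifying directions of an ill-conditioned linear system, can be sketched generically. The random ill-conditioned matrix below is a stand-in for the fused sensor-output system, which is not reproduced in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ill-conditioned linear system A x = b (a generic stand-in for the
# fused four-sensor system of equations).
n = 20
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -8, n)                    # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
x_true = rng.normal(size=n)
b = A @ x_true + 1e-6 * rng.normal(size=n)   # measurement noise

# A naive solve amplifies the noise through the tiny singular values.
x_naive = np.linalg.solve(A, b)

# Tikhonov: minimise ||A x - b||^2 + lam ||x||^2, i.e. solve
# (A^T A + lam I) x = A^T b, damping the noisy directions.
lam = 1e-8
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
```

The choice of `lam` trades bias against noise suppression; in practice it is tuned to the noise level (e.g. by the L-curve or discrepancy principle).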
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51075198), the Jiangsu Provincial Natural Science Foundation of China (Grant No. BK2010479), the Innovation Research Fund of Nanjing Institute of Technology, China (Grant No. CKJ20100008), the Jiangsu Provincial Foundation of the 333 Talents Engineering of China, and the Jiangsu Provincial Foundation of the Six Talented Peaks of China.
Abstract: Straightness error is an important parameter in measuring high-precision shafts. The new-generation geometrical product specification (GPS) requires that the measurement uncertainty characterizing the reliability of the result be given together with the measurement result itself. Most research on straightness focuses on error calculation, and only a few projects evaluate the measurement uncertainty based on the Guide to the Expression of Uncertainty in Measurement (GUM). To compute the spatial straightness error (SSE) accurately and rapidly and to overcome the limitations of the GUM, a quasi particle swarm optimization (QPSO) is proposed to solve the minimum-zone SSE, and the Monte Carlo method (MCM) is developed to estimate the measurement uncertainty. The mathematical model of the minimum-zone SSE is formulated. In QPSO, quasi-random sequences are applied to the generation of the initial positions and velocities of the particles, and the velocities are modified by the constriction-factor approach. A flow of measurement uncertainty evaluation based on MCM is proposed, the heart of which is repeatedly sampling from the probability density function (PDF) of every input quantity and evaluating the model in each case. The minimum-zone SSE of a shaft measured on a coordinate measuring machine (CMM) is calculated by QPSO, and the measurement uncertainty is evaluated by MCM after analyzing the uncertainty contributors. The results show that the uncertainty directly influences the product judgment: it is therefore scientific and reasonable to consider the uncertainty when deciding whether parts are accepted or rejected, especially those located in the uncertainty zone. The proposed method is especially suitable when the PDF of the measurand cannot adequately be approximated by a Gaussian distribution or a scaled and shifted t-distribution, or when the measurement model is non-linear.
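The MCM loop the abstract describes, sample every input quantity from its PDF, evaluate the measurement model for each draw, and summarise the output distribution, can be sketched with a toy model. The non-linear two-input function below is a placeholder, not the minimum-zone SSE model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo uncertainty evaluation: draw every input quantity from its
# PDF and push each draw through the measurement model.
M = 100_000
x1 = rng.normal(10.0, 0.1, size=M)     # input 1: Gaussian PDF
x2 = rng.uniform(0.9, 1.1, size=M)     # input 2: rectangular PDF

y = x1 * x2**2                          # toy non-linear measurement model

y_mean = y.mean()
u_y = y.std(ddof=1)                     # standard uncertainty of the output
# A 95 % coverage interval taken directly from the empirical distribution,
# with no Gaussian assumption on the measurand.
lo, hi = np.percentile(y, [2.5, 97.5])
```

Because the coverage interval comes from the empirical output distribution rather than a Gaussian approximation, this is exactly the regime the abstract highlights: non-linear models and non-Gaussian measurands where the GUM's linearized propagation is unreliable.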
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 40575036 and 40325015). Acknowledgement: The authors thank Drs. Zhang Pei-Qun and Bao Ming for their valuable comments on the present paper.
Abstract: In this paper, an analogue correction method of errors (ACE) based on a complicated atmospheric model is further developed and applied to numerical weather prediction (NWP). The analysis shows that the ACE can effectively reduce model errors by combining the statistical analogue method with the dynamical model, so that the information contained in a wealth of historical data is utilized in the current complicated NWP model. Furthermore, in the ACE, the differences in similarity between the historical analogues and the current initial state are used as the weights for estimating model errors. Daily, ten-day, and monthly prediction experiments on a complicated T63 atmospheric model show that correcting model errors based on the estimated errors of four historical analogue predictions performs better than both the scheme that introduces the error correction of each single analogue prediction separately and the uncorrected T63 model.
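The weighting idea, combining the known errors of several historical analogue predictions with weights derived from how similar each historical initial state is to the current one, can be sketched abstractly. All states, errors, and the inverse-distance similarity measure below are illustrative assumptions, not the T63 configuration.

```python
import numpy as np

# Current initial state and a few historical initial states (toy vectors).
current_state = np.array([1.0, 0.5, -0.3])
historical_states = np.array([
    [0.9, 0.6, -0.2],
    [1.2, 0.4, -0.5],
    [0.2, -0.8, 0.9],    # a poor analogue; should receive little weight
    [1.0, 0.5, -0.25],
])
historical_errors = np.array([0.11, 0.14, 0.60, 0.10])  # known past model errors

# Similarity measured here by inverse Euclidean distance of initial states
# (an illustrative choice of similarity metric).
dist = np.linalg.norm(historical_states - current_state, axis=1)
weights = 1.0 / (dist + 1e-6)
weights /= weights.sum()

# Similarity-weighted estimate of the current model error, used as the
# correction applied to the dynamical forecast.
estimated_error = weights @ historical_errors
```

The closest analogue dominates the estimate while the dissimilar one is nearly ignored, which is the mechanism by which weighting several analogues outperforms relying on any single one.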
Abstract: For various reasons, inspection methods often need to be changed and detection reagents replaced. In this study, a comparative experiment was conducted between the ethanol-based and ether-based methods for determining the oil content of imported wool. The determination results from the two methods were treated as abscissa and ordinate, respectively, and their linear relationship was analyzed. From the linear regression analysis, a conversion equation between the results of the two methods was obtained. In addition, the repeatability admissible error and the reproducibility admissible error were established by analyzing the comparative experimental results with scientific software. This study brings new ideas for further research in this field and provides a reference for solving similar problems in routine inspection work.
Funding: Projects (52075552, 51575533, 51805555, 11662004) supported by the National Natural Science Foundation of China.
Abstract: In this work, synchronous cutting of concave and convex surfaces was achieved using the duplex helical method for the hypoid gear, and the problem of tooth surface error correction was studied. First, the mathematical model of hypoid gears machined by the duplex helical method was established. Second, the coordinates of discrete points on the tooth surface were obtained on a measuring center, and the normal errors of the discrete points were calculated. Third, a tooth surface error correction model was established, and the tooth surface error was corrected using both the Levenberg-Marquardt algorithm with a trust region strategy and the least squares method. Finally, grinding experiments were carried out with the machining parameters obtained by the Levenberg-Marquardt algorithm with the trust region strategy, which corrected the tooth surface error better than the least squares method. After correction, the maximum absolute error was reduced from 30.9 μm to 6.8 μm, the root mean square of the concave-surface error from 15.1 to 2.1 μm, the root mean square of the convex-surface error from 10.8 to 1.8 μm, and the sum of squared errors of the concave and convex surfaces from 15471 to 358 μm^2. This verifies that the Levenberg-Marquardt algorithm with a trust region strategy achieves good accuracy for the tooth surface error correction of hypoid gears machined by the duplex helical method.
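The damped-step structure of a Levenberg-Marquardt correction loop can be sketched on a surrogate problem: adjust parameters so that predicted normal errors at discrete surface points match the measured ones. The residual model below is a toy function, not the actual hypoid-gear geometry, and the damping schedule is a standard textbook choice rather than the paper's trust-region implementation.

```python
import numpy as np

# Toy setup: 30 discrete "surface" points and two machining parameters.
u = np.linspace(0.0, 1.0, 30)                 # surface grid parameter
p_true = np.array([0.8, -0.3])                # parameters to recover
measured = p_true[0] * np.sin(2.0 * u) + p_true[1] * np.exp(u)

def residuals(p):
    return p[0] * np.sin(2.0 * u) + p[1] * np.exp(u) - measured

def jacobian(p):
    # Constant here because the toy model is linear in its parameters.
    return np.column_stack([np.sin(2.0 * u), np.exp(u)])

p = np.zeros(2)
lam = 1e-3                                    # LM damping parameter
for _ in range(50):
    r = residuals(p)
    J = jacobian(p)
    # Damped normal equations: (J^T J + lam I) dp = -J^T r.
    dp = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if np.sum(residuals(p + dp) ** 2) < np.sum(r ** 2):
        p, lam = p + dp, lam * 0.5            # accept step, relax damping
    else:
        lam *= 2.0                            # reject step, increase damping

rms_after = np.sqrt(np.mean(residuals(p) ** 2))
```

The damping parameter interpolates between gradient descent (large `lam`, robust far from the solution) and Gauss-Newton (small `lam`, fast near it), which is the behavior trust-region strategies control more systematically.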
Abstract: This paper deals with a posteriori error estimates for piecewise linear finite element approximations of parabolic problems in two space dimensions. The analysis extends previous results for elliptic problems to the parabolic context.
Abstract: The subject of this work is to propose adaptive finite element methods based on an optimal maximum-norm error control estimate. Using estimators of the local regularity of the unknown exact solution, derived from computed approximate solutions, the proposed procedures are analyzed in detail for a non-trivial class of corner problems and shown to be efficient in the sense that they generate the correct type of refinement and lead to the desired error control.