In existing landslide susceptibility prediction (LSP) models, the influences of random errors in landslide conditioning factors on LSP are not considered; instead, the original conditioning factors are directly taken as the model inputs, which brings uncertainties to the LSP results. This study aims to reveal how random errors of different proportions in the conditioning factors influence LSP uncertainties, and further to explore a method that can effectively reduce the random errors in the conditioning factors. The original conditioning factors are first used to construct original factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Secondly, low-pass filter-based LSP models are constructed by eliminating the random errors with a low-pass filter. Thirdly, Ruijin County of China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e. multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as the LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) The low-pass filter can effectively reduce the random errors in the conditioning factors and thereby decrease the LSP uncertainties. (2) As the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously. (3) The original factors-based models are feasible for LSP in the absence of more accurate conditioning factors. (4) The two sources of uncertainty, the choice of machine learning model and the proportion of random errors, influence LSP modeling to a large and roughly equal degree. (5) Shapley values effectively explain the internal mechanism by which a machine learning model predicts landslide susceptibility. In conclusion, a greater proportion of random errors in the conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
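To make the error-injection and filtering steps above concrete, here is a minimal Python sketch. It assumes a one-dimensional conditioning-factor profile and a fourth-order Butterworth low-pass filter from SciPy; the abstract does not specify the filter design, so the cutoff frequency and the proportional-noise model are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(42)

# Hypothetical 1-D profile of a conditioning factor (e.g., elevation along a transect).
n = 512
x = np.linspace(0, 10, n)
factor = np.sin(x) + 0.3 * np.sin(3 * x)

# Add proportional random errors (5%-20% of the local factor magnitude).
for prop in (0.05, 0.10, 0.15, 0.20):
    noisy = factor + prop * np.abs(factor) * rng.standard_normal(n)

    # 4th-order Butterworth low-pass filter; the cutoff Wn is a free choice here.
    b, a = butter(N=4, Wn=0.1)          # Wn is the normalized cutoff frequency
    denoised = filtfilt(b, a, noisy)    # zero-phase filtering avoids signal lag

    rmse_noisy = np.sqrt(np.mean((noisy - factor) ** 2))
    rmse_filt = np.sqrt(np.mean((denoised - factor) ** 2))
    print(f"{prop:.0%} noise: RMSE {rmse_noisy:.4f} -> {rmse_filt:.4f}")
```

Larger noise proportions leave a larger residual after filtering, mirroring the paper's finding that LSP uncertainty grows with the error proportion.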
Readout errors caused by measurement noise are a significant source of errors in quantum circuits; they severely affect the output results and are an urgent problem to be solved in noisy intermediate-scale quantum (NISQ) computing. In this paper, we use the bit-flip averaging (BFA) method to mitigate readout errors in quantum generative adversarial networks (QGAN) for image generation. The method simplifies the structure of the response matrix by averaging over random bit-flips applied to each qubit in advance, avoiding the high measurement cost of traditional error mitigation methods. Our experiments were simulated in Qiskit on the handwritten digit image recognition dataset. Under the BFA-based method, the Kullback-Leibler (KL) divergence of the generated images converges to 0.04, 0.05, and 0.1 for readout error probabilities of p=0.01, p=0.05, and p=0.1, respectively. Additionally, by evaluating the fidelity of the quantum states representing the images, we observe average fidelity values of 0.97, 0.96, and 0.95 for the three readout error probabilities, respectively. These results demonstrate the robustness of the model in mitigating readout errors and provide a highly fault-tolerant mechanism for image generation models.
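A small sketch of the two evaluation metrics quoted above, KL divergence and state fidelity, assuming the image distributions are given as discrete probability vectors and the states as normalized amplitude vectors; the numbers are synthetic stand-ins, not the paper's Qiskit results.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete probability distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def state_fidelity(psi, phi):
    """|<psi|phi>|^2 for normalized pure-state amplitude vectors."""
    return float(np.abs(np.vdot(psi, phi)) ** 2)

rng = np.random.default_rng(0)

# Synthetic stand-ins for a target image distribution and a QGAN output.
target = rng.random(64)
generated = target + 0.05 * rng.random(64)
print(f"KL divergence: {kl_divergence(target, generated):.4f}")

# Synthetic stand-ins for the ideal and the readout-error-affected state.
psi = rng.standard_normal(8) + 1j * rng.standard_normal(8)
psi /= np.linalg.norm(psi)
phi = psi + 0.1 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
phi /= np.linalg.norm(phi)
print(f"fidelity: {state_fidelity(psi, phi):.4f}")
```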
In this paper, numerical experiments are carried out to investigate the impact of penalty parameters in the numerical traces on the resonance errors of high-order multiscale discontinuous Galerkin (DG) methods (Dong et al. in J Sci Comput 66:321–345, 2016; Dong and Wang in J Comput Appl Math 380:1–11, 2020) for a one-dimensional stationary Schrödinger equation. Previous work showed that penalty parameters were required to be positive in the error analysis, yet the methods with zero penalty parameters worked fine in numerical simulations on coarse meshes. In this work, by performing extensive numerical experiments, we discover that zero penalty parameters lead to resonance errors in the multiscale DG methods, and that taking positive penalty parameters can effectively reduce the resonance errors and give the matrix in the global linear system better condition numbers.
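The conditioning claim at the end can be illustrated generically: adding a positive penalty-like term to a nearly singular system matrix lowers its condition number. The matrix below is a simple finite-difference stand-in, not the actual multiscale DG matrix, and the scaling of the penalty term is an assumption for illustration only.

```python
import numpy as np

# A 1-D Laplacian with Neumann-like ends is singular (constant null vector),
# standing in for a global system that a zero penalty leaves ill-conditioned.
n = 50
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[0, 0] = K[-1, -1] = 1.0

# Adding tau * I mimics the stabilizing effect of a positive penalty parameter.
for tau in (0.0, 1e-3, 1e-1):
    print(f"tau = {tau:g}: cond = {np.linalg.cond(K + tau * np.eye(n)):.3e}")
```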
Objective To explore the differences among three registration methods in cone beam computed tomography (CBCT)-guided intensity-modulated radiation therapy for lung cancer, as well as the effects of tumor location, treatment mode, and tumor size on registration. Methods This retrospective analysis included 80 lung cancer patients undergoing radiotherapy in our hospital from November 2017 to October 2019 and compared the positioning errors on the X-, Y-, and Z-axes under three registration methods: automatic bone registration, automatic grayscale (t+r) registration, and automatic grayscale (t) registration. The patients were also grouped according to tumor position, treatment mode, and tumor size to compare positioning errors. Results On the X-, Y-, and Z-axes, automatic grayscale (t+r) and automatic grayscale (t) registration showed a better trend. Analysis of the different treatment modes showed differences among the three registration methods; however, these were not statistically significant. Analysis according to tumor size showed significant differences among the three registration methods (P<0.05). Analysis according to tumor position showed differences on the X- and Y-axes that were not significant (P>0.05), while on the Z-axis the registration difference was largest for the mediastinal and hilar lymph nodes (P<0.05). Conclusion The treatment mode was not the main factor affecting registration error in lung cancer. Three registration methods are available for tumors in the upper and lower lungs measuring <3 cm; among these, automatic grayscale registration is recommended, while any grayscale registration method is recommended for tumors located at the mediastinal hilar site measuring <3 cm and for tumors in the upper and lower lungs measuring ≥3 cm.
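For the kind of three-way comparison of positioning errors reported above, a repeated-measures test such as the Friedman test is one common choice. The sketch below uses synthetic error values, not the study's data, and the choice of test is an assumption, since the abstract does not name the statistic used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical positioning errors (mm) on one axis for the same 80 patients
# under the three registration methods (synthetic numbers, not study data).
bone = rng.normal(2.0, 0.6, 80)
gray_tr = rng.normal(1.6, 0.5, 80)
gray_t = rng.normal(1.7, 0.5, 80)

# Repeated-measures comparison across the three methods; the Friedman test
# avoids assuming normality of the positioning errors.
stat, p = stats.friedmanchisquare(bone, gray_tr, gray_t)
print(f"Friedman chi-square = {stat:.2f}, P = {p:.4f}")  # P<0.05 -> methods differ
```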
Based on the characteristics of domestic sewage in rural districts, four sewage treatment methods were analyzed. The results showed that the optimum method was to treat the domestic sewage on the spot, and that this method should be popularized and applied in rural areas.
The efficacy of error correction and of various correction approaches is one of the key issues in second language writing faced by both teachers and researchers. The current paper reviews the definition of error correction and examines the different views on whether errors in L2 writing should be corrected. In particular, the paper discusses and analyses the three common correction methods: direct correction, peer feedback and indirect correction. Teachers are encouraged to weigh the advantages and disadvantages of these methods against the current literature, employ the most beneficial error correction method in L2 writing, and adapt it to their teaching context.
Usually, Chinese EFL students make satisfactory progress in reading, grammar and writing. However, it is very difficult for them to transmit messages and exchange thoughts in English. I conducted a questionnaire survey at Inner Mongolia Teachers' University and found that when the students come to speak English, they tend to appear frustrated and to lack confidence for fear of making errors, which may be closely related to the teachers' attitudes towards students' errors. Therefore, it is essential for English teachers to adopt proper attitudes to errors in English teaching. The paper discusses some important strategies on whether, when and how speaking errors should be corrected, based on other researchers' views and on a small-scale experiment, for the purpose of helping teachers deal with students' errors effectively in communication.
Machining and misalignment errors play a critical role in the performance of the anti-backlash double-roller enveloping hourglass worm gear (ADEHWG); however, an efficient method for eliminating or reducing these errors on the tooth profile of the ADEHWG is seldom reported. The gear engagement equation and the tooth profile equation accounting for six different errors that can arise from machining and gear misalignment are derived from the theories of differential geometry and gear meshing. Tooth contact analysis (TCA) is then used to systematically investigate the influence of the machining and misalignment errors on the contact curves and the tooth profile by means of numerical analysis and three-dimensional solid modeling. The results show that the vertical angular misalignment of the worm wheel (Δβ) has the strongest influence, while the tooth angle error (Δα) has the weakest influence, on the contact curves and the tooth profile. A novel, efficient approach is proposed and used to minimize the effect of the errors in manufacturing by changing the radius of the grinding wheel and the approaching point of contact. The results from the TCA and the experiment demonstrate that this tooth profile design modification method can indeed reduce the machining and misalignment errors. This modification design method is helpful for understanding the manufacturing technology of the ADEHWG.
An H^1-Galerkin mixed finite element method is discussed for a class of second-order Schrödinger equations. Optimal error estimates of semidiscrete schemes are derived for problems in one space dimension, and optimal error estimates are also derived for fully discrete schemes. It is shown that the H^1-Galerkin mixed finite element approximations have the same rate of convergence as the classical mixed finite element methods, without requiring the LBB consistency condition.
The efficiency of an optimization method for acoustic emission/microseismic (AE/MS) source location is determined by the compatibility of its error definition with the errors contained in the input data. This compatibility can be examined in terms of the distribution of station residuals. In an ideal distribution, an input error is held at the station where it occurs as that station's residual, and the error is not permitted to spread to other stations. A comparison study of two optimization methods, namely the least squares method and the absolute value method, shows that a distribution with this character constrains the input errors and minimizes their impact, which explains the much more robust performance of the absolute value method in dealing with large and isolated input errors. When the errors in the input data are systematic and/or extreme, such that the basic data structure is altered by them, neither optimization method is able to function. The only means of resolving this problem is the early detection and correction of these errors through a data screening process. An efficient data screening process is therefore of primary importance for AE/MS source location: in addition to its critical role in dealing with systematic and extreme errors, data screening creates a favorable environment for applying optimization methods.
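The robustness contrast between the two misfit definitions can be reproduced with a toy two-dimensional location problem: one station's arrival time carries a large isolated error, and the absolute-value (L1) misfit is typically far less distorted by it than the least-squares (L2) misfit. Everything below (station geometry, wave speed, error size) is an illustrative assumption, not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy 2-D source-location problem: stations record arrival times from a
# source at (3, 4) with unit wave speed; one station carries a gross error.
stations = rng.uniform(0, 10, size=(8, 2))
true_src = np.array([3.0, 4.0])
times = np.linalg.norm(stations - true_src, axis=1)  # v = 1, origin time = 0
times[0] += 5.0                                      # large isolated input error

def residuals(src):
    return np.linalg.norm(stations - src, axis=1) - times

# Least squares (L2) misfit vs. absolute value (L1) misfit.
l2 = minimize(lambda s: np.sum(residuals(s) ** 2), x0=[5, 5], method="Nelder-Mead")
l1 = minimize(lambda s: np.sum(np.abs(residuals(s))), x0=[5, 5], method="Nelder-Mead")

print("L2 estimate:", np.round(l2.x, 2))  # usually pulled away by the outlier
print("L1 estimate:", np.round(l1.x, 2))  # usually stays near (3, 4)
```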
Although there are some multi-sensor methods for measuring the straightness and tilt errors of a linear slideway, they need to be further improved in some aspects, such as suppressing measurement noise and reducing preconditions. In this paper, a new four-sensor method with an improved measurement system is proposed to separate, on-machine, the straightness and tilt errors of a linear slideway from the sensor outputs, considering the influences of the reference surface profile and the zero-adjustment values. The improved system is achieved by adjusting a single sensor to different positions. Based on this system, a system of linear equations is built by fusing the sensor outputs to cancel out the effects of the straightness and tilt errors. Three constraints are then derived and added to the linear system to make the coefficient matrix full rank. To restrain the sensitivity of the solution of the linear system to the measurement noise in the sensor outputs, the Tikhonov regularization method is utilized. After the surface profile is obtained from the solution, the straightness and tilt errors are identified from the sensor outputs. To analyze the effects of the measurement noise and the positioning errors of the sensor and the linear slideway, a series of computer simulations is carried out, and an experiment is conducted for validation, showing good consistency. The new four-sensor method with the improved measurement system provides a new way to measure the straightness and tilt errors of a linear slideway that guarantees favorable propagation of the residuals induced by the noise and the positioning errors.
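Tikhonov regularization as used above can be sketched in a few lines: the regularized normal equations trade a small bias for much lower sensitivity to noise when the coefficient matrix is ill-conditioned. The matrix below is a generic ill-conditioned example, not the paper's sensor-fusion system, and the regularization weight lam is an illustrative choice.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(2)

# Generic ill-conditioned system standing in for the fused sensor outputs.
A = np.vander(np.linspace(0, 1, 40), 8)          # nearly dependent columns
x_true = rng.standard_normal(8)
b = A @ x_true + 1e-3 * rng.standard_normal(40)  # measurement noise

# Compare the plain least-squares solution with the regularized one;
# regularization typically helps more as the conditioning worsens.
x_plain = np.linalg.lstsq(A, b, rcond=None)[0]
x_reg = tikhonov_solve(A, b, lam=1e-6)
print("plain recovery error:      ", np.linalg.norm(x_plain - x_true))
print("regularized recovery error:", np.linalg.norm(x_reg - x_true))
```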
Straightness error is an important parameter in measuring high-precision shafts. The new-generation geometrical product specification (GPS) requires that, when a measurement result is given, the measurement uncertainty characterizing the reliability of the result be given with it. Most research on straightness focuses on error calculation, and only a few projects evaluate the measurement uncertainty based on the Guide to the Expression of Uncertainty in Measurement (GUM). In order to compute the spatial straightness error (SSE) accurately and rapidly and to overcome the limitations of the GUM, a quasi particle swarm optimization (QPSO) is proposed to solve the minimum zone SSE, and the Monte Carlo method (MCM) is developed to estimate the measurement uncertainty. The mathematical model of the minimum zone SSE is formulated. In QPSO, quasi-random sequences are applied to the generation of the initial positions and velocities of the particles, and the velocities are modified by the constriction factor approach. The flow of measurement uncertainty evaluation based on the MCM is proposed, the core of which is repeatedly sampling from the probability density function (PDF) of every input quantity and evaluating the model in each case. The minimum zone SSE of a shaft measured on a coordinate measuring machine (CMM) is calculated by QPSO, and the measurement uncertainty is evaluated by the MCM on the basis of an analysis of the uncertainty contributors. The results show that the uncertainty directly influences the product judgment result. It is therefore scientific and reasonable to consider the influence of the uncertainty when judging whether parts are accepted or rejected, especially for parts located in the uncertainty zone. The proposed method is especially suitable when the PDF of the measurand cannot adequately be approximated by a Gaussian distribution or a scaled and shifted t-distribution and the measurement model is non-linear.
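The MCM loop described above is simple to sketch: draw every input quantity from its assumed PDF, push each draw through the measurement model, and read the estimate, standard uncertainty, and coverage interval off the output sample. The model and input distributions below are illustrative assumptions, not the shaft-measurement model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal Monte Carlo uncertainty evaluation: repeatedly sample every input
# quantity from its assumed PDF and evaluate the measurement model each time.
M = 200_000

# Hypothetical model: a derived length L = sqrt(x^2 + y^2) with noisy inputs.
x = rng.normal(30.0, 0.02, M)     # assumed Gaussian input, mm
y = rng.uniform(39.97, 40.03, M)  # assumed rectangular (uniform) input, mm
L = np.hypot(x, y)                # non-linear measurement model

est = L.mean()
u = L.std(ddof=1)                        # standard uncertainty
lo, hi = np.percentile(L, [2.5, 97.5])   # 95% coverage interval
print(f"L = {est:.4f} mm, u = {u:.4f} mm, 95% interval [{lo:.4f}, {hi:.4f}]")
```

Because the output distribution is built empirically, no Gaussian or t-distribution approximation of the measurand is needed, which is exactly the regime the abstract highlights.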
In this paper, an analogue correction method of errors (ACE) based on a complicated atmospheric model is further developed and applied to numerical weather prediction (NWP). The analysis shows that the ACE can effectively reduce model errors by combining the statistical analogue method with the dynamical model, so that the information contained in a wealth of historical data is utilized in the current complicated NWP model. Furthermore, in the ACE, the differing similarities between the historical analogues and the current initial state are used as the weights for estimating model errors. The results of daily, ten-day and monthly prediction experiments on a complicated T63 atmospheric model show that the performance of the ACE, which corrects model errors based on the estimated errors of four historical analogue predictions, is better both than that of a scheme introducing the error correction of each single analogue prediction separately and than that of the uncorrected T63 model.
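The weighting idea can be sketched as follows: estimate the current model error as a similarity-weighted mean of the known errors of the historical analogue predictions. The inverse-distance weighting scheme below is an assumption for illustration; the paper derives its weights from the analogue similarities, whose exact form is not given in the abstract.

```python
import numpy as np

def analogue_error_correction(x0, hist_states, hist_errors):
    """
    Estimate the current model error as a similarity-weighted mean of the
    errors of historical analogue predictions (a sketch of the ACE idea).
    """
    # Similarity taken as inverse distance between the current initial state
    # and each historical analogue state (weighting scheme assumed).
    d = np.linalg.norm(hist_states - x0, axis=1)
    w = 1.0 / (d + 1e-12)
    w /= w.sum()
    return w @ hist_errors

rng = np.random.default_rng(4)

# Hypothetical data: 4 historical analogues in a 5-variable state space.
x0 = rng.standard_normal(5)
hist_states = x0 + 0.1 * rng.standard_normal((4, 5))
hist_errors = rng.standard_normal((4, 5))  # known errors of past predictions

estimated_error = analogue_error_correction(x0, hist_states, hist_errors)
print(estimated_error)  # subtract this estimate from the raw model forecast
```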
For various reasons, inspection methods often need to be changed and detection reagents often need to be replaced. In this study, a comparative experiment was conducted between the ethanol-based and ether-based determination methods for the oil content of imported wool. The determination results obtained from the two methods were treated as abscissa and ordinate, respectively, and their linear relationship was analyzed. From the linear regression analysis, a conversion equation between the determination results of the two methods was obtained. In addition, the repeatability admissible error and the reproducibility admissible error were established by analyzing the comparative experimental results with statistical software. This study brings new ideas for further research in this field and provides a reference for solving similar problems in practical inspection work.
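A conversion equation of this kind comes directly from an ordinary least-squares fit on the paired determinations. The sketch below uses made-up paired oil-content values; the real coefficients would come from the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired oil-content determinations (%) on the same wool lots.
ether = np.array([0.52, 0.61, 0.75, 0.88, 1.02, 1.15, 1.31])    # ether-based method
ethanol = np.array([0.60, 0.70, 0.83, 0.99, 1.12, 1.27, 1.45])  # ethanol-based method

# Fit ethanol = slope * ether + intercept; the fit is the conversion equation.
res = stats.linregress(ether, ethanol)
print(f"ethanol = {res.slope:.3f} * ether + {res.intercept:.3f}, r = {res.rvalue:.4f}")

# Convert a new ethanol-based result back to the ether-based scale.
def to_ether_scale(ethanol_value):
    return (ethanol_value - res.intercept) / res.slope

print(f"{to_ether_scale(1.00):.3f}")
```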
A method to simulate processes of forging and subsequent heat treatment of an axially symmetric rod is formulated in the Eulerian description, and its feasibility is investigated. The method uses finite volume meshes for tracking material deformation and an automatically refined facet surface to accurately trace the free surface of the deforming material. In the method, the deforming workpiece flows through fixed finite volume meshes, with the Eulerian formulation describing the conservation laws. Fixed finite volume meshing is particularly suitable for large three-dimensional deformations such as forging because it requires no remeshing techniques, which are commonly considered the main bottleneck in simulations of large deformation by the finite element method. By means of this finite volume method, an approach has been developed in the framework of 'metallo-thermo-mechanics' to simulate the coupled metallic structure, temperature and stress/strain in the heat treatment process. In a first step of the simulation, the heat treatment solver is limited to the small-deformation hypothesis and is uncoupled from forging. The material is considered elastic-plastic, and the effects of strain, strain rate and temperature on the yield stress are taken into account. Heat generation due to deformation, heat conduction and thermal stress are considered. Temperature-dependent phase transformation, stress-induced phase transformation, latent heat, and transformation stress and strain are included. These approaches are implemented in the commercial computer program MSC/SuperForge, and a verification example is compared against experimental data.
In this work, synchronous cutting of the concave and convex surfaces of a hypoid gear was achieved using the duplex helical method, and the problem of tooth surface error correction was studied. First, the mathematical model of hypoid gears machined by the duplex helical method was established. Second, the coordinates of discrete points on the tooth surface were obtained with a measurement center, and the normal errors of the discrete points were calculated. Third, a tooth surface error correction model was established, and the tooth surface error was corrected using both the Levenberg-Marquardt algorithm with a trust region strategy and the least squares method. Finally, grinding experiments were carried out with the machining parameters obtained by the Levenberg-Marquardt algorithm with the trust region strategy, which corrected the tooth surface error better than the least squares method. After the tooth surface error was corrected, the maximum absolute error was reduced from 30.9 μm to 6.8 μm, the root mean square of the concave-surface error was reduced from 15.1 to 2.1 μm, the root mean square of the convex-surface error was reduced from 10.8 to 1.8 μm, and the sum of squared errors of the concave and convex surfaces was reduced from 15471 to 358 μm^2. It is verified that the Levenberg-Marquardt algorithm with a trust region strategy has good accuracy for correcting the tooth surface error of hypoid gears machined by the duplex helical method.
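The correction step above is a nonlinear least-squares problem: find machine-setting adjustments that drive the measured normal errors toward zero. SciPy's least_squares exposes both a classic Levenberg-Marquardt option (method='lm') and a trust-region solver (method='trf'); the error model below is a synthetic stand-in, not the duplex-helical machining model.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Synthetic stand-in for the correction problem: find parameter adjustments p
# that drive the normal errors e(p) at 30 measured grid points toward zero.
targets = rng.standard_normal(30) * 0.03  # "measured" normal errors, mm

def normal_errors(p):
    # Synthetic nonlinear response of the surface errors to three corrections.
    basis = np.stack([np.sin(np.arange(30) * k) for k in (0.1, 0.2, 0.3)])
    return targets - p @ basis - 0.01 * p[0] * p[1] * np.ones(30)

# Trust-region reflective solver; method='lm' would use classic Levenberg-Marquardt.
sol = least_squares(normal_errors, x0=np.zeros(3), method="trf")
print("corrections:", np.round(sol.x, 5))
print("sum of squared errors:", 2 * sol.cost)  # cost = 0.5 * sum(residuals^2)
```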
This paper deals with a-posteriori error estimates for piecewise linear finite element approximations of parabolic problems in two space dimensions. The analysis extends previous results for elliptic problems to the parabolic context.
The subject of this work is to propose adaptive finite element methods based on an optimal maximum norm error control estimate. Using estimators of the local regularity of the unknown exact solution derived from computed approximate solutions, the proposed procedures are analyzed in detail for a non-trivial class of corner problems and shown to be efficient in the sense that they generate the correct type of refinement and lead to the desired control under consideration.
For the purpose of providing references for further research on and practical application of the quality improvement of recycled coarse aggregate (RCA), this paper first classifies the various treatment methods into four categories: removing old mortar (OM), strengthening OM, multi-stage mixing methods, and combination methods. The improvement mechanisms and important conclusions of the various treatment methods are then elucidated and summarised. In the discussion, the improvement effects as well as the advantages and disadvantages of the various treatment methods are compared, and recommendations for the selection of treatment methods are proposed. Finally, further research directions are pointed out, and an integrative programme for the quality improvement of RCA is recommended.
In this paper, we investigate a streamline diffusion finite element approximation scheme for the constrained optimal control problem governed by linear convection-dominated diffusion equations. We prove the existence and uniqueness of the discretized scheme. Then a priori and a posteriori error estimates are derived for the state, the co-state and the control. Three numerical examples are presented to illustrate our theoretical results.