This paper proposes a steady-state error correction (SSEC) method for eliminating measurement errors. The method is based on detecting the error signal E(s) and the output C(s), which are used to generate the expected output R(s). In comparison with conventional solutions, which detect the expected output R(s) and the output C(s) to obtain the error signal E(s), measurement errors are eliminated even when the error is at a significant level. Moreover, individual debugging by regulating the coefficient K for every member of the multiple objectives makes it possible to optimize the open-loop gain. This simple method can therefore be applied to weakly coupled, multi-objective systems, which are usually controlled by complex controllers. The principle of eliminating measurement errors is derived analytically, and the advantages over conventional solutions are described. Based on the SSEC analysis, an application of the method to an active power filter (APF) is investigated, and the effectiveness and viability of the scheme are demonstrated through simulation and experimental verification. Funding: National Natural Science Foundation of China (No. 61273172).
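A sketch of the signal relationship described in this abstract (my reading of the summary, not the paper's exact notation): the conventional arrangement measures R(s) and C(s) and computes the error, whereas SSEC measures E(s) and C(s) and reconstructs the expected output.

```latex
\begin{align*}
  \text{conventional:} \quad E(s) &= R(s) - C(s) && \text{($R$, $C$ measured; $E$ computed)} \\
  \text{SSEC:}         \quad R(s) &= E(s) + C(s) && \text{($E$, $C$ measured; $R$ reconstructed)}
\end{align*}
```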
A servo control system is prone to low speed and unsteadiness during very-low-frequency follow-up. A design method for feedforward control based on an intelligent controller is put forward. Simulation and test results show that the method has excellent control characteristics and strong robustness, meeting the servo control requirements at very low frequencies. Funding: Foundation of the Ministry of Machine-Building Industry.
The majority of nonlinear stochastic systems in science and engineering can be expressed as quasi-Hamiltonian systems. Moreover, the corresponding Hamiltonian system offers the two concepts of integrability and resonance, which fully describe the global relationship among the degrees of freedom (DOFs) of the system. In this work, an effective and promising approximate semi-analytical method is proposed for the steady-state response of multi-dimensional quasi-Hamiltonian systems. Specifically, the trial solution of the reduced Fokker-Planck-Kolmogorov (FPK) equation is obtained using radial basis function (RBF) neural networks. The residual generated by substituting the trial solution into the reduced FPK equation is then considered, and a loss function is constructed by combining it with a random sampling technique. The unknown weight coefficients are optimized by minimizing the loss function through the Lagrange multiplier method. Moreover, an efficient sampling strategy is employed to facilitate the implementation of the algorithm. Finally, two numerical examples are studied in detail, and all semi-analytical solutions are compared with Monte Carlo simulation (MCS) results. The results indicate that the proposed scheme accurately captures the complex nonlinear dynamic features of the system response. Funding: National Natural Science Foundation of China (Grant No. 12072118); Natural Science Funds for Distinguished Young Scholar of Fujian Province (Grant No. 2021J06024); Project for Youth Innovation Fund of Xiamen (Grant No. 3502Z20206005).
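The pipeline summarized above (RBF trial solution, FPK residual at random collocation points, constrained weight fitting) can be illustrated on a toy one-dimensional system. This is a minimal sketch under assumed settings (Ornstein-Uhlenbeck dynamics, Gaussian RBFs with a fixed width, and a normalization penalty standing in for the Lagrange-multiplier step), not the paper's implementation:

```python
# Approximate the stationary PDF of dX = -X dt + sqrt(2D) dW by an RBF expansion
# p(x) ~= sum_k w_k * exp(-(x - c_k)^2 / (2*s^2)), fitting the weights by
# minimizing the squared residual of the reduced (stationary) FPK equation
#   d/dx[x p(x)] + D d2p/dx2 = 0
# at random collocation points, plus a normalization penalty.
import numpy as np

rng = np.random.default_rng(0)
D = 0.5                                  # noise intensity (assumed toy value)
centers = np.linspace(-4.0, 4.0, 25)     # RBF centers c_k
s = 0.4                                  # shared RBF width (assumption)

def basis(x):
    """Gaussian RBFs and their first/second derivatives at points x."""
    z = (x[:, None] - centers[None, :]) / s
    phi = np.exp(-0.5 * z**2)
    dphi = -z / s * phi
    d2phi = (z**2 - 1.0) / s**2 * phi
    return phi, dphi, d2phi

# Random collocation points (the "random sampling technique" of the abstract).
x = rng.uniform(-4.0, 4.0, 400)
phi, dphi, d2phi = basis(x)

# The FPK residual is linear in the weights: residual = A @ w, with
# A = d/dx[x*phi] + D*phi'' = phi + x*phi' + D*phi''.
A = phi + x[:, None] * dphi + D * d2phi

# Normalization constraint: the integral of p over a grid should be 1.
xg = np.linspace(-4.0, 4.0, 801)
phig, _, _ = basis(xg)
norm_row = np.trapz(phig, xg, axis=0)

# Penalized least squares for the weights (penalty replaces the Lagrange step).
lam = 10.0
M = np.vstack([A, lam * norm_row])
b = np.concatenate([np.zeros(len(x)), [lam]])
w, *_ = np.linalg.lstsq(M, b, rcond=None)

# Compare with the exact stationary PDF, a zero-mean Gaussian with variance D.
p_hat = phig @ w
p_true = np.exp(-xg**2 / (2 * D)) / np.sqrt(2 * np.pi * D)
print("max abs error:", np.max(np.abs(p_hat - p_true)))
```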
We propose an adaptive stencil construction for a posteriori stabilized, high-order accurate finite volume schemes devoted to solving one-dimensional steady-state hyperbolic equations. High accuracy (up to sixth order at present) is achieved thanks to polynomial reconstructions, while stability is provided by an a posteriori MOOD method that controls the cell polynomial degree to eliminate non-physical oscillations in the vicinity of discontinuities. We supplement this scheme with a stencil construction that reduces the numerical dissipation even further. The stencil is shifted away from troubled zones (shocks, discontinuities, etc.), leading to less oscillatory polynomial reconstructions. Experiments on linear, Burgers', and Euler equations demonstrate that the adaptive stencil technique retrieves smooth solutions with the optimal order of accuracy as well as irregular solutions without spurious oscillations. Moreover, we show numerically that the approach reduces the dissipation while maintaining the essentially non-oscillatory behavior. Funding: FEDER (Fundo Europeu de Desenvolvimento Regional) through COMPETE 2020 (Programa Operacional Fatores de Competitividade) and national funds through FCT (Fundação para a Ciência e a Tecnologia), projects UID/FIS/04650/2019 and POCI-01-0145-FEDER-028118.
AIM: To investigate the prevalence of visual impairment (VI) and provide an estimate of uncorrected refractive errors in school-aged children, conducted by optometry students as a community service. METHODS: The study was cross-sectional, and 3343 participants were included. The initial examination assessed uncorrected distance visual acuity (UDVA) and visual acuity (VA) with a +2.00 D lens. The inclusion criteria for a subsequent comprehensive cycloplegic eye examination, performed by an optometrist, were a UDVA < 0.6 decimal (0.20 logMAR) and/or a VA with +2.00 D ≥ 0.8 decimal (0.96 logMAR). RESULTS: The sample had a mean age of 10.92±2.13 years (range 4 to 17 years), and 51.3% of the children were female (n=1715). The majority of the children (89.7%) were aged 8 to 14 years. Among the ethnic groups, the highest representation was the Luhya group (60.6%), followed by Luo (20.4%). The mean logMAR UDVA, taking the best eye of each student, was 0.29±0.17 (range 1.70 to 0.22). In total, 246 participants (7.4%) had a full eye examination. The estimated prevalence of myopia (defined as spherical equivalent ≤ -0.5 D) was 1.45% of the total sample, while around 0.18% of the total sample had hyperopia exceeding +1.75 D. Refractive astigmatism (cylinder < -0.75 D) was found in 0.21% (7/3343) of the children. The prevalence of VI was 1.26% of the total sample, and 76.2% of the VI cases could be attributed to uncorrected refractive error. Amblyopia was detected in 0.66% (22/3343) of the screened children. No statistically significant correlation was observed between age or gender and refractive values. CONCLUSION: The primary cause of VI is uncorrected refractive error, with myopia being the most prevalent refractive error observed. These findings underscore the importance of early identification and correction of refractive errors in school-aged children as a means to reduce the impact of VI.
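The myopia criterion above relies on the spherical equivalent; its standard definition (not restated in the abstract) and a small worked example are:

```latex
\[
  \mathrm{SE} = S + \tfrac{C}{2}, \qquad
  \text{e.g.}\ S = -0.25\,\mathrm{D},\ C = -0.75\,\mathrm{D}
  \ \Rightarrow\ \mathrm{SE} = -0.25 - 0.375 = -0.625\,\mathrm{D} \le -0.5\,\mathrm{D}.
\]
```

Such an eye would therefore be counted as myopic under the study's definition.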
In a magnetohydrodynamic (MHD) driven fluid cell, a plane non-parallel flow in a square domain satisfying a free-slip boundary condition is examined. The energy dissipation of the flow is controlled by viscosity and linear friction, the latter arising from the influence of the Hartmann bottom boundary layer in a three-dimensional (3D) MHD experiment in a square-bottomed cell. The basic flow in this fluid system is a square eddy flow exhibiting a network of N² vortices rotating alternately in clockwise and anticlockwise directions. When N is odd, the instability of the flow gives rise to secondary steady-state flows and secondary time-periodic flows with characteristics similar to those observed when N=3. For this reason, this study focuses on the instability of the square eddy flow of nine vortices. It is shown that there exist eight bi-critical values corresponding to the existence of eight neutral eigenfunction spaces. In particular, there exist non-real neutral eigenfunctions, which produce secondary time-periodic flows exhibiting vortices merging in an oscillatory manner. This Hopf bifurcation phenomenon has not been observed in earlier investigations. Funding: National Natural Science Foundation of China (No. 11571240); Shenzhen Natural Science Fund (Stable Support Plan Program No. 20220805175116001).
In this paper, an efficient unequal error protection (UEP) scheme for online fountain codes is proposed. In the build-up phase, a traversing-selection strategy is proposed to select the most important symbols (MIS). Then, in the completion phase, a weighted-selection strategy is applied to provide low overhead. The performance of the proposed scheme is analyzed and compared with the existing UEP online fountain scheme. Simulation results show that, in terms of the MIS and the least important symbols (LIS), when the bit error ratio is 10⁻⁴, the proposed scheme achieves 85% and 31.58% overhead reduction, respectively. Funding: National Natural Science Foundation of China (61601147); Beijing Natural Science Foundation (L182032).
In existing landslide susceptibility prediction (LSP) models, the influence of random errors in landslide conditioning factors on LSP is not considered; instead, the original conditioning factors are directly taken as model inputs, which brings uncertainties to the LSP results. This study aims to reveal how different proportions of random error in the conditioning factors affect the LSP uncertainties, and further to explore a method that can effectively reduce the random errors in the conditioning factors. The original conditioning factors are first used to construct original factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Secondly, low-pass filter-based LSP models are constructed by eliminating the random errors with a low-pass filter method. Thirdly, Ruijin County, China, with 370 landslides and 16 conditioning factors, is used as the study case, and three typical machine learning models, i.e., multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) the low-pass filter can effectively reduce the random errors in the conditioning factors and thereby decrease the LSP uncertainties; (2) as the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously; (3) the original factors-based models are feasible for LSP in the absence of more accurate conditioning factors; (4) the influences of the two uncertainty sources, the machine learning models and the different proportions of random errors, on LSP modeling are large and roughly equal; and (5) Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in the conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors. Funding: National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062); National Science Fund for Distinguished Young Scholars of China (Grant No. 52222905).
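A minimal sketch of the factor-preparation step described above, with details the abstract does not give filled in by assumption (a synthetic one-dimensional factor profile, uniform proportional noise, and a fourth-order Butterworth low-pass filter):

```python
# Add a proportional random error to a conditioning-factor profile and suppress
# it with a low-pass filter, mimicking the "errors-based" vs. "low-pass
# filter-based" factor preparation.
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(42)

# A synthetic, smoothly varying conditioning factor (e.g. elevation along a transect).
x = np.linspace(0.0, 10.0, 500)
factor = 200.0 + 50.0 * np.sin(0.8 * x) + 10.0 * np.cos(2.0 * x)

def add_proportional_error(values, proportion):
    """Add zero-mean random error whose amplitude is a proportion of each value."""
    return values * (1.0 + proportion * rng.uniform(-1.0, 1.0, values.shape))

def low_pass(values, cutoff=0.05, order=4):
    """Zero-phase low-pass filtering (cutoff as a fraction of the Nyquist rate)."""
    b, a = butter(order, cutoff)
    return filtfilt(b, a, values)

for proportion in (0.05, 0.10, 0.15, 0.20):       # the 5%-20% error levels
    noisy = add_proportional_error(factor, proportion)
    smoothed = low_pass(noisy)
    rmse_noisy = np.sqrt(np.mean((noisy - factor) ** 2))
    rmse_smooth = np.sqrt(np.mean((smoothed - factor) ** 2))
    print(f"{proportion:.0%}: RMSE noisy={rmse_noisy:6.2f}, filtered={rmse_smooth:6.2f}")
```

In this synthetic setting the filtered profile sits much closer to the clean factor than the noisy one at every error level, which is the qualitative behaviour the study reports for its low-pass filter-based models.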
The accuracy of landslide susceptibility prediction (LSP) mainly depends on the precision of the landslide spatial positions. However, spatial position errors in landslide surveys are inevitable, resulting in considerable uncertainties in LSP modeling. To overcome this drawback, this study explores the influence of landslide spatial position errors on LSP uncertainties and then innovatively proposes a semi-supervised machine learning model to reduce the landslide spatial position error. Taking Shangyou County, China, as an example, 16 environmental factors and 337 landslides with accurate spatial positions were collected. The 30–110 m error-based multilayer perceptron (MLP) and random forest (RF) models for LSP are established by randomly offsetting the original landslides by 30, 50, 70, 90 and 110 m, as sketched below. The LSP uncertainties are analyzed through the LSP accuracy and distribution characteristics. Finally, a semi-supervised model is proposed to relieve the LSP uncertainties. Results show that: (1) the LSP accuracies of the error-based RF/MLP models decrease as the landslide position errors increase, and are lower than those of the original data-based models; (2) 70 m error-based models can still reflect the overall distribution characteristics of the landslide susceptibility indices, so original landslides with certain position errors are acceptable for LSP; and (3) the semi-supervised machine learning model can efficiently reduce the landslide position errors and thus improve the LSP accuracies. Funding: National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062); Interdisciplinary Innovation Fund of Natural Science, Nanchang University (Grant No. 9167-28220007-YB2107).
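The error-injection step can be pictured with a short sketch; the assumption that each landslide is shifted a fixed distance in a uniformly random direction is my reading of the abstract, and the coordinates are synthetic:

```python
# Perturb landslide coordinates by 30-110 m to build "error-based" samples.
import numpy as np

rng = np.random.default_rng(7)

def offset_points(xy, distance_m):
    """Shift each (x, y) point by `distance_m` metres in a random direction."""
    theta = rng.uniform(0.0, 2.0 * np.pi, size=len(xy))
    shift = np.column_stack([np.cos(theta), np.sin(theta)]) * distance_m
    return xy + shift

# Toy projected coordinates (metres) standing in for the 337 surveyed landslides.
landslides = rng.uniform(0, 50_000, size=(337, 2))

for d in (30, 50, 70, 90, 110):
    shifted = offset_points(landslides, d)
    mean_err = np.mean(np.linalg.norm(shifted - landslides, axis=1))
    print(f"offset {d:3d} m -> mean positional error {mean_err:.1f} m")
```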
Neuropathy is nerve damage that can cause chronic neuropathic pain, which is challenging to cure and carries a significant financial burden. Exercise therapies, including high-intensity interval training (HIIT) and steady-state cardio, are being explored as potential treatments for neuropathic pain. This systematic review compares the effectiveness of HIIT and steady-state cardio for improving function in neurological patients. The article provides an overview of the systematic review conducted on the effects of exercise on neuropathic patients, with a focus on HIIT and steady-state cardio. The authors conducted a comprehensive search of various databases, identified relevant studies based on predetermined inclusion criteria, and used the EPPI automation application to process the data. The final selection of studies was based on validity and relevance, with redundant articles removed. The article reviews four studies that compare HIIT with moderate-intensity continuous training (MICT) on various health outcomes. The studies found that HIIT can improve aerobic fitness, cerebral blood flow, and brain function in stroke patients; lower diastolic blood pressure more than MICT; and improve insulin sensitivity and skeletal muscle mitochondrial content in obese individuals, potentially helping with the prevention and management of type 2 diabetes. In people with multiple sclerosis, acute exercise can decrease plasma neurofilament light chain while increasing flow through the kynurenine pathway. The available clinical and preclinical data suggest that further study of HIIT and its potential to alleviate neuropathic pain is justified. Randomized controlled trials are needed to investigate the type, intensity, frequency, and duration of exercise, which could lead to consensus and specific HIIT-based advice for patients with neuropathies.
Laser tracers are three-dimensional coordinate measurement systems that are widely used in industrial measurement. We propose a geometric error identification method based on multi-station synchronized laser tracers to enable rapid, high-precision measurement of the geometric errors of gantry-type computer numerical control (CNC) machine tools. The method also addresses the measurement efficiency issues of the existing single-base-station method and the multi-base-station time-sharing method. Considering a three-axis gantry-type CNC machine tool, the geometric error mathematical model is derived and established by combining screw theory with a topological analysis of the machine kinematic chain. The positions of the four laser tracer stations and the measurement points are determined based on the multi-point positioning principle. A self-calibration algorithm is proposed for the coordinate calibration of the laser tracers using the Levenberg-Marquardt nonlinear least squares method, and the geometric error is solved using a first-order Taylor linearization iteration. The experimental results show that the geometric errors calculated with this modeling method are comparable to the results from the Etalon laser tracer. For a volume of 800 mm × 1000 mm × 350 mm, the maximum differences of the linear, angular, and spatial position errors were 2.0 μm, 2.7 μrad, and 12.0 μm, respectively, which verifies the accuracy of the proposed algorithm. This research proposes a modeling method for the precise measurement of machine tool errors, and its applied nature makes it relevant both to researchers and to the industrial sector. Funding: Natural Science Foundation of Shaanxi Province (Grant No. 2021JM010); Suzhou Municipal Natural Science Foundation (Grant Nos. SYG202018, SYG202134).
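The multi-point positioning principle mentioned above can be illustrated with a toy Levenberg-Marquardt fit; the station layout, noise level, and use of SciPy are assumptions for the sketch and not the paper's calibration routine:

```python
# Recover a measurement point from distances observed by four tracer stations
# using Levenberg-Marquardt nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

# Assumed station coordinates (mm) spanning the working volume.
stations = np.array([
    [0.0,      0.0,   0.0],
    [800.0,    0.0,   0.0],
    [0.0,   1000.0,   0.0],
    [800.0, 1000.0, 350.0],
])

true_point = np.array([420.0, 515.0, 180.0])                 # ground truth (mm)
rng = np.random.default_rng(1)
measured = np.linalg.norm(stations - true_point, axis=1)
measured += rng.normal(0.0, 0.002, size=4)                   # ~2 um ranging noise

def residuals(p):
    """Difference between modelled and measured station-to-point distances."""
    return np.linalg.norm(stations - p, axis=1) - measured

sol = least_squares(residuals, x0=np.array([400.0, 500.0, 200.0]), method="lm")
print("estimated point (mm):", sol.x)
print("position error (um):", 1e3 * np.linalg.norm(sol.x - true_point))
```

The full self-calibration additionally treats the station coordinates as unknowns over many measured points; the structure of the residual function stays the same.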
In the era of exponentially growing data availability, system architectures trend toward high dimensionality, and directly exploiting holistic information for state inference is not always computationally affordable. This paper proposes a novel Bayesian filtering algorithm that balances computational cost and estimation accuracy for high-dimensional linear systems. The high-dimensional state vector is divided into several blocks to save computational resources by avoiding the calculation of error covariances with immense dimensions. Two sequential states are then estimated simultaneously by introducing an auxiliary variable in the new probability space, mitigating the performance degradation caused by state segmentation. Moreover, the computational cost and error covariance of the proposed algorithm are analyzed analytically to show its distinct features compared with several existing methods. Simulation results illustrate that the proposed Bayesian filter can maintain higher estimation accuracy with reasonable computational cost when applied to high-dimensional linear systems. Funding: National Key R&D Program of China (2022YFC3401303); Natural Science Foundation of Jiangsu Province (BK20211528); Postgraduate Research & Practice Innovation Program of Jiangsu Province (KFCX22_2300).
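A much-simplified sketch of the block idea (my simplification, not the paper's algorithm, and ignoring cross-block covariance entirely) shows why segmentation reduces the covariance-update cost from O(n³) to roughly B blocks of O((n/B)³):

```python
# Block-wise Kalman filtering: partition the state into blocks and run a
# standard Kalman update per block, treating dynamics and observations as
# block-diagonal (an approximation when blocks are actually coupled).
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One predict/update cycle of a standard linear Kalman filter."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

def blockwise_kalman_step(x, P_blocks, F, Q, H, R, z, n_blocks):
    """Apply kalman_step independently to each block of the state."""
    n = len(x) // n_blocks
    x_out, P_out = x.copy(), []
    for b in range(n_blocks):
        s = slice(b * n, (b + 1) * n)
        xb, Pb = kalman_step(x[s], P_blocks[b], F[s, s], Q[s, s], H[s, s], R[s, s], z[s])
        x_out[s] = xb
        P_out.append(Pb)
    return x_out, P_out

# Tiny demo: a 12-dimensional random-walk state tracked in 3 blocks of 4.
n, B = 12, 3
rng = np.random.default_rng(0)
F, H = np.eye(n), np.eye(n)
Q, R = 0.01 * np.eye(n), 0.1 * np.eye(n)
x_est = np.zeros(n)
P_blocks = [np.eye(n // B) for _ in range(B)]
truth = np.zeros(n)
for _ in range(50):
    truth = F @ truth + rng.multivariate_normal(np.zeros(n), Q)
    z = H @ truth + rng.multivariate_normal(np.zeros(n), R)
    x_est, P_blocks = blockwise_kalman_step(x_est, P_blocks, F, Q, H, R, z, B)
print("final RMS estimation error:", np.sqrt(np.mean((x_est - truth) ** 2)))
```

The paper's auxiliary-variable construction is what compensates for the coupling that this naive per-block treatment discards.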
In this paper, an antenna array composed of a circular array and an orthogonal linear array is proposed, using a long- and short-baseline orthogonal linear array design and a circular-array ambiguity resolution design based on multi-group baseline clustering. The effectiveness of the antenna array is verified by extensive simulation and experiment. After system deviation correction, it is found that in the L/S/C/X frequency bands the ambiguity resolution probability is high and the phase-difference system error between channels is essentially the same. The angle measurement error is less than 0.5°, and the positioning error is less than 2.5 km. Notably, as the center frequency increases, the calibration consistency improves and the calibration frequency points become applicable over a wider frequency range. At a center frequency of 11.5 GHz, the calibration frequency point bandwidth extends to 1200 MHz. This combined antenna array deployment holds significant promise for a wide range of applications in contemporary wireless communication systems.
Introduction: Undetected refractive errors constitute a health problem among school children, who cannot take full advantage of educational opportunities. The authors studied the prevalence of refractive errors in school children aged 5 to 15 at CHU-IOTA. Patients and Method: This was a prospective, descriptive cross-sectional study carried out in the ophthalmic pediatrics department of CHU-IOTA from October to November 2023. Results: We received 340 school children aged 5 to 15, among whom 111 presented ametropia, i.e., a prevalence of 32.65%. The average age was 11.42 ± 2.75 years, with a sex ratio of 0.59. The average visual acuity was 4/10 (range 1/10 to 10/10). The refractive defects found were astigmatism in 73.87% of cases, hyperopia in 23.87% and myopia in 2.25%. A decline in distance visual acuity was the most common functional sign. Ocular abnormalities associated with ametropia were dominated by allergic conjunctivitis (26.13%) and papillary excavation (6.31%) in astigmatics; allergic conjunctivitis (9.01%) and papillary excavation (7.20%) in hyperopic patients; and turbid vitreous (0.90%), myopic choroidosis (0.45%) and allergic conjunctivitis (0.45%) in myopes. Conclusion: Refractive errors are a reality and a major public health problem among school children.
An externally generated resonant magnetic perturbation can induce complex non-ideal MHD responses at its resonant surfaces. We have studied the plasma responses using Fitzpatrick's improved two-fluid model and the LAYER code, and calculated the error field penetration threshold for J-TEXT. In addition, we find that the island width increases slightly as the error field amplitude increases while the amplitude remains below the critical penetration value. However, the island width suddenly jumps to a large value once the shielding effect of the plasma against the error field disappears after penetration. By scanning the natural mode frequency, we find that the shielding effect of the plasma decreases as the natural mode frequency decreases. Finally, we obtain the m/n = 2/1 penetration threshold scaling with density and temperature. Funding: National Natural Science Foundation of China (Grant No. 51821005).
Timer error, as well as its sign convention, is very important for dose accuracy during irradiation. This paper determines the timer error of the irradiators at the Secondary Standard Dosimetry Laboratory (SSDL) in Nigeria: the Cs-137 OB6 irradiator and the X-ray irradiator at the protection-level SSDL, and the Co-60 irradiator at the therapy-level SSDL. A PTW UNIDOS electrometer and an LS01 ionization chamber were used at the protection level to obtain doses for both the Cs-137 OB6 and X-ray irradiators, while an IBA Farmer-type ionization chamber and an IBA DOSE 1 electrometer were used at the therapy-level SSDL. The single/multiple exposure method and the graphical method were used to determine the timer error of the three irradiators. The timer error obtained for the Cs-137 OB6 irradiator was 0.48 ± 0.01 s, that for the X-ray irradiator was 0.09 ± 0.01 s, and that for the Co-60 GammaBeam X200 was 1.21 ± 0.04 s. It was observed that the timer error is not affected by the source-to-detector distance or the field size. The timer error of the Co-60 GammaBeam X200 irradiator (the only one of the irradiators with a pneumatic system) was observed to increase with the age of the machine.
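A minimal sketch of the graphical method with synthetic readings (the dose rate, timer error, and noise values below are assumed, not the paper's data): if each set time t actually delivers an effective time t + tau, the electrometer reading is linear in t, and tau follows from the intercept-to-slope ratio of a straight-line fit.

```python
# Graphical determination of timer error: M = Mdot * (t + tau), so a linear fit
# of the readings against the set times gives tau = intercept / slope.
import numpy as np

rng = np.random.default_rng(3)
t_set = np.array([30.0, 60.0, 90.0, 120.0, 180.0, 240.0])    # set times (s)
Mdot_true, tau_true = 0.85, 0.48                              # nC/s and s (assumed)
readings = Mdot_true * (t_set + tau_true)
readings += rng.normal(0.0, 0.05, size=t_set.size)            # electrometer noise

slope, intercept = np.polyfit(t_set, readings, 1)
tau_est = intercept / slope
print(f"estimated dose rate = {slope:.3f} nC/s, timer error = {tau_est:.2f} s")
```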
The widespread adoption of the Internet of Things (IoT) has transformed various sectors globally, making them more intelligent and connected. However, this advancement comes with challenges related to the effectiveness of IoT devices. These devices, present in offices, homes, industries, and more, need constant monitoring to ensure their proper functionality. The success of smart systems relies on their seamless operation and ability to handle faults. Sensors, crucial components of these systems, gather data and contribute to their functionality; therefore, sensor faults can compromise the system's reliability and undermine the trustworthiness of smart environments. To address these concerns, various techniques and algorithms can be employed to enhance the performance of IoT devices through effective fault detection. This paper conducts a thorough review of the existing literature and a detailed analysis that links sensor errors with the prominent fault detection techniques capable of addressing them. The study is innovative because it paves the way for future researchers to explore errors that have not yet been tackled by existing fault detection methods. Significantly, the paper also highlights essential factors for selecting and adopting fault detection techniques, as well as the characteristics of datasets and their corresponding recommended techniques. Additionally, the paper presents a methodical overview of fault detection techniques employed in smart devices, including the metrics used for evaluation. Furthermore, the paper examines the body of academic work related to sensor faults and fault detection techniques within the domain, reflecting the growing inclination and scholarly attention of researchers and academicians toward strategies for fault detection within the realm of the Internet of Things.
Numerical weather prediction (NWP) models have always presented large forecast errors for surface wind speeds over regions with complex terrain. In this study, surface wind forecasts from an operational NWP model, the SMS-WARR (Shanghai Meteorological Service-WRF ADAS Rapid Refresh System), are analyzed to quantitatively reveal the relationships between surface wind speed forecast errors and terrain features, with the intent of providing clues for better applying the NWP model to complex terrain regions. The terrain features are described by three parameters: the standard deviation of the model grid-scale orography, the terrain height error of the model, and the slope angle. The results show that the forecast bias has a unimodal distribution with respect to the standard deviation of orography; the minimum ME (the mean value of the bias) is 1.2 m s⁻¹ when the standard deviation is between 60 and 70 m. A positive correlation exists between the bias and the terrain height error, with the ME increasing by 10%-30% for every 200 m increase in terrain height error. The ME decreases by 65.6% when the slope angle increases from (0.5°-1.5°) to larger than 3.5° for uphill winds, but increases by 35.4% when the absolute value of the slope angle increases from (0.5°-1.5°) to (2.5°-3.5°) for downhill winds. Several sensitivity experiments are carried out with a model output statistics (MOS) calibration model for surface wind speeds, and the ME (RMSE) is reduced by 90% (30%) by introducing the terrain parameters, demonstrating the value of this study. Funding: National Natural Science Foundation of China (No. U2142206).
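A minimal sketch of a linear MOS correction driven by the three terrain parameters; the data, coefficients, and regression form are synthetic assumptions, since the abstract does not specify the calibration model:

```python
# Regress the forecast wind-speed bias on the terrain parameters, then subtract
# the predicted bias from the raw forecast (a simple linear MOS correction).
import numpy as np

rng = np.random.default_rng(11)
n = 2000

# Terrain predictors: std of subgrid orography (m), terrain height error (m), slope angle (deg).
sigma_oro = rng.uniform(0.0, 200.0, n)
dz_err = rng.uniform(-400.0, 400.0, n)
slope = rng.uniform(0.0, 5.0, n)

# Synthetic "truth": the bias depends on the terrain parameters plus noise.
bias_true = 0.004 * sigma_oro + 0.002 * dz_err - 0.3 * slope + rng.normal(0.0, 0.4, n)
wind_obs = rng.uniform(1.0, 12.0, n)
wind_fcst = wind_obs + bias_true

# Fit the MOS regression on a training split and apply it to the rest.
X = np.column_stack([np.ones(n), sigma_oro, dz_err, slope])
train, test = slice(0, 1500), slice(1500, None)
coef, *_ = np.linalg.lstsq(X[train], (wind_fcst - wind_obs)[train], rcond=None)
wind_corr = wind_fcst[test] - X[test] @ coef

me_raw = np.mean(wind_fcst[test] - wind_obs[test])
me_cor = np.mean(wind_corr - wind_obs[test])
print(f"ME before: {me_raw:.2f} m/s, after MOS correction: {me_cor:.2f} m/s")
```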
In this paper, let M_n denote the maximum of the logarithmic general error distribution with parameter v ≥ 1. Higher-order expansions for the distributions of the powered extremes M_n^p are derived under an optimal choice of normalizing constants. It is shown that when v = 1, M_n^p converges to the Fréchet extreme value distribution at the rate 1/n, and when v > 1, M_n^p converges to the Gumbel extreme value distribution at the rate (log log n)^2 / (log n)^(1 - 1/v).
Introduction: The WHO estimates that uncorrected refractive errors are the leading cause of visual impairment and the second leading cause of blindness globally. University students are prone to developing refractive errors because their curriculum requires a great deal of near work, which unknowingly affects their performance and quality of life. Genetic and environmental factors are thought to play a role in the development of refractive errors. This study addresses the paucity of knowledge about refractive errors among university students in East Africa, providing a foundation for further research. Objectives: To determine the prevalence of, and factors associated with, refractive errors among students in the Faculty of Medicine at Mbarara University of Science and Technology. Methodology: This was a cross-sectional descriptive and analytical study in which 368 undergraduate students, selected using random sampling, were assessed for refractive errors from March 2021 to July 2021. Eligible participants were recruited and their visual acuity (VA) assessed after answering a questionnaire. Students whose VA improved with a pinhole underwent subjective retinoscopy, and the results were compiled and imported into STATA 14 for analysis. Results: The prevalence of refractive errors among the university students was 26.36% (95% CI), with myopia the most common: myopia accounted for 60%, followed by astigmatism at 37% and hyperopia at 3% among the medical students. Astigmatism consisted largely of myopic astigmatism, 72% (26), with the remaining 28% (10) compound/mixed astigmatism. A positive family history of refractive error had a statistically significant relationship with refractive error, with an AOR of 1.68 (95% CI 1.04-2.72) and P = 0.032. Conclusion: The prevalence of refractive errors among university students, especially myopia, was found to be high, and family history was associated with students having refractive errors.
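For context, an illustrative calculation (not a figure reported by the study): a prevalence of 26.36% in a sample of 368 corresponds to about 97 students, and a normal-approximation 95% confidence interval around it would be

```latex
\[
  \hat p \pm 1.96\sqrt{\frac{\hat p\,(1-\hat p)}{n}}
  = 0.2636 \pm 1.96\sqrt{\frac{0.2636 \times 0.7364}{368}}
  \approx 0.264 \pm 0.045 ,
\]
```

i.e. roughly 21.9% to 30.9%.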