To effectively extract multi-scale information from observation data and improve computational efficiency, a multi-scale second-order autoregressive recursive filter (MSRF) method is designed. The second-order autoregressive filter used in this study replaces the traditional first-order recursive filter used in the spatial multi-scale recursive filter (SMRF) method. The experimental results indicate that the MSRF scheme successfully extracts the various scales of information resolved by observations. Moreover, compared with the SMRF scheme, the MSRF scheme improves computational accuracy and efficiency to some extent. The MSRF scheme can not only propagate an innovation over a longer distance without attenuation, but also reduces the mean absolute deviation between the reconstructed sea ice concentration results and observations by about 3.2% compared to the SMRF scheme. On the other hand, compared with the traditional first-order recursive filters used in the SMRF scheme, in which multiple filter passes are executed, the MSRF scheme only needs to perform two filtering passes per iteration, greatly improving filtering efficiency. In the two-dimensional sea ice concentration experiment, the calculation time of the MSRF scheme is only 1/7 of that of the SMRF scheme. This means that the MSRF scheme can achieve better performance at less computational cost, which is of great significance for further application in real-time ocean or sea ice data assimilation systems in the future.
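The core idea — replacing repeated first-order recursive passes with a single second-order autoregressive pass in each direction — can be pictured with a minimal one-dimensional sketch. Everything below (the AR(2) coefficients, the step-shaped "innovation") is illustrative and not taken from the paper:

```python
def ar2_pass(x, a1, a2):
    """One directional pass of y[i] = b*x[i] + a1*y[i-1] + a2*y[i-2],
    with b chosen so the filter has unit DC gain (b = 1 - a1 - a2)."""
    b = 1.0 - a1 - a2
    y = list(x)
    for i in range(len(x)):
        y1 = y[i - 1] if i >= 1 else x[0]   # warm-start at the boundary value
        y2 = y[i - 2] if i >= 2 else x[0]
        y[i] = b * x[i] + a1 * y1 + a2 * y2
    return y

def ar2_smooth(x, a1=0.9, a2=-0.2):
    """Forward pass, then backward pass for zero phase lag."""
    forward = ar2_pass(x, a1, a2)
    return ar2_pass(forward[::-1], a1, a2)[::-1]

signal = [0.0] * 10 + [1.0] * 10      # step-shaped innovation
smoothed = ar2_smooth(signal)
```

With stable real positive poles (here 0.5 and 0.4) the forward-backward combination smooths the step without phase shift; a full multi-scale implementation would tune the coefficients to each target correlation scale.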
In this article, we consider the asymptotic behavior of the extreme value distribution with extreme value index γ > 0. Rates of uniform convergence to the Fréchet distribution are constructed under the second-order regular variation condition.
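For reference, the distribution and condition involved can be stated as follows; this uses a common textbook normalization (in terms of the tail quantile function $U$), which may differ from the article's exact formulation:

```latex
% Fréchet extreme value distribution with index \gamma > 0:
\Phi_\gamma(x) = \exp\!\left(-x^{-1/\gamma}\right), \qquad x > 0.
% Second-order regular variation condition on U(t) = F^{\leftarrow}(1 - 1/t):
\lim_{t \to \infty} \frac{U(tx)/U(t) - x^{\gamma}}{A(t)}
  = x^{\gamma}\,\frac{x^{\rho} - 1}{\rho}, \qquad x > 0,
% for some \rho \le 0 and an auxiliary function A(t) \to 0, whose rate
% governs the speed of uniform convergence to \Phi_\gamma.
```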
Second-order axially moving systems are common models in the field of dynamics, such as axially moving strings, cables, and belts. In traditional research work, it is difficult to obtain closed-form solutions for the forced vibration when the damping effect and the coupling effect of multiple second-order models are considered. In this paper, Green's function method based on the Laplace transform is used to obtain closed-form solutions for the forced vibration of second-order axially moving systems. By taking the axially moving damped string system and a multi-string system connected by springs as examples, the detailed solution methods and the analytical Green's functions of these second-order systems are given. The mode functions and frequency equations are also obtained from the derived Green's functions. The reliability and convenience of the results are verified by several examples. This paper provides a systematic analytical method for the dynamic analysis of second-order axially moving systems, and the obtained Green's functions are applicable to different second-order systems rather than just string systems. In addition, this work also has positive significance for the study of forced vibration in higher-order systems.
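As an illustration of the kind of representation obtained (a generic sketch, not the paper's specific derivation), consider a traveling string moving at axial speed $v$ with viscous damping $\eta$; for time-harmonic forcing, the Green's function reduces the forced response to a quadrature:

```latex
% Axially moving damped string (illustrative form):
\rho\left(w_{tt} + 2v\,w_{xt} + v^{2} w_{xx}\right) - T\,w_{xx} + \eta\,w_t = f(x,t).
% For harmonic forcing f(x,t) = F(x)e^{i\omega t}, write w(x,t) = W(x)e^{i\omega t};
% the steady-state amplitude follows from the Green's function G of the
% resulting ODE in x:
W(x) = \int_{0}^{L} G(x,\xi;\omega)\, F(\xi)\,\mathrm{d}\xi ,
% and the natural frequencies are the roots \omega_n of the frequency
% equation \Delta(\omega) = 0 associated with the poles of G.
```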
AIM: To investigate the prevalence of visual impairment (VI) and provide an estimation of uncorrected refractive errors in school-aged children, conducted by optometry students as a community service. METHODS: The study was cross-sectional. A total of 3343 participants were included. The initial examination assessed uncorrected distance visual acuity (UDVA) and visual acuity (VA) with a +2.00 D lens. The inclusion criteria for a subsequent comprehensive cycloplegic eye examination, performed by an optometrist, were as follows: a UDVA <0.6 decimal (0.20 logMAR) and/or a VA with +2.00 D ≥0.8 decimal (0.10 logMAR). RESULTS: The sample had a mean age of 10.92±2.13 years (range 4 to 17 years), and 51.3% of the children were female (n=1715). The majority of the children (89.7%) fell within the age range of 8 to 14 years. Among the ethnic groups, the highest representation was from the Luhya group (60.6%), followed by Luo (20.4%). The mean logMAR UDVA, choosing the best eye for each student, was 0.29±0.17 (range 1.70 to 0.22). In total, 246 participants (7.4%) had a full eye examination. The estimated prevalence of myopia (defined as spherical equivalent ≤-0.5 D) was 1.45% of the total sample, while around 0.18% of the total sample had a hyperopia value exceeding +1.75 D. Refractive astigmatism (cylinder <-0.75 D) was found in 0.21% (7/3343) of the children. The VI prevalence was 1.26% of the total sample. Among the cases of VI, 76.2% could be attributed to uncorrected refractive error. Amblyopia was detected in 0.66% (22/3343) of the screened children. There was no statistically significant correlation observed between age or gender and refractive values. CONCLUSION: The primary cause of VI is uncorrected refractive error, with myopia being the most prevalent refractive error observed. These findings underscore the significance of early identification and correction of refractive errors in school-aged children as a means to alleviate the impact of VI.
In this paper, an efficient unequal error protection (UEP) scheme for online fountain codes is proposed. In the build-up phase, a traversing-selection strategy is proposed to select the most important symbols (MIS). Then, in the completion phase, a weighted-selection strategy is applied to provide low overhead. The performance of the proposed scheme is analyzed and compared with the existing UEP online fountain scheme. Simulation results show that, in terms of the MIS and the least important symbols (LIS), when the bit error ratio is 10^-4, the proposed scheme achieves overhead reductions of 85% and 31.58%, respectively.
In the existing landslide susceptibility prediction (LSP) models, the influences of random errors in landslide conditioning factors on LSP are not considered; instead, the original conditioning factors are directly taken as the model inputs, which brings uncertainties to the LSP results. This study aims to reveal how different proportions of random error in the conditioning factors affect LSP uncertainties, and further to explore a method that can effectively reduce the random errors in the conditioning factors. The original conditioning factors are first used to construct original factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Secondly, low-pass filter-based LSP models are constructed by eliminating the random errors using a low-pass filter method. Thirdly, Ruijin County, China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e., multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as the LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) The low-pass filter can effectively reduce the random errors in the conditioning factors and thus decrease the LSP uncertainties. (2) As the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously. (3) The original factors-based models are feasible for LSP in the absence of more accurate conditioning factors. (4) The influence degrees of the two uncertainty issues, machine learning models and different proportions of random errors, on LSP modeling are large and basically the same. (5) The Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in the conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
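The error-reduction step can be pictured with a one-dimensional toy version: add a known proportion of random error to a smooth "factor" profile, low-pass it with a simple centred moving average (standing in for whatever low-pass design the study uses), and compare mean absolute errors. All values here are made up for illustration:

```python
import random

def moving_average(x, k=5):
    """Crude low-pass filter: centred moving average with window k."""
    h = k // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - h): i + h + 1]
        out.append(sum(window) / len(window))
    return out

random.seed(0)
true_factor = [i / 99.0 for i in range(100)]                   # smooth "true" factor
noisy = [v + random.uniform(-0.1, 0.1) for v in true_factor]   # added random error
filtered = moving_average(noisy)

def mae(a, b):
    return sum(abs(u - v) for u, v in zip(a, b)) / len(a)

mae_noisy, mae_filtered = mae(noisy, true_factor), mae(filtered, true_factor)
```

The filtered profile sits closer to the true factor than the noisy one, which is the effect the low-pass filter-based LSP models exploit.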
The accuracy of landslide susceptibility prediction (LSP) mainly depends on the precision of the landslide spatial positions. However, spatial position errors in landslide surveys are inevitable, resulting in considerable uncertainties in LSP modeling. To overcome this drawback, this study explores the influence of landslide spatial position errors on LSP uncertainties, and then innovatively proposes a semi-supervised machine learning model to reduce the landslide spatial position error. This paper collected 16 environmental factors and 337 landslides with accurate spatial positions, taking Shangyou County, China, as an example. The 30-110 m error-based multilayer perceptron (MLP) and random forest (RF) models for LSP are established by randomly offsetting the original landslides by 30, 50, 70, 90 and 110 m. The LSP uncertainties are analyzed through the LSP accuracy and distribution characteristics. Finally, a semi-supervised model is proposed to relieve the LSP uncertainties. Results show that: (1) The LSP accuracies of the error-based RF/MLP models decrease with increasing landslide position error, and are lower than those of the original data-based models; (2) The 70 m error-based models can still reflect the overall distribution characteristics of the landslide susceptibility indices, so original landslides with certain position errors are acceptable for LSP; (3) The semi-supervised machine learning model can efficiently reduce the landslide position errors and thus improve the LSP accuracies.
This study is concerned with the three-dimensional (3D) stagnation-point mixed convection flow past a vertical surface, considering first-order and second-order velocity slips. To the authors' knowledge, this is the first study presenting this very interesting analysis. The nonlinear partial differential equations for the flow problem are transformed into nonlinear ordinary differential equations (ODEs) by using an appropriate similarity transformation. These ODEs, with the corresponding boundary conditions, are numerically solved by utilizing the bvp4c solver in the MATLAB programming language. The effects of the governing parameters on the non-dimensional velocity profiles, temperature profiles, skin friction coefficients, and the local Nusselt number are presented in detail through a series of graphs and tables. Interestingly, it is reported that the reduced skin friction coefficient decreases for the assisting flow situation and increases for the opposing flow situation. The numerical computations of the present work are compared with those from other research available for specific situations, and an excellent consensus is observed. Another exciting feature of this work is the existence of dual solutions; an important remark is that dual solutions exist for both assisting and opposing flows. A linear stability analysis is performed, showing that one solution is stable and the other is not. We notice that the mixed convection and velocity slip parameters have strong effects on the flow characteristics. These effects are depicted in graphs and discussed in this paper. The obtained results show that the first-order and second-order slip parameters have a considerable effect on the flow as well as on the heat transfer characteristics.
In this paper, we define some new sets of non-elementary functions in a group of solutions x(t) that are sine and cosine to the upper limit of integration in a non-elementary integral, which can be arbitrary. We use Abel's methods, as described by Armitage and Eberlein. The key is to start with a non-elementary integral function, differentiate and invert, and then define a set of three functions that belong together. Differentiating these functions twice gives second-order nonlinear ODEs that have the defined set of functions as solutions. We study some of these second-order nonlinear ODEs, especially those that exhibit limit cycles. Using the methods described in this paper, it is possible to define many other sets of non-elementary functions that give solutions to some second-order nonlinear autonomous ODEs.
Phasor measurement units (PMUs) provide Global Positioning System (GPS) time-stamped synchronized measurements of voltage and current, together with the phase angle of the system, at certain points along the grid. These synchronized measurements are extracted in the form of amplitude and phase from various locations of the power grid to monitor and control the power system condition. A PMU device is a crucial part of the power equipment from both a cost and an operational point of view, so ongoing development and improvement of the PMU's principal workings is essential for network operators to enhance grid quality and reduce operating expenses. This paper introduces a low-cost and less complex technique to optimize the performance of a PMU using a second-order Kalman filter. It is based on the asynchrophasor technique, which minimizes the phase error when receiving the signal from an access point or from the main access point. A MATLAB model was created to implement the proposed method in the presence of Gaussian and non-Gaussian noise. The results, evaluated using the mean square error (MSE), show that the proposed second-order Kalman filter outperforms the existing model. The proposed second-order Kalman filter method replaces the synchronization unit in the PMU structure, which clarifies the significance of the proposed new PMU.
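A second-order ("phase plus frequency") Kalman filter over a scalar phase measurement can be sketched in a few lines. The state model, noise levels and test signal below are illustrative stand-ins, not the PMU model from the paper:

```python
import random

def kalman_phase(zs, dt=1.0, q=1e-4, r=0.04):
    """Track state [phase, frequency] from scalar phase measurements zs,
    using a constant-frequency process model (second-order state)."""
    x = [zs[0], 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    estimates = []
    for z in zs:
        # predict: phase advances by frequency * dt; P' = F P F^T + Q
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with scalar measurement z = phase + noise (H = [1, 0])
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        innov = z - x[0]
        x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        estimates.append(x[0])
    return estimates

random.seed(1)
true_phase = [0.01 * t for t in range(200)]              # slow phase ramp
meas = [p + random.gauss(0.0, 0.2) for p in true_phase]  # noisy measurements
est = kalman_phase(meas)

mse = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)
mse_meas, mse_est = mse(meas, true_phase), mse(est, true_phase)
```

The second state (frequency) lets the filter follow a ramping phase without steady-state lag, which is the advantage of a second-order model over a plain first-order smoother.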
Laser tracers are three-dimensional coordinate measurement systems that are widely used in industrial measurement. We propose a geometric error identification method based on multi-station synchronized laser tracers to enable the rapid and high-precision measurement of geometric errors for gantry-type computer numerical control (CNC) machine tools. This method also addresses the measurement efficiency issues of the existing single-base-station measurement method and the multi-base-station time-sharing measurement method. We consider a three-axis gantry-type CNC machine tool, and the geometric error mathematical model is derived and established based on the combination of screw theory and a topological analysis of the machine kinematic chain. The positions of the four laser tracer stations and the measurement points are determined based on the multi-point positioning principle. A self-calibration algorithm is proposed for the coordinate calibration process of a laser tracer using the Levenberg-Marquardt nonlinear least squares method, and the geometric error is solved using Taylor's first-order linearization iteration. The experimental results show that the geometric error calculated with this modeling method is comparable to the results from the Etalon laser tracer. For a volume of 800 mm×1000 mm×350 mm, the maximum differences of the linear, angular, and spatial position errors were 2.0 μm, 2.7 μrad, and 12.0 μm, respectively, which verifies the accuracy of the proposed algorithm. This research proposes a modeling method for the precise measurement of machine tool errors, and the applied nature of this study makes it relevant both to researchers and to those in the industrial sector.
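The multi-point positioning step — fixing a point from distance measurements to four known stations — reduces to a small nonlinear least-squares problem. The sketch below uses plain Gauss-Newton in place of the Levenberg-Marquardt solver from the paper, with made-up station and point coordinates:

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def locate(stations, dists, x0, iters=25):
    """Gauss-Newton on the ranging residuals r_i = |x - s_i| - d_i."""
    x = list(x0)
    for _ in range(iters):
        J, r = [], []
        for s, d in zip(stations, dists):
            diff = [x[k] - s[k] for k in range(3)]
            rng = math.sqrt(sum(c * c for c in diff))
            J.append([c / rng for c in diff])   # gradient of |x - s_i|
            r.append(rng - d)
        # normal equations (J^T J) dx = -J^T r
        A = [[sum(row[a] * row[b] for row in J) for b in range(3)] for a in range(3)]
        g = [-sum(J[i][a] * r[i] for i in range(len(J))) for a in range(3)]
        dx = solve3(A, g)
        x = [x[k] + dx[k] for k in range(3)]
    return x

stations = [(0, 0, 0), (800, 0, 0), (0, 1000, 0), (0, 0, 350)]  # illustrative layout
truth = (400.0, 500.0, 150.0)
dists = [math.dist(truth, s) for s in stations]                 # noise-free ranges
est = locate(stations, dists, x0=(500.0, 400.0, 200.0))
```

With noise-free ranges, Gauss-Newton recovers the point to numerical precision; the paper's self-calibration additionally estimates the station coordinates themselves, which enlarges the same least-squares problem.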
In the era of exponential growth of data availability, system architectures trend toward high dimensionality, and directly exploiting holistic information for state inference is not always computationally affordable. This paper proposes a novel Bayesian filtering algorithm that balances algorithmic computational cost and estimation accuracy for high-dimensional linear systems. The high-dimensional state vector is divided into several blocks to save computational resources by avoiding the calculation of error covariances with immense dimensions. After that, two sequential states are estimated simultaneously by introducing an auxiliary variable in the new probability space, mitigating the performance degradation caused by state segmentation. Moreover, the computational cost and error covariance of the proposed algorithm are analyzed analytically to show its distinct features compared with several existing methods. Simulation results illustrate that the proposed Bayesian filter can maintain higher estimation accuracy with reasonable computational cost when applied to high-dimensional linear systems.
In this paper, an antenna array composed of a circular array and an orthogonal linear array is proposed, using a long- and short-baseline orthogonal linear array design and a circular-array ambiguity resolution design based on multi-group baseline clustering. The effectiveness of the antenna array is verified by extensive simulations and experiments. After correcting the systematic deviations, it is found that in the L/S/C/X frequency bands the ambiguity resolution probability is high and the phase-difference systematic error between channels is basically the same. The angle measurement error is less than 0.5°, and the positioning error is less than 2.5 km. Notably, as the center frequency increases, calibration consistency improves, and the calibration frequency points become applicable over a wider frequency range. At a center frequency of 11.5 GHz, the calibration frequency point bandwidth extends to 1200 MHz. This combined antenna array deployment holds significant promise for a wide range of applications in contemporary wireless communication systems.
AIM: To describe the distribution of refractive errors by age and sex among schoolchildren in Soacha, Colombia. METHODS: This was an observational cross-sectional study conducted in five urban public schools in the municipality of Soacha. A total of 1161 school-aged and pre-adolescent children, aged 5-12 years, were examined during the 2021-2022 school year. Examinations included visual acuity and static refraction. The spherical equivalent (SE) was analysed as follows: myopia, SE ≤ -0.50 D and uncorrected visual acuity of 20/25 or worse; high myopia, SE ≤ -6.00 D; hyperopia, SE ≥ +1.00 D (≥7y) or SE ≥ +2.00 D (5-6y); significant hyperopia, SE ≥ +3.00 D. Astigmatism was defined as a cylinder in at least one eye ≥ 1.00 D (≥7y) or ≥ 1.75 D (5-6y). If at least one eye was ametropic, children were classified according to the refractive error found. RESULTS: Of the 1139 schoolchildren included, 50.6% were male, 58.8% were aged between 5 and 9 years, and 12.1% were already using optical correction. The most common refractive error was astigmatism (31.1%), followed by myopia (20.8%) and hyperopia (13.1%). There was no significant relationship between refractive error and sex. There was a significant increase in astigmatism (P<0.001) and myopia (P<0.0001) with age. CONCLUSION: Astigmatism is the most common refractive error in children in an urban area of Colombia. Emmetropia decreased and myopia increased with age.
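The spherical-equivalent cut-offs above translate directly into a small classifier. This sketch encodes only the SE thresholds, with the age-dependent limits as defined in the study; the additional visual-acuity requirement for myopia is deliberately omitted:

```python
def spherical_equivalent(sphere, cylinder):
    """SE in dioptres: sphere plus half the cylinder."""
    return sphere + cylinder / 2.0

def classify_se(se, age):
    """Category from the study's SE cut-offs (VA criterion for myopia omitted)."""
    hyperopia_cut = 1.00 if age >= 7 else 2.00
    if se <= -6.00:
        return "high myopia"
    if se <= -0.50:
        return "myopia"
    if se >= 3.00:
        return "significant hyperopia"
    if se >= hyperopia_cut:
        return "hyperopia"
    return "emmetropia"

def has_refractive_astigmatism(cylinder, age):
    """Cylinder in at least one eye >= 1.00 D (age >= 7) or >= 1.75 D (age 5-6)."""
    return abs(cylinder) >= (1.00 if age >= 7 else 1.75)
```

The age split matters: the same +1.25 D SE counts as hyperopia in a 9-year-old but as emmetropia in a 6-year-old under these definitions.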
Introduction: Undetected refractive errors constitute a health problem among school children, who cannot take full advantage of educational opportunities. The authors studied the prevalence of refractive errors in school children aged 5 to 15 at CHU-IOTA. Patients and Method: This is a prospective, descriptive, cross-sectional study carried out in the ophthalmic-pediatrics department of CHU-IOTA from October to November 2023. Results: We received 340 school children aged 5 to 15, among whom 111 presented ametropia, i.e., a prevalence of 32.65%. The average age was 11.42 ± 2.75 years, with a sex ratio of 0.59. The average visual acuity was 4/10 (range 1/10 to 10/10). The refractive defects found were astigmatism in 73.87% of cases, hyperopia in 23.87% and myopia in 2.25%. A decline in distance visual acuity was the most common functional sign. Ocular abnormalities associated with ametropia were dominated by allergic conjunctivitis (26.13%) and papillary excavation (6.31%) in astigmatics; allergic conjunctivitis (9.01%) and papillary excavation (7.20%) in hyperopic patients; and turbid vitreous (0.90%), myopic choroidosis (0.45%) and allergic conjunctivitis (0.45%) in myopes. Conclusion: Refractive errors are a real and major public health problem among school children.
An externally generated resonant magnetic perturbation can induce complex non-ideal MHD responses at its resonant surfaces. We have studied the plasma responses using Fitzpatrick's improved two-fluid model and the program LAYER, and calculated the error field penetration threshold for J-TEXT. In addition, we find that the island width increases slightly as the error field amplitude increases while the amplitude is below the critical penetration value; however, the island width suddenly jumps to a large value once the shielding effect of the plasma against the error field disappears after penetration. By scanning the natural mode frequency, we find that the shielding effect of the plasma decreases as the natural mode frequency decreases. Finally, we obtain the scaling of the m/n=2/1 penetration threshold with density and temperature.
Timer error, as well as its convention, is very important for dose accuracy during irradiation. This paper determines the timer error of the irradiators at the Secondary Standard Dosimetry Laboratory (SSDL) in Nigeria: the Cs-137 OB6 irradiator and the X-ray irradiator at the protection-level SSDL, and the Co-60 irradiator at the therapy-level SSDL. A PTW UNIDOS electrometer and an LS01 ionization chamber were used at the protection level to obtain doses for both the Cs-137 OB6 and X-ray irradiators, while an IBA Farmer-type ionization chamber and an IBA DOSE 1 electrometer were used at the therapy-level SSDL. The single/multiple exposure method and the graphical method were used to determine the timer error of the three irradiators. The timer error obtained for the Cs-137 OB6 irradiator was 0.48 ± 0.01 s, the timer error for the X-ray irradiator was 0.09 ± 0.01 s, and the timer error obtained for the GammaBeam X200 was 1.21 ± 0.04 s. It was observed that the timer error is not affected by source-to-detector distance or field size. It was also observed that the timer error of the Co-60 GammaBeam X200 irradiator (the only irradiator among the three with a pneumatic source transfer system) increases with the age of the machine.
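The single/multiple exposure method admits a compact closed form. If one exposure of set time t accumulates reading m1 = R(t + e) and n consecutive exposures totalling the same set time accumulate mn = R(t + n·e), the unknown dose rate R cancels and the timer error e follows directly. The numbers below are synthetic, merely echoing the magnitude reported for the Cs-137 irradiator:

```python
def timer_error(m_single, m_multi, t, n):
    """Solve m_single = R*(t + e) and m_multi = R*(t + n*e) for e; R cancels:
    e = t*(m_multi - m_single) / (n*m_single - m_multi)."""
    return t * (m_multi - m_single) / (n * m_single - m_multi)

# Synthetic check: rate 2.0 units/s, set time 60 s, true timer error 0.48 s.
rate, t, e_true, n = 2.0, 60.0, 0.48, 5
m1 = rate * (t + e_true)            # one 60 s exposure
m5 = rate * (t + n * e_true)        # five 12 s exposures, each adding e once
e_est = timer_error(m1, m5, t, n)   # recovers 0.48
```

Because R cancels, the estimate is independent of source-to-detector distance and field size, consistent with the observation in the abstract.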
This work presents a comprehensive second-order predictive modeling (PM) methodology designated by the acronym 2nd-BERRU-PMD. The attribute "2nd" indicates that this methodology incorporates second-order uncertainties (means and covariances) and second-order sensitivities of computed model responses to model parameters. The acronym BERRU stands for "Best-Estimate Results with Reduced Uncertainties," and the last letter ("D") in the acronym indicates "deterministic," referring to the deterministic inclusion of the computational model responses. The 2nd-BERRU-PMD methodology is fundamentally based on the maximum entropy (MaxEnt) principle. This principle is in contradistinction to the fundamental principle underlying the extant data assimilation and/or adjustment procedures, which minimize, in a least-squares sense, a subjective user-defined functional meant to represent the discrepancies between measured and computed model responses. It is shown that the 2nd-BERRU-PMD methodology generalizes and extends current data assimilation and/or data adjustment procedures while overcoming their fundamental limitations. In the accompanying work (Part II), the alternative framework for developing the "second-order MaxEnt predictive modeling methodology" is presented by incorporating the computed model responses probabilistically (as opposed to deterministically).
This work presents a comprehensive second-order predictive modeling (PM) methodology based on the maximum entropy (MaxEnt) principle for obtaining best-estimate mean values and correlations for model responses and parameters. This methodology is designated by the acronym 2nd-BERRU-PMP, where the attribute "2nd" indicates that it incorporates second-order uncertainties (means and covariances) and second- (and higher-) order sensitivities of computed model responses to model parameters. The acronym BERRU stands for "Best-Estimate Results with Reduced Uncertainties," and the last letter ("P") in the acronym indicates "probabilistic," referring to the MaxEnt probabilistic inclusion of the computational model responses. This is in contradistinction to the 2nd-BERRU-PMD methodology, which deterministically combines the computed model responses with the experimental information, as presented in the accompanying work (Part I). Although both the 2nd-BERRU-PMP and 2nd-BERRU-PMD methodologies yield expressions that include second- (and higher-) order sensitivities of responses to model parameters, the respective expressions for the predicted responses, the calibrated predicted parameters and their predicted uncertainties (covariances) are not identical. Nevertheless, the results predicted by both methodologies encompass, as particular cases, the results produced by the extant data assimilation and data adjustment procedures, which rely on the minimization, in a least-squares sense, of a user-defined functional meant to represent the discrepancies between measured and computed model responses.
The widespread adoption of the Internet of Things (IoT) has transformed various sectors globally, making them more intelligent and connected. However, this advancement comes with challenges related to the effectiveness of IoT devices. These devices, present in offices, homes, industries, and more, need constant monitoring to ensure their proper functionality. The success of smart systems relies on their seamless operation and ability to handle faults. Sensors, crucial components of these systems, gather data and contribute to their functionality. Therefore, sensor faults can compromise a system's reliability and undermine the trustworthiness of smart environments. To address these concerns, various techniques and algorithms can be employed to enhance the performance of IoT devices through effective fault detection. This paper conducted a thorough review of the existing literature and a detailed analysis. This analysis effectively links sensor errors with the prominent fault detection techniques capable of addressing them. This study is innovative because it paves the way for future researchers to explore errors that have not yet been tackled by existing fault detection methods. Significantly, the paper also highlights essential factors for selecting and adopting fault detection techniques, as well as the characteristics of datasets and their corresponding recommended techniques. Additionally, the paper presents a methodical overview of fault detection techniques employed in smart devices, including the metrics used for evaluation. Furthermore, the paper examines the body of academic work related to sensor faults and fault detection techniques within the domain. This reflects the growing inclination and scholarly attention of researchers and academicians toward strategies for fault detection within the realm of the Internet of Things.
Funding (MSRF sea ice study): the National Key Research and Development Program of China under contract No. 2023YFC3107701, and the National Natural Science Foundation of China under contract No. 42375143.
Abstract: In this article we consider the asymptotic behavior of the extreme value distribution with extreme value index γ > 0. The rates of uniform convergence to the Fréchet distribution are established under the second-order regular variation condition.
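For reference, the limit law in question is the standard Fréchet family of extreme value theory; the notation below is the standard one and is supplied here as context, not quoted from the article:

```latex
% Fréchet extreme value distribution with index \gamma > 0
% (standard EVT notation, assumed rather than taken from the article).
\Phi_{\gamma}(x) =
\begin{cases}
\exp\!\left(-x^{-1/\gamma}\right), & x > 0,\\[2pt]
0, & x \le 0.
\end{cases}
```

The second-order regular variation condition then quantifies how fast the normalized maxima approach this limit, which is what makes uniform convergence rates obtainable.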
Funding: Project supported by the National Natural Science Foundation of China (No. 12272323).
Abstract: Second-order axially moving systems are common models in the field of dynamics, such as axially moving strings, cables, and belts. In traditional research work, it is difficult to obtain closed-form solutions for the forced vibration when the damping effect and the coupling effect of multiple second-order models are considered. In this paper, Green's function method based on the Laplace transform is used to obtain closed-form solutions for the forced vibration of second-order axially moving systems. Taking the axially moving damped string system and a multi-string system connected by springs as examples, the detailed solution methods and the analytical Green's functions of these second-order systems are given. The mode functions and frequency equations are also obtained from the derived Green's functions. The reliability and convenience of the results are verified by several examples. This paper provides a systematic analytical method for the dynamic analysis of second-order axially moving systems, and the obtained Green's functions are applicable to different second-order systems rather than just string systems. In addition, this work also has positive significance for the study of the forced vibration of high-order systems.
Abstract: AIM: To investigate the prevalence of visual impairment (VI) and provide an estimation of uncorrected refractive errors in school-aged children, conducted by optometry students as a community service. METHODS: The study was cross-sectional. In total, 3343 participants were included. The initial examination involved assessing the uncorrected distance visual acuity (UDVA) and the visual acuity (VA) while using a +2.00 D lens. The inclusion criteria for a subsequent comprehensive cycloplegic eye examination, performed by an optometrist, were as follows: a UDVA < 0.6 decimal (0.20 logMAR) and/or a VA with +2.00 D ≥ 0.8 decimal (0.96 logMAR). RESULTS: The sample had a mean age of 10.92±2.13 years (range 4 to 17 years), and 51.3% of the children were female (n=1715). The majority of the children (89.7%) fell within the age range of 8 to 14 years. Among the ethnic groups, the highest representation was from the Luhya group (60.6%), followed by the Luo (20.4%). The mean logMAR UDVA, choosing the best eye for each student, was 0.29±0.17 (range 1.70 to 0.22). Out of the total, 246 participants (7.4%) had a full eye examination. The estimated prevalence of myopia (defined as spherical equivalent ≤ -0.5 D) was 1.45% of the total sample, while around 0.18% of the total sample had a hyperopia value exceeding +1.75 D. Refractive astigmatism (cyl < -0.75 D) was found in 0.21% (7/3343) of the children. The VI prevalence was 1.26% of the total sample. Among our cases of VI, 76.2% could be attributed to uncorrected refractive error. Amblyopia was detected in 0.66% (22/3343) of the screened children. There was no statistically significant correlation observed between age or gender and refractive values. CONCLUSION: The primary cause of VI is determined to be uncorrected refractive errors, with myopia being the most prevalent refractive error observed. These findings underscore the significance of early identification and correction of refractive errors in school-aged children as a means to alleviate the impact of VI.
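The thresholds quoted in this abstract can be turned into a small classification sketch. The spherical-equivalent formula SE = sphere + cylinder/2 is the standard optometric convention and is an assumption here (the abstract does not state it); the cutoffs themselves are the ones given above.

```python
# Hedged sketch of the screening cutoffs quoted in the abstract:
# myopia SE <= -0.5 D, hyperopia SE > +1.75 D, astigmatism cyl < -0.75 D.
# SE = sphere + cylinder/2 is the usual convention, assumed here.

def spherical_equivalent(sphere, cylinder):
    return sphere + cylinder / 2.0

def classify(sphere, cylinder):
    se = spherical_equivalent(sphere, cylinder)
    labels = []
    if se <= -0.5:
        labels.append("myopia")
    if se > 1.75:
        labels.append("hyperopia")
    if cylinder < -0.75:
        labels.append("refractive astigmatism")
    return labels or ["within screening limits"]

print(classify(-1.00, -0.50))   # SE = -1.25 D, below the myopia cutoff
```

Note that a single eye can carry more than one label (e.g. myopia together with astigmatism), which is why the function returns a list rather than a single category.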
Funding: Supported by the National Natural Science Foundation of China (61601147) and the Beijing Natural Science Foundation (L182032).
Abstract: In this paper, an efficient unequal error protection (UEP) scheme for online fountain codes is proposed. In the build-up phase, a traversing-selection strategy is proposed to select the most important symbols (MIS). Then, in the completion phase, a weighted-selection strategy is applied to provide low overhead. The performance of the proposed scheme is analyzed and compared with that of the existing UEP online fountain scheme. Simulation results show that, in terms of the MIS and the least important symbols (LIS), when the bit error ratio is 10^-4, the proposed scheme achieves 85% and 31.58% overhead reduction, respectively.
Funding: This work is funded by the National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062) and the National Science Fund for Distinguished Young Scholars of China (Grant No. 52222905).
Abstract: In existing landslide susceptibility prediction (LSP) models, the influences of random errors in the landslide conditioning factors on LSP are not considered; instead, the original conditioning factors are directly taken as model inputs, which brings uncertainties to the LSP results. This study aims to reveal how different proportions of random error in the conditioning factors influence the LSP uncertainties, and further to explore a method that can effectively reduce the random errors in the conditioning factors. First, the original conditioning factors are used to construct original factor-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding error-based LSP models. Second, low-pass filter-based LSP models are constructed by eliminating the random errors using a low-pass filter method. Third, Ruijin County of China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e., multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as the LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) the low-pass filter can effectively reduce the random errors in the conditioning factors and thereby decrease the LSP uncertainties; (2) as the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously; (3) the original factor-based models are feasible for LSP in the absence of more accurate conditioning factors; (4) the influences of the two uncertainty sources, the machine learning models and the different proportions of random errors, on LSP modeling are large and roughly equal; and (5) the Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in the conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
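The core idea of the experiment (add proportional random error, then remove it by low-pass filtering) can be illustrated on a synthetic 1-D profile. The smooth "conditioning factor", the 10% uniform noise, and the simple moving-average filter are all illustrative assumptions; the paper's actual filter design and factor data are not reproduced here.

```python
# Hedged illustration only: proportional random noise is added to a
# smooth synthetic conditioning-factor profile and partly removed by a
# moving-average low-pass filter; the error metric is mean absolute error.
import math
import random

random.seed(0)
truth = [math.sin(i / 10.0) + 2.0 for i in range(200)]          # smooth factor
noisy = [v * (1 + random.uniform(-0.10, 0.10)) for v in truth]  # 10% random error

def moving_average(x, k=5):
    """Simple symmetric moving average; windows shrink at the edges."""
    half = k // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

smoothed = moving_average(noisy)
mae = lambda a, b: sum(abs(p - q) for p, q in zip(a, b)) / len(a)
print(mae(noisy, truth), mae(smoothed, truth))  # smoothing reduces the error
```

The averaging suppresses the zero-mean noise roughly by the square root of the window size while introducing only a small bias where the underlying factor curves, which is why the filtered factors carry less random error than the raw ones.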
Funding: The National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062) and the Interdisciplinary Innovation Fund of Natural Science, Nanchang University (Grant No. 9167-28220007-YB2107).
Abstract: The accuracy of landslide susceptibility prediction (LSP) mainly depends on the precision of the landslide spatial position. However, spatial position errors in landslide surveys are inevitable, resulting in considerable uncertainties in LSP modeling. To overcome this drawback, this study explores the influence of positional errors in the landslide spatial position on LSP uncertainties, and then innovatively proposes a semi-supervised machine learning model to reduce the landslide spatial position error. This paper collected 16 environmental factors and 337 landslides with accurate spatial positions, taking Shangyou County of China as an example. The 30-110 m error-based multilayer perceptron (MLP) and random forest (RF) models for LSP are established by randomly offsetting the original landslides by 30, 50, 70, 90 and 110 m. The LSP uncertainties are analyzed through the LSP accuracy and distribution characteristics. Finally, a semi-supervised model is proposed to relieve the LSP uncertainties. The results show that: (1) the LSP accuracies of the error-based RF/MLP models decrease with increasing landslide position errors, and are lower than those of the original data-based models; (2) the 70 m error-based models can still reflect the overall distribution characteristics of the landslide susceptibility indices, so original landslides with certain position errors are acceptable for LSP; and (3) the semi-supervised machine learning model can efficiently reduce the landslide position errors and thus improve the LSP accuracies.
Funding: Project supported by the Executive Agency for Higher Education, Research, Development and Innovation Funding of Romania (No. PN-III-P4-PCE-2021-0993).
Abstract: This study is concerned with the three-dimensional (3D) stagnation-point mixed convection flow past a vertical surface, considering first-order and second-order velocity slips. To the authors' knowledge, this is the first study presenting this very interesting analysis. The nonlinear partial differential equations for the flow problem are transformed into nonlinear ordinary differential equations (ODEs) by using an appropriate similarity transformation. These ODEs with the corresponding boundary conditions are numerically solved by utilizing the bvp4c solver in the MATLAB programming language. The effects of the governing parameters on the non-dimensional velocity profiles, temperature profiles, skin friction coefficients, and the local Nusselt number are presented in detail through a series of graphs and tables. Interestingly, it is reported that the reduced skin friction coefficient decreases for the assisting flow situation and increases for the opposing flow situation. The numerical computations of the present work are compared with those from other research available for specific situations, and an excellent consensus is observed. Another exciting feature of this work is the existence of dual solutions. An important remark is that the dual solutions exist for both assisting and opposing flows. A linear stability analysis is performed, showing that one solution is stable and the other is not. We notice that the mixed convection and velocity slip parameters have strong effects on the flow characteristics. These effects are depicted in graphs and discussed in this paper. The obtained results show that the first-order and second-order slip parameters have a considerable effect on the flow, as well as on the heat transfer characteristics.
Abstract: In this paper, we define some new sets of non-elementary functions in a group of solutions x(t) that are sine and cosine to the upper limit of integration in a non-elementary integral that can be arbitrary. We use Abel's methods, as described by Armitage and Eberlein. The key is to start with a non-elementary integral function, differentiating and inverting, and then to define a set of three functions that belong together. Differentiating these functions twice gives second-order nonlinear ODEs that have the defined set of functions as solutions. We study some of these second-order nonlinear ODEs, especially those that exhibit limit cycles. Using the methods described in this paper, it is possible to define many other sets of non-elementary functions that give solutions to some second-order nonlinear autonomous ODEs.
Abstract: Phasor measurement units (PMUs) provide Global Positioning System (GPS) time-stamped synchronized measurements of voltage and current, together with the phase angle of the system, at certain points along the grid. These synchronized measurements are extracted in the form of amplitude and phase from various locations of the power grid to monitor and control the power system condition. A PMU device is a crucial part of the power equipment in terms of cost and from an operational point of view. However, ongoing development and improvement of the PMU's principal operation are essential for network operators to enhance grid quality and reduce operating expenses. This paper introduces a proposed method that leads to a low-cost and less complex technique for optimizing the performance of a PMU using a second-order Kalman filter. It is based on the asynchrophasor technique, resulting in phase error minimization when receiving the signal from an access point or from the main access point. A MATLAB model has been created to implement the proposed method in the presence of Gaussian and non-Gaussian noise. The results show that the proposed second-order Kalman filter method outperforms the existing model. The results were evaluated using the mean square error (MSE). The proposed second-order Kalman filter has been substituted for the synchronization unit in the PMU structure to clarify the significance of the proposed new PMU.
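As background for what "second-order Kalman filter" means in a phase-tracking context, here is a generic two-state (phase, phase-rate) Kalman filter on a noisy ramp. The state model, the noise levels, and the scalar measurement are illustrative assumptions; this is not the paper's PMU model.

```python
# Hedged sketch of a second-order (constant-rate) Kalman filter tracking
# a noisy phase ramp; state = [phase, phase rate]. Matrices and noise
# variances are illustrative, not taken from the paper.
import random

random.seed(1)
dt, q, r = 1.0, 1e-6, 0.04  # time step, process noise var, measurement noise var

def kalman_step(x, P, z):
    """One predict/update cycle with F = [[1, dt], [0, 1]], H = [1, 0]."""
    # Predict
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    # Update with scalar measurement z of the phase
    s = Pp[0][0] + r                   # innovation variance
    k = [Pp[0][0] / s, Pp[1][0] / s]   # Kalman gain
    y = z - xp[0]                      # innovation
    xn = [xp[0] + k[0] * y, xp[1] + k[1] * y]
    Pn = [[(1 - k[0]) * Pp[0][0], (1 - k[0]) * Pp[0][1]],
          [Pp[1][0] - k[1] * Pp[0][0], Pp[1][1] - k[1] * Pp[0][1]]]
    return xn, Pn

x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
true_rate = 0.05
for t in range(1, 200):
    z = true_rate * t + random.gauss(0.0, r ** 0.5)
    x, P = kalman_step(x, P, z)
print(x)  # phase estimate near 0.05 * 199, rate estimate near 0.05
```

Because the constant-rate model matches a linear phase ramp exactly, the filter tracks it with no steady-state lag, which is the property that makes a second-order state useful for phase/frequency estimation.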
Funding: Supported by the Natural Science Foundation of Shaanxi Province of China (Grant No. 2021JM010) and the Suzhou Municipal Natural Science Foundation of China (Grant Nos. SYG202018 and SYG202134).
Abstract: Laser tracers are three-dimensional coordinate measurement systems that are widely used in industrial measurement. We propose a geometric error identification method based on multi-station synchronized laser tracers to enable the rapid and high-precision measurement of geometric errors for gantry-type computer numerical control (CNC) machine tools. This method also addresses the measurement efficiency issues of the existing single-base-station measurement method and multi-base-station time-sharing measurement method. We consider a three-axis gantry-type CNC machine tool, and the geometric error mathematical model is derived and established based on a combination of screw theory and a topological analysis of the machine kinematic chain. The positions of the four laser tracer stations and the measurement points are determined based on the multi-point positioning principle. A self-calibration algorithm is proposed for the coordinate calibration of the laser tracers using the Levenberg-Marquardt nonlinear least squares method, and the geometric error is solved using Taylor's first-order linearization iteration. The experimental results show that the geometric error calculated with this modeling method is comparable to the results from the Etalon laser tracer. For a volume of 800 mm × 1000 mm × 350 mm, the maximum differences of the linear, angular, and spatial position errors were 2.0 μm, 2.7 μrad, and 12.0 μm, respectively, which verifies the accuracy of the proposed algorithm. This research proposes a modeling method for the precise measurement of machine tool errors, and the applied nature of this study also makes it relevant both to researchers and to those in the industrial sector.
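The multi-point positioning principle mentioned above reduces, in its simplest form, to recovering a point from distance measurements to known stations by nonlinear least squares. The 2-D setup, three stations, and undamped Gauss-Newton iteration below are simplifying assumptions for illustration; the paper works in 3-D with four tracers and Levenberg-Marquardt damping.

```python
# Hedged 2-D illustration of multi-point positioning: recover a point
# from its distances to three known stations via Gauss-Newton least
# squares. Station layout and target are synthetic.
import math

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = (3.0, 4.0)
ranges = [math.dist(target, s) for s in stations]   # noise-free distances

def gauss_newton(stations, ranges, guess=(1.0, 1.0), iters=20):
    x, y = guess
    for _ in range(iters):
        # Residuals r_i = |p - s_i| - d_i and their Jacobian rows
        J, r = [], []
        for (sx, sy), d in zip(stations, ranges):
            dist = math.hypot(x - sx, y - sy)
            J.append(((x - sx) / dist, (y - sy) / dist))
            r.append(dist - d)
        # Normal equations (J^T J) delta = -J^T r, solved in closed form (2x2)
        a = sum(j[0] * j[0] for j in J); b = sum(j[0] * j[1] for j in J)
        c = sum(j[1] * j[1] for j in J)
        g0 = sum(j[0] * ri for j, ri in zip(J, r))
        g1 = sum(j[1] * ri for j, ri in zip(J, r))
        det = a * c - b * b
        x -= ( c * g0 - b * g1) / det
        y -= (-b * g0 + a * g1) / det
    return x, y

print(gauss_newton(stations, ranges))  # converges toward (3.0, 4.0)
```

Levenberg-Marquardt, as used in the paper, adds a damping term to these normal equations to keep the iteration stable when the Jacobian is ill-conditioned; with clean geometry, plain Gauss-Newton already converges rapidly.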
Funding: Supported in part by the National Key R&D Program of China (2022YFC3401303), the Natural Science Foundation of Jiangsu Province (BK20211528), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KFCX22_2300).
Abstract: In the era of exponentially growing data availability, system architectures trend toward high dimensionality, and directly exploiting holistic information for state inference is not always computationally affordable. This paper proposes a novel Bayesian filtering algorithm that considers both algorithmic computational cost and estimation accuracy for high-dimensional linear systems. The high-dimensional state vector is divided into several blocks to save computation resources by avoiding the calculation of an error covariance of immense dimensions. After that, two sequential states are estimated simultaneously by introducing an auxiliary variable in the new probability space, mitigating the performance degradation caused by state segmentation. Moreover, the computational cost and error covariance of the proposed algorithm are analyzed analytically to show its distinct features compared with several existing methods. Simulation results illustrate that the proposed Bayesian filter can maintain higher estimation accuracy at reasonable computational cost when applied to high-dimensional linear systems.
Abstract: In this paper, an antenna array composed of a circular array and an orthogonal linear array is proposed, using a long- and short-baseline orthogonal linear array design and a circular-array ambiguity resolution design based on multi-group baseline clustering. The effectiveness of the proposed antenna array is verified by extensive simulation and experiment. After the system deviation correction, it is found that in the L/S/C/X frequency bands the ambiguity resolution probability is high, and the phase-difference system error between channels is basically the same. The angle measurement error is less than 0.5°, and the positioning error is less than 2.5 km. Notably, as the center frequency increases, calibration consistency improves, and the calibration frequency points become applicable over a wider frequency range. At a center frequency of 11.5 GHz, the calibration frequency point bandwidth extends to 1200 MHz. This combined antenna array deployment holds significant promise for a wide range of applications in contemporary wireless communication systems.
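The role of short versus long baselines in such arrays comes from single-baseline phase interferometry: the arrival angle follows from the inter-element phase difference, but baselines longer than half a wavelength make that phase ambiguous modulo 2π (hence the multi-baseline ambiguity resolution). The wavelength and geometry below are illustrative assumptions, not the paper's array parameters.

```python
# Hedged sketch of single-baseline phase interferometry:
# dphi = 2*pi*d*sin(theta)/lam, invertible without ambiguity only
# when the baseline d is at most half a wavelength.
import math

def phase_difference(theta_deg, d, lam):
    """Ideal phase difference (rad) for arrival angle theta over baseline d."""
    return 2 * math.pi * d * math.sin(math.radians(theta_deg)) / lam

def angle_from_phase(dphi, d, lam):
    """Invert the phase difference; unambiguous only when d <= lam/2."""
    return math.degrees(math.asin(dphi * lam / (2 * math.pi * d)))

lam = 0.15        # wavelength in metres near 2 GHz (illustrative)
d = lam / 2       # short, unambiguous baseline
dphi = phase_difference(30.0, d, lam)
print(angle_from_phase(dphi, d, lam))  # recovers the 30-degree arrival angle
```

A long baseline gives finer angular resolution but a wrapped phase; combining it with a short baseline (or with clustered baseline groups, as in the paper) resolves the integer ambiguity while keeping the precision.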
Funding: Supported by the OneSight EssilorLuxottica Foundation.
Abstract: AIM: To describe the distribution of refractive errors by age and sex among schoolchildren in Soacha, Colombia. METHODS: This was an observational cross-sectional study conducted in five urban public schools in the municipality of Soacha. A total of 1161 school-aged and pre-adolescent children, aged 5-12 years, were examined during the 2021-2022 school year. Examinations included visual acuity and static refraction. The spherical equivalent (SE) was analysed as follows: myopia, SE ≤ -0.50 D with uncorrected visual acuity of 20/25 or worse; high myopia, SE ≤ -6.00 D; hyperopia, SE ≥ +1.00 D (≥7y) or SE ≥ +2.00 D (5-6y); significant hyperopia, SE ≥ +3.00 D. Astigmatism was defined as a cylinder in at least one eye of ≥1.00 D (≥7y) or ≥1.75 D (5-6y). If at least one eye was ametropic, children were classified according to the refractive error found. RESULTS: Of the 1139 schoolchildren included, 50.6% were male, 58.8% were aged between 5 and 9 years, and 12.1% were already using optical correction. The most common refractive error was astigmatism (31.1%), followed by myopia (20.8%) and hyperopia (13.1%). There was no significant relationship between refractive error and sex. There was a significant increase in astigmatism (P<0.001) and myopia (P<0.0001) with age. CONCLUSION: Astigmatism is the most common refractive error in children in an urban area of Colombia. Emmetropia decreased and myopia increased with age.
Abstract: Introduction: Undetected refractive errors constitute a health problem among school children, who cannot take full advantage of educational opportunities. The authors studied the prevalence of refractive errors in school children aged 5 to 15 at CHU-IOTA. Patients and Method: This is a prospective, descriptive, cross-sectional study carried out in the ophthalmic-pediatrics department of CHU-IOTA from October to November 2023. Results: We received 340 school children aged 5 to 15, among whom 111 presented ametropia, i.e., a prevalence of 32.65%. The average age was 11.42 ± 2.75 years, with a sex ratio of 0.59. The average visual acuity was 4/10 (range 1/10 to 10/10). We found the following refractive defects: astigmatism in 73.87%, hyperopia in 23.87%, and myopia in 2.25% of cases. A decline in distance visual acuity was the most common functional sign. Ocular abnormalities associated with ametropia were dominated by allergic conjunctivitis (26.13%) and papillary excavation (6.31%) in astigmats; allergic conjunctivitis (9.01%) and papillary excavation (7.20%) in hyperopic patients; and turbid vitreous (0.90%), myopic choroidosis (0.45%) and allergic conjunctivitis (0.45%) in myopes. Conclusion: Refractive errors constitute a reality and a major public health problem among school children.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 51821005).
Abstract: An externally generated resonant magnetic perturbation can induce complex non-ideal MHD responses at its resonant surfaces. We have studied the plasma responses using Fitzpatrick's improved two-fluid model and the program LAYER. We calculated the error field penetration threshold for J-TEXT. In addition, we find that the island width increases slightly as the error field amplitude increases while the amplitude remains below the critical penetration value. However, the island width suddenly jumps to a large value once the shielding effect of the plasma against the error field disappears after penetration. By scanning the natural mode frequency, we find that the shielding effect of the plasma decreases as the natural mode frequency decreases. Finally, we obtain the m/n = 2/1 penetration threshold scaling with density and temperature.
Abstract: Timer error, as well as its convention, is very important for dose accuracy during irradiation. This paper determines the timer errors of the irradiators at the Secondary Standard Dosimetry Laboratory (SSDL) in Nigeria. The irradiators are the Cs-137 OB6 irradiator and the X-ray irradiator at the protection-level SSDL, and the Co-60 irradiator at the therapy-level SSDL. A PTW UNIDOS electrometer and an LS01 ionization chamber were used at the protection level to obtain doses for both the Cs-137 OB6 and X-ray irradiators, while an IBA Farmer-type ionization chamber and an IBA DOSE 1 electrometer were used at the therapy-level SSDL. The single/multiple exposure method and the graphical method were used to determine the timer errors of the three irradiators. The timer error obtained for the Cs-137 OB6 irradiator was 0.48 ± 0.01 s, the timer error for the X-ray irradiator was 0.09 ± 0.01 s, and the timer error obtained for the GammaBeam X200 was 1.21 ± 0.04 s. It was observed that the timer error is not affected by the source-to-detector distance; neither source-to-detector distance nor field size contributes to the timer error of the irradiators. The timer error of the Co-60 GammaBeam X200 irradiator (the only irradiator among those studied with a pneumatic source transfer system) increases with the age of the irradiator.
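The graphical method referred to above rests on the linear relation M = R·(t_set + τ): plotting the integrated reading M against the set time gives the dose rate R as the slope and the timer error τ as intercept/slope. The sketch below demonstrates this with synthetic, noise-free readings; the rate and exposure times are illustrative assumptions, not SSDL data.

```python
# Hedged sketch of the graphical timer-error method: readings obey
# M = R * (t_set + tau), so a straight-line fit of reading against set
# time yields R as the slope and tau as intercept/slope.

def fit_line(ts, ms):
    """Ordinary least squares slope and intercept."""
    n = len(ts)
    mt, mm = sum(ts) / n, sum(ms) / n
    slope = (sum((t - mt) * (m - mm) for t, m in zip(ts, ms))
             / sum((t - mt) ** 2 for t in ts))
    return slope, mm - slope * mt

tau_true, rate = 0.48, 2.0                 # s, nC/s (synthetic values)
t_set = [10, 20, 30, 60, 100]              # set exposure times in seconds
reading = [rate * (t + tau_true) for t in t_set]

slope, intercept = fit_line(t_set, reading)
print(intercept / slope)  # recovered timer error, about 0.48 s
```

With real (noisy) charge readings the same fit averages out measurement scatter, which is why the graphical method and the multiple-exposure method are expected to agree within their uncertainties.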
Abstract: This work presents a comprehensive second-order predictive modeling (PM) methodology designated by the acronym 2nd-BERRU-PMD. The attribute "2nd" indicates that this methodology incorporates second-order uncertainties (means and covariances) and second-order sensitivities of computed model responses to model parameters. The acronym BERRU stands for "Best-Estimate Results with Reduced Uncertainties," and the last letter ("D") in the acronym indicates "deterministic," referring to the deterministic inclusion of the computational model responses. The 2nd-BERRU-PMD methodology is fundamentally based on the maximum entropy (MaxEnt) principle. This principle is in contradistinction to the fundamental principle underlying the extant data assimilation and/or adjustment procedures, which minimize in a least-squares sense a subjective user-defined functional meant to represent the discrepancies between measured and computed model responses. It is shown that the 2nd-BERRU-PMD methodology generalizes and extends current data assimilation and/or data adjustment procedures while overcoming their fundamental limitations. In the accompanying work (Part II), the alternative framework for developing the "second-order MaxEnt predictive modeling methodology" is presented by incorporating the computed model responses probabilistically (as opposed to deterministically).
Abstract: This work presents a comprehensive second-order predictive modeling (PM) methodology based on the maximum entropy (MaxEnt) principle for obtaining best-estimate mean values and correlations for model responses and parameters. This methodology is designated by the acronym 2nd-BERRU-PMP, where the attribute "2nd" indicates that it incorporates second-order uncertainties (means and covariances) and second- (and higher-) order sensitivities of computed model responses to model parameters. The acronym BERRU stands for "Best-Estimate Results with Reduced Uncertainties," and the last letter ("P") in the acronym indicates "probabilistic," referring to the MaxEnt probabilistic inclusion of the computational model responses. This is in contradistinction to the 2nd-BERRU-PMD methodology, which deterministically combines the computed model responses with the experimental information, as presented in the accompanying work (Part I). Although both the 2nd-BERRU-PMP and the 2nd-BERRU-PMD methodologies yield expressions that include second- (and higher-) order sensitivities of responses to model parameters, the respective expressions for the predicted responses, for the calibrated predicted parameters, and for their predicted uncertainties (covariances) are not identical to each other. Nevertheless, the results predicted by both methodologies encompass, as particular cases, the results produced by the extant data assimilation and data adjustment procedures, which rely on the minimization, in a least-squares sense, of a user-defined functional meant to represent the discrepancies between measured and computed model responses.