Sometimes boundary value problems have isolated regions where the solution changes rapidly. Therefore, when solving numerically, one needs a fine grid to capture the high activity. The fine grid can be implemented as a composite coarse-fine grid or as a global fine grid. A cheaper way of obtaining the composite grid solution is the local defect correction technique. The technique is an algorithm that combines a global coarse grid solution and a local fine grid solution in an iterative way to estimate the solution on the corresponding composite grid. The algorithm is relatively new and its convergence properties have not been studied for the boundary element method. In this paper the objective is to determine the convergence properties of the algorithm for the boundary element method. First, we formulate the algorithm as a fixed point iterative scheme, which has also not been done before for the boundary element method, and then study the properties of the iteration matrix. Results show that we can always expect convergence. Therefore, the algorithm opens up a real alternative for application in the boundary element method for problems with localised regions of high activity.
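Since the analysis rests on formulating local defect correction as a fixed point iteration, the generic scheme and its textbook convergence criterion are worth stating (generic notation, not the paper's specific BEM operators):

```latex
u^{(k+1)} = M\,u^{(k)} + c,
\qquad
u^{(k)} \to u^{*} \iff \rho(M) < 1,
```

where M is the iteration matrix, c collects the boundary data, and ρ(M) is the spectral radius; "studying the properties of the iteration matrix" amounts to bounding ρ(M) below one.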
Suspended particulate matter (SPM) is regarded as an energy source and a water quality indicator in coastal and marine ecosystems. To estimate SPM from ocean color sensors and land observing satellites, an accurate and robust atmospheric correction must be done. We evaluated the capabilities of ocean color and land observing satellites for estimating SPM concentrations over the Louisiana continental shelf in the northern Gulf of Mexico, using the Operational Land Imager (OLI) on Landsat-8 and the Moderate Resolution Imaging Spectroradiometer (MODIS) on Aqua. In high turbidity waters, the traditional atmospheric correction algorithms based on near-infrared (NIR) bands underestimate SPM concentrations due to inaccurate removal of the aerosol contribution to the top-of-atmosphere signals. Therefore, atmospheric correction in high turbidity waters is a challenge. Four atmospheric correction algorithms were applied to remote sensing reflectance (Rrs) values to select a suitable atmospheric correction algorithm for each sensor in our study area. We evaluated the short-wave infrared (SWIR) and NIR atmospheric correction algorithms on Rrs products from Landsat-8 OLI, and the Management Unit of the North Sea Mathematical Models (MUMM) and SWIR-NIR atmospheric correction algorithms on Rrs products from MODIS-Aqua. SPM was retrieved with a band-ratio SPM-retrieval algorithm for each sensor. Our results indicated that the SWIR atmospheric correction algorithm was the suitable algorithm for Landsat-8 OLI, and the SWIR-NIR atmospheric correction algorithm outperformed the MUMM algorithm for MODIS-Aqua.
In this paper, we present two new algorithms in residue number systems for scaling and error correction. The first algorithm is the Cyclic Property of Residue-Digit Difference (CPRDD). It is used to speed up residue multiple error correction due to its parallel processes. The second is called the Target Race Distance (TRD). It is used to speed up residue scaling. Both algorithms work without the Mixed Radix Conversion (MRC) or Chinese Remainder Theorem (CRT) techniques, which are time-consuming and add hardware complexity. Furthermore, the residue scaling can be performed in parallel for any combination of moduli set members without using lookup tables.
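For readers unfamiliar with residue number systems, the sketch below shows the digit-parallel, carry-free arithmetic that both algorithms exploit; the moduli set is an arbitrary example, and CRT reconstruction appears only to verify the result (avoiding CRT/MRC is precisely the paper's point):

```python
from math import prod

moduli = (7, 9, 11, 13)          # pairwise-coprime example moduli, M = 9009

def to_rns(x):
    """Encode an integer as its residues modulo each modulus."""
    return tuple(x % m for m in moduli)

def add_rns(a, b):
    """Digit-parallel add: each residue channel is independent (no carries)."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, moduli))

def crt(residues):
    """Reconstruct the integer (used here only to check the RNS result)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m): modular inverse (Python 3.8+)
    return x % M

a, b = 1234, 567
assert crt(add_rns(to_rns(a), to_rns(b))) == (a + b) % prod(moduli)
```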
This paper presents a neighborhood optimal trajectory online correction algorithm considering terminal time variation, and investigates its application range. Firstly, the motion model of midcourse guidance is established, and the online trajectory correction-regenerating strategy is introduced. Secondly, based on neighborhood optimal control theory, a neighborhood optimal trajectory online correction algorithm considering terminal time variation is proposed by adding the terminal time variation to the traditional neighborhood optimal trajectory correction method. Thirdly, the Monte Carlo simulation method is used to analyze the application range of the algorithm, which provides a basis for dividing the application domains of the online correction algorithm and the online regeneration algorithm for the midcourse guidance trajectory. Finally, the simulation results show that the algorithm has high real-time performance, and the online corrected trajectory can meet the requirements of terminal constraint changes. The application range of the algorithm is obtained through Monte Carlo simulation.
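The abstract does not state the correction law, but neighboring (perturbation) optimal control is conventionally a linear feedback on the deviation from the nominal trajectory; a generic form, with a hypothetical extra term standing in for the terminal-time variation this paper adds, is

```latex
\delta u(t) = -K_x(t)\,\delta x(t) - K_{t_f}(t)\,\delta t_f ,
```

where δx(t) is the state deviation, δt_f the terminal-time change, and K_x, K_{t_f} are gain histories obtained from a second-order expansion about the nominal optimum; how those gains are constructed is the paper's contribution and is not reproduced here.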
Objective: Accurate measurement of the QT interval, the ventricular action potential from depolarization to repolarization, is important for the early detection of Long QT syndrome. The most effective QT correction (QTc) formula has yet to be determined in the pediatric population, although it has intrinsically greater extremes in heart rate (HR) and is more susceptible to errors in measurement. The authors of this study compare six different QTc methods (Bazett, Fridericia, Framingham, Hodges, Rautaharju, and a computer algorithm utilizing the Bazett formula) for consistency against variations in HR and RR interval. Methods: Descriptive retrospective study. We included participants from a pediatric cardiology practice of a community hospital who had an ECG performed in 2017. All participants were healthy patients with no past medical history and no regular medications. Results: ECGs from 95 participants from one month to 21 years of age (mean 9.7 years) were included, with a mean HR of 91 beats per minute (bpm). The two-sample paired t-test or Wilcoxon signed-rank test assessed for any difference between QTc methods. A statistically significant difference was observed between every combination of two QTc formulae. Spearman's rank correlation analysis explored the QTc/HR and QTc/RR relationships for each formula. The Fridericia method was most independent of HR and RR, with the lowest absolute value of correlation coefficients. Bazett and Computer had moderate correlations, while Framingham and Rautaharju exhibited strong correlations. Correlations were positive for Bazett and Computer, reflecting results from prior studies demonstrating an over-correction of Bazett at higher HRs. In the linear QTc/HR regression analysis, Bazett had the slope closest to zero, although Computer, Hodges, and Fridericia had comparable values. Alternatively, Fridericia had the linear QTc/RR regression coefficient closest to zero. The Bland-Altman method assessed for bias and the limits of agreement between correction formulae. Bazett and Computer exhibited good agreement with minimal bias, along with Framingham and Rautaharju. To account for a possible skewed distribution of QT, all the above analyses were also performed excluding the top and bottom 2% of data as sorted by heart rate ranges (N=90). Results from this data set were consistent with those derived from all participants (N=95). Conclusions: Overall, the Fridericia correction method provided the best rate correction in our pediatric study cohort.
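Four of the compared correction formulas are standard and can be stated directly; the computer algorithm is vendor-specific and Rautaharju has several published variants, so both are omitted here. A minimal sketch with QT and RR in seconds:

```python
def qtc_bazett(qt, rr):      # QTc = QT / RR^(1/2)
    return qt / rr ** 0.5

def qtc_fridericia(qt, rr):  # QTc = QT / RR^(1/3)
    return qt / rr ** (1 / 3)

def qtc_framingham(qt, rr):  # QTc = QT + 0.154 * (1 - RR)
    return qt + 0.154 * (1 - rr)

def qtc_hodges(qt, rr):      # QTc = QT + 0.00175 * (HR - 60), HR in bpm
    return qt + 0.00175 * (60 / rr - 60)

# Example: QT = 0.36 s at HR = 91 bpm (RR = 60/91 s), the cohort's mean HR
rr = 60 / 91
for f in (qtc_bazett, qtc_fridericia, qtc_framingham, qtc_hodges):
    print(f.__name__, round(f(0.36, rr), 3))
```

Bazett's square-root denominator grows fastest as RR shrinks, which is consistent with the over-correction at high heart rates the study reports.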
In high-altitude nuclear detonations, the proportion of pulsed X-ray energy can exceed 70%, making it a specific monitoring signal for such events. These pulsed X-rays can be captured using a satellite-borne X-ray detector after atmospheric transmission. To quantitatively analyze the effects of different satellite detection altitudes, burst heights, and transmission angles on the physical processes of X-ray transport and energy fluence, we developed an atmospheric transmission algorithm for pulsed X-rays from high-altitude nuclear detonations based on scattering correction. The proposed method improves on the traditional analytical method, which computes only direct-transmission X-rays. The traditional analytical method exhibits a maximum relative error of 67.79% compared with the Monte Carlo method. Our improved method reduces this error to within 10% under the same conditions, even reaching 1% in certain scenarios. Moreover, it is 48,000 times faster than the Monte Carlo method. These results have important theoretical significance and engineering application value for designing satellite-borne nuclear detonation pulsed X-ray detectors, inverting nuclear detonation source terms, and assessing ionospheric effects.
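The direct-transmission baseline that the scattering correction improves on is, at heart, exponential attenuation along the line of sight plus geometric spreading; a generic statement of that baseline (standard radiative transfer, not the paper's full model):

```latex
\Phi_{\mathrm{direct}}(E) = \frac{S(E)}{4\pi R^{2}}
\exp\!\left(-\int_{L} \mu\big(E,\rho(\ell)\big)\,\mathrm{d}\ell\right),
```

where S(E) is the source spectrum, R the burst-detector distance, and μ the attenuation coefficient along the path L through the atmospheric density profile ρ(ℓ); photons scattered back toward the detector off the direct path are what the correction term accounts for.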
First of all it is necessary to point out that 'reciting' is the wrong term for what Chinese students are often asked to do when they are learning English. The correct terms are 'learning by heart' or 'rote learning'. In this article the term 'rote learning' will be used. ……
In this study, our aim is to address the problem of gene selection by proposing a hybrid bio-inspired evolutionary algorithm that combines Grey Wolf Optimization (GWO) with Harris Hawks Optimization (HHO) for feature selection. The motivation for utilizing GWO and HHO stems from their bio-inspired nature and their demonstrated success in optimization problems. We aim to leverage the strengths of these algorithms to enhance the effectiveness of feature selection in microarray-based cancer classification. We selected leave-one-out cross-validation (LOOCV) to evaluate the performance of two widely used classifiers, k-nearest neighbors (KNN) and support vector machine (SVM), on high-dimensional cancer microarray data. The proposed method is extensively tested on six publicly available cancer microarray datasets, and a comprehensive comparison with recently published methods is conducted. Our hybrid algorithm demonstrates its effectiveness in improving classification performance, surpassing alternative approaches in terms of precision. The outcomes confirm the capability of our method to substantially improve both the precision and efficiency of cancer classification, thereby advancing the development of more efficient treatment strategies. The proposed hybrid method offers a promising solution to the gene selection problem in microarray-based cancer classification. It improves the accuracy and efficiency of cancer diagnosis and treatment, and its superior performance compared to other methods highlights its potential applicability in real-world cancer classification tasks. By harnessing the complementary search mechanisms of GWO and HHO, we leverage their bio-inspired behavior to identify informative genes relevant to cancer diagnosis and treatment.
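In wrapper-style gene selection of this kind, each candidate gene subset is scored by classifier accuracy under LOOCV; a minimal sketch of that evaluation step (the GWO/HHO search that proposes subsets is the paper's contribution and is not reproduced, and the toy data merely stands in for a microarray matrix):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y):
    """Score a binary gene mask: LOOCV accuracy of KNN on the kept columns."""
    if not mask.any():
        return 0.0
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                             X[:, mask], y, cv=LeaveOneOut())
    return scores.mean()

# Toy data standing in for a (samples x genes) microarray matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))
y = rng.integers(0, 2, size=40)
mask = rng.random(200) < 0.05          # a candidate subset from the optimizer
print(f"LOOCV accuracy: {fitness(mask, X, y):.2f}")
```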
The ultrasonovision image caused by tool eccentricity often presents two vertical black strips in a cased well. To solve this problem, this paper proposes a correction algorithm for the eccentric time image based on ellipse fitting. The algorithm first uses borehole diameter data to fit an ellipse and compute the ellipse's center, major axis, minor axis, inclination angle, and other parameters, and then uses these parameters to correct the eccentric ultrasonovision time image. The test results show that the algorithm can accurately fit the ellipse and correct the eccentric ultrasonovision time image, which has important practical significance for well-logging processing.
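A standard algebraic least-squares ellipse fit of the kind described (fit conic coefficients to caliper points, then recover the center) can be sketched as follows; the synthetic point data and this particular conic parameterization are illustrative, not the paper's exact procedure:

```python
import numpy as np

def fit_ellipse(x, y):
    """Least-squares fit of the conic A x^2 + B xy + C y^2 + D x + E y = 1,
    then recover the ellipse center where the conic's gradient vanishes."""
    M = np.column_stack([x * x, x * y, y * y, x, y])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]
    # Center solves: 2A x0 + B y0 = -D  and  B x0 + 2C y0 = -E
    center = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    return (A, B, C, D, E), center

# Synthetic caliper points on an off-center, slightly noisy ellipse
rng = np.random.default_rng(7)
t = np.linspace(0, 2 * np.pi, 72, endpoint=False)
x = 1.5 + 4.0 * np.cos(t) + rng.normal(0, 0.02, t.size)
y = -0.8 + 3.2 * np.sin(t) + rng.normal(0, 0.02, t.size)
_, center = fit_ellipse(x, y)
print("fitted center:", np.round(center, 2))   # close to (1.5, -0.8)
```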
Neuromuscular diseases present profound challenges to individuals and healthcare systems worldwide, profoundly impacting motor functions. This research provides a comprehensive exploration of how artificial intelligence (AI) technology is revolutionizing rehabilitation for individuals with neuromuscular disorders. Through an extensive review, this paper elucidates a wide array of AI-driven interventions spanning robotic-assisted therapy, virtual reality rehabilitation, and intricately tailored machine learning algorithms. The aim is to delve into the nuanced applications of AI, unlocking its transformative potential in optimizing personalized treatment plans for those grappling with the complexities of neuromuscular diseases. By examining the multifaceted intersection of AI and rehabilitation, this paper not only contributes to our understanding of cutting-edge advancements but also envisions a future where technological innovations play a pivotal role in alleviating the challenges posed by neuromuscular diseases. From employing neural-fuzzy adaptive controllers for precise trajectory tracking amidst uncertainties to utilizing machine learning algorithms for recognizing patient motor intentions and adapting training accordingly, this research encompasses a holistic approach towards harnessing AI for enhanced rehabilitation outcomes. By embracing the synergy between AI and rehabilitation, we pave the way for a future where individuals with neuromuscular disorders can access tailored, effective, and technology-driven interventions to improve their quality of life and functional independence.
In today's rapid spread of digital technologies into all aspects of life, to enhance efficiency and productivity on the one hand and ensure customer engagement on the other, personal data counterfeiting has become a major concern for businesses and end-users. One solution to ensure data security is encryption, where keys are central. There is therefore a need to find robust key-generation implementations that are effective, inexpensive, and non-invasive for protecting data and preventing counterfeiting. In this paper, we use the theory of electromagnetic wave propagation to generate encryption keys.
Let p be a prime. For any finite p-group G, the deep transfers T_{H,G'}: H/H' → G'/G'' from the maximal subgroups H of index (G:H) = p in G to the derived subgroup G' are introduced as an innovative tool for identifying G uniquely by means of the family of kernels κ_d(G) = (ker(T_{H,G'}))_{(G:H)=p}. For all finite 3-groups G of coclass cc(G) = 1, the family κ_d(G) is determined explicitly. The results are applied to the Galois groups G = Gal(F_3^∞/F) of the Hilbert 3-class towers of all real quadratic fields F = Q(√d) with fundamental discriminants d > 1, 3-class group Cl_3(F) ≅ C_3 × C_3, and total 3-principalization in each of their four unramified cyclic cubic extensions E/F. A systematic statistical evaluation is given for the complete range 1 < d < 10^7, and a few exceptional cases are pointed out for 1 < d < 10^8.
Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data, i.e. data that sums to a constant, like 100%. The statistical linear model is the most used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data is present. The linear regression model is a commonly used statistical modeling technique applied in various settings to find relationships between variables of interest. When estimating linear regression parameters, which are useful for things like future prediction and partial effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can lead to costly and time-consuming data recovery. To address this issue, the expectation-maximization (EM) algorithm has been suggested as a solution for situations involving missing data. The EM algorithm iteratively finds the best estimates of parameters in statistical models that depend on variables or data that have not been observed; this is called maximum likelihood or maximum a posteriori (MAP) estimation. Using the present estimate as input, the expectation (E) step constructs the expected log-likelihood function. Finding the parameters that maximize the expected log-likelihood, as determined in the E step, is the job of the maximization (M) step. This study looked at how well the EM algorithm worked on a simulated compositional dataset with missing observations, using both the robust least squares and ordinary least squares regression techniques. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-nearest neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
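As a concrete illustration of the E/M alternation described above, here is a minimal EM sketch for the standard setting of a multivariate normal with values missing at random; this is a textbook instance, not the study's exact compositional pipeline:

```python
import numpy as np

def em_mvn(X, n_iter=200, tol=1e-8):
    """EM for the mean/covariance of a multivariate normal when the
    data matrix X (n x d) marks missing entries with NaN."""
    n, d = X.shape
    miss = np.isnan(X)
    mu = np.nanmean(X, axis=0)                 # init from observed entries
    Xf = np.where(miss, mu, X)                 # mean-imputed working copy
    S = np.cov(Xf, rowvar=False)
    for _ in range(n_iter):
        C = np.zeros((d, d))                   # conditional-covariance correction
        for i in range(n):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            # E-step: E[x_m | x_o] and Cov[x_m | x_o] under current (mu, S)
            reg = S[np.ix_(m, o)] @ np.linalg.inv(S[np.ix_(o, o)])
            Xf[i, m] = mu[m] + reg @ (Xf[i, o] - mu[o])
            C[np.ix_(m, m)] += S[np.ix_(m, m)] - reg @ S[np.ix_(o, m)]
        # M-step: refit mean and covariance, adding the E-step corrections
        mu_new = Xf.mean(axis=0)
        S = ((Xf - mu_new).T @ (Xf - mu_new) + C) / n
        if np.linalg.norm(mu_new - mu) < tol:
            mu = mu_new
            break
        mu = mu_new
    return mu, S

# Toy usage: recover parameters after deleting 20% of entries at random
rng = np.random.default_rng(1)
Z = rng.multivariate_normal([0.0, 2.0, -1.0], np.diag([1.0, 2.0, 0.5]), size=400)
Z[rng.random(Z.shape) < 0.2] = np.nan
mu_hat, S_hat = em_mvn(Z)
print(np.round(mu_hat, 2))
```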
Two new regularization algorithms for solving the first-kind Volterra integral equation, which describes the pressure-rate deconvolution problem in well test data interpretation, are developed in this paper. The main features of the problem are the strongly nonuniform scale of the solution and large errors (up to 15%) in the input data. In both algorithms, the solution is represented as a decomposition on special basis functions, which satisfy given a priori information on the solution; this idea allows us to significantly improve the quality of the approximate solution and to simplify solving the minimization problem. The theoretical details of the algorithms, as well as the results of numerical experiments proving the robustness of the algorithms, are presented.
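For reference, a first-kind Volterra equation of convolution type, as arises in pressure-rate deconvolution, together with a generic Tikhonov-style regularized formulation (the paper's basis-function decomposition is a refinement of this generic form, not reproduced here):

```latex
\int_{0}^{t} K(t-\tau)\,u(\tau)\,\mathrm{d}\tau = f(t), \quad 0 \le t \le T,
\qquad
u_{\alpha} = \arg\min_{u}\; \|Ku - f\|^{2} + \alpha\,\|Lu\|^{2},
```

with regularization parameter α > 0 and penalty operator L; regularization is what keeps the 15% input errors from being amplified by the ill-posed inversion.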
This paper presents a binary gravitational search algorithm (BGSA) applied to solve the problem of optimal allocation of DG sets and shunt capacitors in radial distribution systems. The problem is formulated as a nonlinear constrained single-objective optimization problem where the total line loss (TLL) and the total voltage deviation (TVD) are to be minimized separately by optimal placement of DG units and shunt capacitors, with constraints that include limits on voltage and on the sizes of installed capacitors and DG. The BGSA is applied to the balanced IEEE 10-bus distribution network, and the results are compared with conventional binary particle swarm optimization.
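The two objectives named can be written in one common distribution-system formulation (my notation; the paper's exact expressions may differ):

```latex
\mathrm{TLL} = \sum_{b \in \mathcal{B}} R_{b}\,|I_{b}|^{2},
\qquad
\mathrm{TVD} = \sum_{i \in \mathcal{N}} \big(V_{i} - V_{\mathrm{ref}}\big)^{2},
```

where 𝓑 is the set of branches with resistances R_b and currents I_b, 𝓝 the set of buses with voltage magnitudes V_i, and V_ref the nominal voltage (typically 1.0 p.u.); each is minimized subject to the voltage and sizing limits above.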
A new method is proposed to predict the fabric shearing property with least squares support vector machines (LS-SVM). A genetic algorithm is investigated to select the parameters of the LS-SVM models as a means of improving the LS-SVM prediction. After normalization, the sampling data are fed into the model to obtain the prediction result. The simulation results show that the prediction model gives better forecasting accuracy and generalization ability than the BP neural network and the linear regression method.
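LS-SVM training reduces to solving one linear system, which is what makes it cheap enough to wrap inside a genetic-algorithm parameter search; a minimal regression sketch with an RBF kernel (γ and σ are the two hyperparameters a GA would tune; the fabric data itself is not reproduced, so a toy function stands in):

```python
import numpy as np

def rbf(X1, X2, sigma):
    """Gaussian RBF kernel matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """Solve the LS-SVM KKT system  [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma):
    return rbf(X_new, X_train, sigma) @ alpha + b

# Toy usage: fit a noisy 1-D function
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (60, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 60)
b, alpha = lssvm_fit(X, y, gamma=10.0, sigma=1.0)
print(lssvm_predict(X, alpha, b, np.array([[0.5]]), sigma=1.0))  # ~ sin(0.5)
```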
The contradiction in variable step size least mean square (LMS) algorithms between fast convergence speed and small steady-state error has always existed. Hence, a new algorithm based on a combination of logarithmic and sign functions with the step-size factor is proposed. It establishes a new update rule that relates the step-size factor to the error signal. This work presents an analysis from three aspects: theoretical analysis, theoretical verification, and specific experiments. The experimental results show that the proposed algorithm is superior to other variable step size algorithms in convergence speed and steady-state error.
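The fixed trade-off the abstract describes comes from the single step size μ in the LMS recursion w ← w + μ e(n) x(n); a variable step resolves it by letting μ(n) track the error. The sketch below uses a sigmoid-style step rule as a placeholder, since the abstract does not spell out the paper's logarithmic/sign-function update:

```python
import numpy as np

def vss_lms(x, d, n_taps=8, alpha=4.0, beta=0.05, mu_max=0.1):
    """Variable step-size LMS:  w <- w + mu(n) e(n) x(n).
    mu(n) grows with |e(n)| early on (fast convergence) and shrinks
    as the error dies down (small steady-state error)."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]     # tap-delay line x(n), ..., x(n-7)
        e[n] = d[n] - w @ xn
        # Placeholder sigmoid step rule -- an assumption, not the paper's update
        mu = min(mu_max, beta * (2.0 / (1.0 + np.exp(-alpha * abs(e[n]))) - 1.0))
        w += mu * e[n] * xn
    return w, e

# Identify an unknown 8-tap FIR channel from noisy observations
rng = np.random.default_rng(0)
h = rng.normal(size=8)
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)] + rng.normal(0, 0.01, size=len(x))
w, e = vss_lms(x, d)
print("final tap error:", np.linalg.norm(w - h))
```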
Numerous cryptographic algorithms (ElGamal, Rabin, RSA, NTRU, etc.) require multiple computations of modular multiplicative inverses. This paper describes and validates a new algorithm, called the Enhanced Euclid Algorithm, for the modular multiplicative inverse (MMI). Analysis of the proposed algorithm shows that it is more efficient than the Extended Euclid Algorithm (XEA). In addition, if an MMI does not exist, then it is not necessary to use the backtracking procedure in the proposed algorithm; this case requires fewer operations at every step (divisions, multiplications, additions, assignments, and push operations on the stack) than the XEA. Overall, the XEA uses more multiplications, additions, and assignments, and twice as many variables as the proposed algorithm.
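For context, the baseline the paper improves on, the extended Euclidean computation of a modular multiplicative inverse, looks like this (the Enhanced Euclid Algorithm itself is not reproduced here):

```python
def mmi_xea(a, m):
    """Modular multiplicative inverse of a mod m via the extended
    Euclidean algorithm; returns None when gcd(a, m) != 1 (no MMI)."""
    r0, r1 = m, a % m
    t0, t1 = 0, 1                      # invariant: t_i * a == r_i (mod m)
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1       # gcd recursion
        t0, t1 = t1, t0 - q * t1       # Bezout coefficient for a
    return t0 % m if r0 == 1 else None

assert mmi_xea(7, 40) == 23            # 7 * 23 = 161 = 4*40 + 1
assert mmi_xea(6, 9) is None           # gcd(6, 9) = 3, so no inverse exists
```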
In the fingerprint matching-based wireless local area network (WLAN) indoor positioning system, a Kalman filter (KF) is usually applied after the fingerprint matching algorithms to make the positioning results more accurate and continuous. But this method, like most methods in the WLAN indoor positioning field, fails to consider and make use of users' moving speed information. In order to make the positioning results more accurate by using the users' moving speed information, a coordinate correction algorithm (CCA) is proposed in this paper. It predicts a reasonable range for the positioning coordinates by using the moving speed information. If the real positioning coordinates are not in the predicted range, the positioning coordinates are not reasonable for a moving user in an indoor environment, so the proposed CCA is used to correct this kind of positioning coordinates. The simulation results prove that the positioning results of the CCA are more accurate than those calculated by the KF alone, and that the CCA is effective in improving the positioning performance.
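The underlying idea, gating each new fix against the region reachable at walking speed since the last fix, can be sketched as follows; the maximum-speed constant and the project-back rule are my assumptions, since the abstract does not give the paper's exact correction:

```python
import numpy as np

V_MAX = 1.5   # assumed maximum indoor walking speed, m/s

def correct_fix(prev_xy, new_xy, dt, v_max=V_MAX):
    """If the matched fix lies outside the circle reachable since the last
    fix, pull it back onto that circle (an assumed correction rule)."""
    step = np.asarray(new_xy, dtype=float) - np.asarray(prev_xy, dtype=float)
    reach = v_max * dt                    # radius of the predicted range
    dist = np.linalg.norm(step)
    if dist <= reach:
        return np.asarray(new_xy, dtype=float)   # plausible fix: keep it
    return np.asarray(prev_xy, dtype=float) + step * (reach / dist)

# A 5 m jump in 1 s is implausible at walking speed -> clipped to 1.5 m
print(correct_fix((0.0, 0.0), (4.0, 3.0), dt=1.0))
```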
In this paper we consider a parallel algorithm that detects the maximizer of a unimodal function f(x) computable at every point on the unbounded interval (0, ∞). The algorithm consists of two modes: scanning and detecting. Search diagrams are introduced as a way to describe parallel searching algorithms on unbounded intervals. Dynamic programming equations, combined with a series of linear programming problems, describe relations between results for every pair of successive evaluations of f in parallel. Properties of optimal search strategies are derived from these equations. The worst-case complexity analysis shows that, if the maximizer is located on the a priori unknown interval (n-1, n], then it can be detected after c_p(n) = ⌈2·log_{⌈p/2⌉+1}(n+1)⌉ - 1 parallel evaluations of f(x), where p is the number of processors.
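As a sanity check on the bound as reconstructed above (my arithmetic, not the paper's): with p = 2 processors the logarithm base is ⌈2/2⌉ + 1 = 2, so

```latex
c_{2}(n) = \lceil 2\log_{2}(n+1)\rceil - 1,
```

and localizing a maximizer with n = 1023 takes c_2(1023) = ⌈2·10⌉ - 1 = 19 parallel evaluations.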