The primary aim of power system grounding is to safeguard personnel and to satisfy the performance requirements of the power system so as to maintain reliable operation. With an equally spaced grounding grid design, the distribution of current in the grid is not uniform. Hence, unequal grid conductor spans, in which grid conductors are concentrated more at the periphery, are safer in practice than equal spacing. This paper presents a comparative analysis of two novel techniques that create unequal spacing among the grid conductors, namely the least-squares curve fitting technique and the compression ratio technique, against the equal grid configuration for both square and rectangular grids. Particle Swarm Optimization (PSO) is adopted to find one optimal feasible solution among the many feasible solutions of the equal grid configuration for both square and rectangular grids. A comparative analysis is also carried out between square and rectangular grids using the least-squares curve fitting technique, as it results in only one unequal grid configuration. Simulation results are obtained with software developed in MATLAB. The percentage improvements in ground potential rise, step voltage, touch voltage, and grid resistance with variation in the compression ratio are plotted.
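A common way to realize the periphery-concentrated layout is to let the conductor spans shrink geometrically from the grid centre toward the edges under a compression ratio C < 1. The geometric rule and the symmetric layout in this sketch are illustrative assumptions, not the paper's exact design:

```python
# Sketch of unequal conductor spacing driven by a compression ratio C:
# spans shrink geometrically toward the grid edge, so conductors crowd
# at the periphery.  The geometric rule and symmetric layout are
# illustrative assumptions only (even n_spans assumed).
def conductor_positions(side, n_spans, C):
    """Positions of n_spans + 1 conductors across a side of length `side`.

    Half the spans decrease from the centre outwards with ratio C,
    mirrored to keep the grid symmetric.
    """
    half = n_spans // 2
    raw = [C**k for k in range(half)]        # relative half-spans, centre -> edge
    d = (side / 2) / sum(raw)                # scale so half-spans sum to side/2
    spans = [d * C**k for k in range(half)]
    pos = [side / 2]
    for s in spans:
        pos.append(pos[-1] + s)              # right half: centre to edge
    left = [side - p for p in reversed(pos[1:])]   # mirror for the left half
    return left + pos

positions = conductor_positions(side=100.0, n_spans=8, C=0.7)
```

With C = 0.7 the outermost span is roughly a third of the central one, so the density of conductors increases toward the periphery as the paper advocates.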
Weighted total least squares (WTLS) has been regarded as the standard tool for the errors-in-variables (EIV) model, in which all the elements in the observation vector and the coefficient matrix are contaminated with random errors. However, in many geodetic applications, some elements are error-free and some random observations appear repeatedly in different positions in the augmented coefficient matrix. This is called the linear structured EIV (LSEIV) model. Two kinds of methods are proposed for the LSEIV model, based on functional and stochastic modifications. On the one hand, the functional part of the LSEIV model is modified into the errors-in-observations (EIO) model. On the other hand, the stochastic model is modified by applying the Moore-Penrose inverse of the cofactor matrix. The algorithms are derived through the Lagrange multiplier method and linear approximation. The estimation principles and iterative formulas of the parameters are proven to be consistent. The first-order approximate variance-covariance matrix (VCM) of the parameters is also derived. A numerical example is given to compare the performance of the three proposed algorithms with the STLS approach. Afterwards, the least squares (LS), total least squares (TLS), and linear structured weighted total least squares (LSWTLS) solutions are compared, and the accuracy evaluation formula is proven to be feasible and effective. Finally, LSWTLS is applied to the field of deformation analysis, where it yields a better result than the traditional LS and TLS estimations.
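For the unstructured, unweighted case, plain total least squares already has a closed-form solution via the SVD of the augmented matrix [A | b]; the LSWTLS method above generalizes this to structured and weighted errors. A minimal sketch of the plain TLS baseline:

```python
import numpy as np

# Total least squares for A x ≈ b via SVD: both A and b are treated as
# noisy.  This is the plain TLS baseline, not the structured weighted
# variant (LSWTLS) the paper develops.
rng = np.random.default_rng(0)
A_true = rng.normal(size=(50, 2))
x_true = np.array([2.0, -1.0])
b_true = A_true @ x_true

# perturb BOTH the coefficient matrix and the observations
A = A_true + 0.01 * rng.normal(size=A_true.shape)
b = b_true + 0.01 * rng.normal(size=b_true.shape)

C = np.column_stack([A, b])      # augmented matrix [A | b]
_, _, Vt = np.linalg.svd(C)
v = Vt[-1]                       # right singular vector of the smallest singular value
x_tls = -v[:-1] / v[-1]          # TLS estimate
```

The TLS solution minimizes the Frobenius norm of the joint perturbation to [A | b], whereas ordinary LS perturbs only b.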
Recently, researchers have proposed an emitter localization method based on passive synthetic aperture. However, the unknown residual frequency offset (RFO) between the transmitter and the receiver causes the received Doppler signal to shift, which affects the localization accuracy. To solve this issue, this paper proposes an RFO estimation method based on range migration fitting. Owing to the high frequency-modulation slope of the linear frequency modulation (LFM) radar signal, range compression is not affected by the RFO. Therefore, the azimuth time can be estimated by fitting the peak positions of the pulse compression in the range direction. Then, matched filters are designed under different RFOs. When the zero-Doppler time obtained by a matched filter is consistent with the estimated azimuth time, the given RFO is the real RFO between the transceivers. The simulation results show that the estimation error of the azimuth distance does not exceed 20 m when the received signal duration is not less than 3 s, the pulse repetition frequency (PRF) of the transmitted radar signal is not less than 1 kHz, the detection range is not larger than 1000 km, and the signal-to-noise ratio (SNR) is not less than -5 dB.
The distribution network exhibits complex structural characteristics, which makes fault localization a challenging task. Especially when a branch of a multi-branch distribution network fails, traditional multi-branch fault location algorithms find it difficult to meet the demand for high-precision fault localization in the multi-branch distribution network system. In this paper, the multi-branch mainline is decomposed into single branch lines, transforming the complex multi-branch fault location problem into a double-ended fault location problem. Based on the different transmission characteristics of the fault traveling wave in fault lines and non-fault lines, an endpoint reference time difference matrix S and a fault time difference matrix G are established, making comprehensive use of how the arrival times of the fault traveling wave at each endpoint change before and after a fault. To realize fault segment location, the least squares method is introduced to find the first-order fitting relation that satisfies the matching relationship between the corresponding row vectors of the two matrices and a first-order function. Then, the time difference matrix is used to determine the traveling wave velocity, which, combined with double-ended traveling wave location, enables accurate fault location.
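The double-ended principle that the decomposition reduces to can be stated in a few lines: with synchronized arrival times at the two terminals of a line section, the fault distance follows directly from the arrival-time difference. Illustrative numbers, with an assumed wave speed:

```python
# Double-ended travelling-wave location on a single line section:
# a wave from a fault at distance x reaches terminal M at x/v and
# terminal N at (L - x)/v, so x = (L + v*(tM - tN)) / 2.
# Numbers are illustrative; v is close to the speed of light on
# overhead lines.
v = 2.95e8          # assumed wave speed, m/s
L = 50_000.0        # line length, m

x_true = 18_000.0                 # fault position used to synthesize arrival times
tM = x_true / v                   # arrival time at terminal M
tN = (L - x_true) / v             # arrival time at terminal N

x_est = (L + v * (tM - tN)) / 2   # located fault distance from terminal M
```

In practice v is itself estimated from the time difference matrices, which is exactly the refinement the paper introduces.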
The one-class classification problem has become popular in many fields, with a wide range of applications in anomaly detection, fault diagnosis, and face recognition. We investigate the one-class classification problem for second-order tensor data. Traditional vector-based one-class classification methods such as the one-class support vector machine (OCSVM) and the least squares one-class support vector machine (LSOCSVM) have limitations when tensors are used as input data, so we propose a new tensor one-class classification method, LSOCSTM, which directly uses tensors as input data. On the one hand, using tensors as input not only makes it possible to classify tensor data; for vector data, classifying it after lifting it into a higher-order tensor still improves the classification accuracy and mitigates the over-fitting problem. On the other hand, unlike the one-class support tensor machine (OCSTM), we use a squared loss instead of the original loss function, so that we solve a series of linear equations instead of quadratic programming problems. We then use the distance to the hyperplane as the classification metric, and the proposed method is more accurate and faster than existing methods. The experimental results show the high efficiency of the proposed method compared with several state-of-the-art methods.
In response to the complex characteristics of actual low-permeability tight reservoirs, this study develops a meshless numerical simulation method for oil-water two-phase flow in such reservoirs, considering complex boundary shapes. Utilizing radial basis function point interpolation, the method approximates shape functions for the unknown functions within the nodal influence domain. The shape functions constructed by this meshless interpolation have δ-function properties, which facilitates the handling of essential conditions such as the controlled bottom-hole flowing pressure in horizontal wells. Moreover, the meshless method offers greater flexibility and freedom than grid-cell discretization, making it simpler to discretize complex geometries. A variational principle for the flow control equations is introduced using a weighted least squares meshless method, and the pressure distribution is solved implicitly. Example results demonstrate that the computational outcomes of the meshless point cloud model, which has relatively few degrees of freedom, are in close agreement with those of a Discrete Fracture Model (DFM) employing refined grid partitioning, with pressure calculation accuracy exceeding 98.2%. Compared to high-resolution grid-based methods, the meshless method achieves a better balance between computational efficiency and accuracy. Additionally, the impact of fracture half-length on the productivity of horizontal wells is discussed. The results indicate that increasing the fracture half-length is an effective strategy for enhancing production from the perspective of cumulative oil production.
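The core ingredient, radial basis function point interpolation, can be sketched in 1D. A multiquadric basis is assumed here, while the paper's full scheme adds polynomial terms and restricts support to a nodal influence domain; note the δ-function property, i.e. the interpolant reproduces the nodal values exactly:

```python
import numpy as np

# Minimal 1D radial basis function point interpolation, the building
# block the meshless method uses for its shape functions.  The
# multiquadric basis and shape parameter c are illustrative choices.
def rbf_interpolate(x_nodes, f_nodes, x_eval, c=0.2):
    def phi(r):
        # multiquadric basis phi(r) = sqrt(r^2 + c^2)
        return np.sqrt(r**2 + c**2)
    A = phi(x_nodes[:, None] - x_nodes[None, :])   # interpolation matrix
    w = np.linalg.solve(A, f_nodes)                # basis weights
    return phi(x_eval[:, None] - x_nodes[None, :]) @ w

x_nodes = np.linspace(0.0, 1.0, 21)
f_nodes = np.sin(2 * np.pi * x_nodes)
x_eval = np.array([0.05, 0.33, 0.71])
f_eval = rbf_interpolate(x_nodes, f_nodes, x_eval)
```

Because the interpolation matrix is solved exactly, evaluating at the nodes returns the nodal values themselves, which is what makes essential boundary conditions easy to impose.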
This article explores the comparison between the probability method and the least squares method in the design of linear predictive models. It points out that these two approaches have distinct theoretical foundations and can lead to varied or similar results in terms of precision and performance under certain assumptions. The article underlines the importance of comparing the two approaches in order to choose the one best suited to the context, the available data, and the modeling objectives.
Laminated composites are widely used in many engineering industries, such as aircraft, spacecraft, boat hulls, racing car bodies, and storage tanks. We analyze the 3D deformations of a multilayered, linear elastic, anisotropic rectangular plate subjected to arbitrary boundary conditions on one edge and simply supported on the other edge. The rectangular laminate consists of anisotropic and homogeneous laminae of arbitrary thicknesses. This study presents the elastic analysis of laminated composite plates subjected to sinusoidal mechanical loading under arbitrary boundary conditions. Least-squares finite element solutions for displacements and stresses are investigated using a state-space mathematical model, which allows us to solve for these field variables simultaneously over the composite structure's domain and to ensure that continuity conditions are satisfied at layer interfaces. The governing equations are derived from this model using the least-squares finite element method (LSFEM), which minimizes the squared residuals of the governing equations and the associated side conditions over the computational domain. The model comprises layerwise variables such as displacements, out-of-plane stresses, and in-plane strains, treated as independent variables. Numerical results are presented to demonstrate the response of laminated composite plates under various arbitrary boundary conditions using the LSFEM and are compared with the 3D elasticity solution available in the literature.
The main purpose of reverse engineering is to convert discrete data points into piecewise smooth, continuous surface models. Before carrying out model reconstruction, it is important to extract geometric features, because the quality of modeling greatly depends on the representation of features. Some techniques for fitting natural quadric surfaces with the least-squares method are described, and these techniques can be used directly to extract quadric surface features during the segmentation of point clouds.
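As an example of such a fitting technique, a sphere (one of the natural quadrics) can be fitted to a point cloud by ordinary linear least squares after rearranging its implicit equation. This algebraic fit is a standard trick and is offered only as a sketch of the kind of technique the paper describes:

```python
import numpy as np

# Least-squares sphere fit: rearranging (x-a)^2 + (y-b)^2 + (z-c)^2 = r^2
# gives  x^2+y^2+z^2 = 2ax + 2by + 2cz + k  with  k = r^2 - a^2 - b^2 - c^2,
# which is LINEAR in (a, b, c, k), so np.linalg.lstsq applies directly.
rng = np.random.default_rng(1)
centre_true = np.array([1.0, -2.0, 0.5])
r_true = 3.0

# synthetic noisy points on the sphere
u = rng.normal(size=(200, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = centre_true + r_true * u + 0.01 * rng.normal(size=(200, 3))

x, y, z = pts.T
A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones(len(pts))])
rhs = x**2 + y**2 + z**2
sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
centre = sol[:3]
radius = np.sqrt(sol[3] + np.sum(centre**2))
```

Cylinders and cones need a nonlinear refinement on top of such an algebraic seed, but the linear stage already gives a good initial segmentation feature.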
On 30 January 2020, the World Health Organization (WHO) declared the novel coronavirus a Public Health Emergency of International Concern (PHEIC), recognizing COVID-19 as an epidemic-transmitted virus. On 31 December 2019, the WHO China Country Office had been informed of cases of pneumonia of unknown etiology detected in Wuhan city, Hubei Province of China. Just after WHO's declaration, Ethiopia took different measures to protect against this public health emergency. The disease is transmitted from human to human and arrived from outside the country, so checkpoints were opened at different entry points to the country. Nevertheless, on 13 March 2020 the first positive case, a Japanese man, was reported, and the virus then continued to spread progressively through the public. While this research was under way, within 90 days of the first case, the country had reported 2506 positive cases and 35 deaths. The research was done after collecting the first 90 days of data for the Ethiopian case. The daily report announced by the Ethiopian MoH is based on testing, and hence the reported COVID-19 positive-case data are not the actual positive-case data in the country. Therefore, this paper contributes to planning and taking further measures against the virus by producing predictive data for the next 90 days. I use best-curve-fitting analysis with Python's polyfit function to predict the trend of COVID-19 cases in Ethiopia.
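The fitting step the paper refers to can be sketched with polyfit: fit a low-degree polynomial to the observed 90-day series and evaluate it over the next 90 days. The synthetic quadratic series below is illustrative, not the Ethiopian data:

```python
import numpy as np

# Polynomial trend fit with numpy's polyfit (the tool the paper names),
# then extrapolation over the next 90-day horizon.  The "daily cumulative
# cases" series here is a made-up quadratic for illustration.
days = np.arange(90)
cases = 0.3 * days**2 + 2.0 * days + 5.0        # hypothetical growth curve

coeffs = np.polyfit(days, cases, deg=2)          # degree-2 least-squares fit
future = np.arange(90, 180)
forecast = np.polyval(coeffs, future)            # predicted next 90 days
```

Polynomial extrapolation is notoriously sensitive to the chosen degree, which is why the paper selects the best-fitting curve before projecting forward.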
For the accurate extraction of the cavity decay time, a selection of data points is added to the weighted least squares method. We derive the expected precision, accuracy, and computation cost of this improved method, and examine these performances by simulation. Comparing this method with the nonlinear least squares fitting (NLSF) method and the linear regression of the sum (LRS) method in both derivations and simulations, we find that it can achieve the same or even better precision, comparable accuracy, and lower computation cost. We test the method on experimental decay signals, and the results are in agreement with those obtained from the nonlinear least squares fitting method.
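A common baseline for decay-time extraction, which the weighted least squares method refines, is a weighted linear fit to the logarithm of the signal: ln y = ln A - t/τ, with weights proportional to y to counteract the noise amplification of the log transform. A sketch on noise-free synthetic data (the paper's point-selection step is omitted):

```python
import numpy as np

# Weighted least-squares extraction of a decay time tau from
# y(t) = A * exp(-t / tau):  ln y = ln A - t/tau is linear in t.
# np.polyfit's w multiplies the residuals, so w = y corresponds to
# y^2 weights in the squared loss, the usual choice for log-transformed
# exponential data.  Noise-free synthetic signal for illustration.
tau_true, A_true = 2.0e-6, 1.0
t = np.linspace(0.0, 10e-6, 200)
y = A_true * np.exp(-t / tau_true)

slope, intercept = np.polyfit(t, np.log(y), 1, w=y)
tau_est = -1.0 / slope
```

The appeal over nonlinear fitting is that this is a single linear solve, which is where the lower computation cost reported above comes from.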
An enhanced small-signal model is introduced to capture the influence of the impact ionization effect on the performance of the InAs/AlSb HFET, in which an optimized fitting function D(ωτi) in the form of a least-squares approximation is proposed to further enhance the accuracy in modeling the frequency dependence of the impact ionization effect. The enhanced model with D(ωτi) can accurately characterize the key S-parameters of the InAs/AlSb HFET over a wide frequency range with a very low error function EF. It is demonstrated that the new fitting function D(ωτi) is helpful in further improving the modeling accuracy.
This paper starts from the untime-diversification of the time-diversification deformation model, gives a displacement distribution model of untime-diversification, and thereby further simplifies the study of the deformation model. The paper discusses the least-squares fitting of the coordinate parameter model, i.e., the parameters of the deformation model. In the discussion, cubic B-splines and a two-step fitting of multidimensional scattered data are adopted, which make the fitted function closely approximate the coordinate parameter model and make the calculation easier.
For physical ozone absorption without reaction, two parameter estimation methods, the common linear least squares fitting method and the nonlinear Simplex search method, were applied to determine the ozone mass transfer coefficient during absorption, and both methods give almost the same coefficient. For chemical absorption with an ozone decomposition reaction, however, the common linear least squares fitting method is not applicable for evaluating the ozone mass transfer coefficient, because the model describing the ozone concentration dissolved in water is difficult to linearize. The nonlinear Simplex method obtains the mass transfer coefficient by minimizing the sum of the differences between the simulated and experimental ozone concentrations over the whole absorption process, without being limited to the linear relationship between the dissolved ozone concentration and absorption time during the initial stage of absorption. Comparison of the ozone concentration profiles between simulation and experimental data demonstrates that the Simplex method can determine the ozone mass transfer coefficient during absorption accurately, efficiently, and with wide applicability.
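The Simplex (Nelder-Mead) estimation can be sketched for the physical-absorption case, where the dissolved concentration follows C(t) = Cs(1 - exp(-kLa·t)); the coefficient is found by minimizing the sum of squared differences between simulated and "measured" concentrations. All numbers below are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Nelder-Mead ("Simplex") estimation of a mass-transfer coefficient kLa
# from an absorption curve C(t) = Cs * (1 - exp(-kLa * t)), by minimising
# the sum of squared simulation/measurement differences.  Physical
# absorption only; Cs, kLa and the time grid are illustrative values.
Cs = 10.0                    # assumed saturation concentration, mg/L
kla_true = 0.15              # coefficient used to synthesize "measurements", 1/min
t = np.linspace(0.0, 30.0, 60)
measured = Cs * (1.0 - np.exp(-kla_true * t))

def sse(params):
    kla = params[0]
    simulated = Cs * (1.0 - np.exp(-kla * t))
    return np.sum((simulated - measured) ** 2)

res = minimize(sse, x0=[0.05], method="Nelder-Mead")
kla_est = res.x[0]
```

For the reactive case the simulated curve would come from numerically integrating the absorption-plus-decomposition model, but the outer Simplex loop is unchanged, which is exactly the flexibility the paper exploits.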
Analysis of stock-recruitment (SR) data is most often done by fitting various SR relationship curves to the data. Fish population dynamics data often have stochastic variations and measurement errors, which usually result in a biased regression analysis. This paper presents a robust regression method, least median of squared orthogonal distances (LMD), which is insensitive to abnormal values in both the dependent and independent variables of a regression analysis. Outliers that have significantly different variance from the rest of the data can be identified in a residual analysis. Then, the least squares (LS) method is applied to the SR data with the identified outliers down-weighted. The application of LMD and the LMD-based Reweighted Least Squares (RLS) method to simulated and real fisheries SR data is explored.
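A least-median-of-squares fit is commonly approximated by sampling candidate lines through random point pairs and keeping the one with the smallest median squared residual, which a few gross outliers cannot dominate. The sketch below uses vertical residuals for simplicity, whereas LMD above uses orthogonal distances:

```python
import numpy as np

# Least-median-of-squares line fit via random pair sampling: unlike OLS,
# whose MEAN squared loss is dragged by gross outliers, the MEDIAN
# squared residual ignores them.  Vertical residuals here, not the
# orthogonal distances of the paper's LMD.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 50)
y = 1.5 * x + 2.0 + 0.05 * rng.normal(size=50)
y[:5] += 30.0                          # five gross outliers

best = None
for _ in range(500):
    i, j = rng.choice(50, size=2, replace=False)
    slope = (y[j] - y[i]) / (x[j] - x[i])
    inter = y[i] - slope * x[i]
    med = np.median((y - slope * x - inter) ** 2)
    if best is None or med < best[0]:
        best = (med, slope, inter)

_, slope_lms, inter_lms = best
```

Points whose residual under this fit is far larger than the median can then be flagged as outliers and down-weighted in a final LS pass, mirroring the LMD-then-RLS pipeline.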
Wireless sensor network (WSN) positioning performs well indoors, so it has received extensive attention in the field of positioning. Non-line-of-sight (NLOS) propagation is a primary challenge in complex indoor environments. In this paper, a robust localization algorithm based on a Gaussian mixture model and fitting polynomials is proposed to solve the problem of NLOS error. First, fitting polynomials are used to predict the measured values, and the residuals between the predicted and measured values are clustered by a Gaussian mixture model (GMM). The LOS and NLOS probabilities are calculated from the clustering centers. The measured values are then filtered by a Kalman filter (KF), a variable parameter unscented Kalman filter (VPUKF), and a variable parameter particle filter (VPPF) in turn. The distance values processed by KF and VPUKF and those processed by KF, VPUKF, and VPPF are combined according to these probabilities. Finally, the maximum likelihood method is used to compute the position coordinate estimate. Simulation comparisons show that the proposed algorithm has better positioning accuracy than the several algorithms compared in this paper, and it exhibits strong robustness in strongly NLOS environments.
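The first filtering stage can be illustrated with a minimal scalar Kalman filter smoothing noisy range measurements under a random-walk model; the VPUKF and VPPF stages and the GMM-based probability weighting are beyond this sketch, and all noise levels are illustrative:

```python
import random

# Minimal scalar Kalman filter for noisy ranging measurements, the first
# stage of the paper's KF -> VPUKF -> VPPF pipeline.  The random-walk
# state model and the noise variances q, r are illustrative assumptions.
def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict (random-walk process noise)
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with measurement z
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

random.seed(5)
true_d = 7.0                       # true anchor-to-node distance, m
meas = [true_d + random.gauss(0.0, 0.5) for _ in range(200)]
filtered = kalman_1d(meas)
```

In the full algorithm the filtered distances from the different filters are fused according to the LOS/NLOS probabilities before the maximum likelihood position solve.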
In regression, despite both being aimed at estimating the Mean Squared Prediction Error (MSPE), Akaike's Final Prediction Error (FPE) and the Generalized Cross-Validation (GCV) selection criteria are usually derived from two quite different perspectives. Here, settling on the most commonly accepted definition of the MSPE as the expectation of the squared prediction error loss, we provide theoretical expressions for it, valid for any linear model (LM) fitter, under random or non-random designs. Specializing these MSPE expressions for each of them, we are able to derive closed formulas of the MSPE for some of the most popular LM fitters: Ordinary Least Squares (OLS), with or without a full-column-rank design matrix; and Ordinary and Generalized Ridge regression, the latter embedding smoothing spline fitting. For each of these LM fitters, we then deduce a computable estimate of the MSPE which turns out to coincide with Akaike's FPE. Using a slight variation, we similarly obtain a class of MSPE estimates coinciding with the classical GCV formula for those same LM fitters.
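For plain OLS the two criteria are easy to compute side by side: FPE = (RSS/n)·(n+p)/(n-p) and GCV = (RSS/n)/(1 - p/n)², and to first order in p/n both inflate the training error by the same factor 1 + 2p/n, which is why they tend to agree. A numerical check on synthetic data:

```python
import numpy as np

# FPE and GCV as MSPE estimates for an OLS fit with n samples and p
# coefficients: both inflate the in-sample error RSS/n, and for p << n
# they nearly coincide.  Synthetic data for illustration.
rng = np.random.default_rng(3)
n, p = 200, 4
X = rng.normal(size=(n, p))
beta = np.array([1.0, -0.5, 0.25, 2.0])
y = X @ beta + 0.3 * rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ beta_hat) ** 2)

fpe = (rss / n) * (n + p) / (n - p)          # Akaike's Final Prediction Error
gcv = (rss / n) / (1.0 - p / n) ** 2         # Generalized Cross-Validation
```

For ridge or smoothing-spline fitters, p is replaced by the effective degrees of freedom, the trace of the hat matrix, which is the generalization the article works out.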
In factor analysis, a factor loading matrix is often rotated toward a simple target matrix for interpretability. For this purpose, Procrustes rotation minimizes the discrepancy between the target and the rotated loadings using two types of approximation: 1) approximating the zeros in the target by the non-zeros in the loadings, and 2) approximating the non-zeros in the target by the non-zeros in the loadings. The central issue of Procrustes rotation considered in this article is that it treats the two types of approximation equally, while the former is more important for simplifying the loading matrix. Furthermore, a well-known issue of Simplimax is its computational inefficiency in estimating the sparse target matrix, which yields a considerable number of local minima. This research proposes a new rotation procedure consisting of two stages. The first stage estimates the sparse target matrix at lower computational cost by a regularization technique. In the second stage, the loading matrix is rotated toward the target, emphasizing the approximation of the loadings' non-zeros to the zeros in the target through a least squares criterion with the generalized weighting newly proposed in this study. A simulation study and real data examples reveal that the proposed method indeed simplifies loading matrices.
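The unweighted building block, orthogonal Procrustes rotation, has the closed-form solution T = U V^T from the SVD of A^T B, where A is the loading matrix and B the target; the method proposed above departs from this by using a sparse target and generalized weights. A sketch of the classical solution:

```python
import numpy as np

# Orthogonal Procrustes: the rotation T minimising ||A T - B||_F over
# orthogonal matrices has the closed form T = U V^T, with U, V from the
# SVD of A^T B.  This is the unweighted classical case, not the
# generalized weighting the paper proposes.
def procrustes_rotation(A, B):
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(4)
A = rng.normal(size=(10, 3))              # stand-in "loading matrix"
angle = 0.7
T_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
B = A @ T_true                            # target = exactly rotated loadings

T_est = procrustes_rotation(A, B)
```

Generalized weighting replaces the uniform Frobenius loss with elementwise weights, so zeros in the target can be matched more strictly than non-zeros, which is the article's point.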
This paper proposes a method combining the Haar wavelet and least squares to solve the multi-dimensional stochastic Ito-Volterra integral equation. The approach transforms stochastic integral equations into a system of algebraic equations. Meanwhile, the error analysis is proven. Finally, the effectiveness of the approach is verified by two numerical examples.
The least squares projection twin support vector machine (LSPTSVM) has faster computing speed than the classical least squares support vector machine (LSSVM). However, LSPTSVM is sensitive to outliers and its solution lacks sparsity, so it is difficult for LSPTSVM to process large-scale datasets with outliers. In this paper, we propose a robust LSPTSVM model (called R-LSPTSVM) by applying a truncated least squares loss function. The robustness of R-LSPTSVM is proved from a weighted perspective. Furthermore, we obtain a sparse solution of R-LSPTSVM by using the pivoting Cholesky factorization method in the primal space. Finally, the sparse R-LSPTSVM algorithm (SR-LSPTSVM) is proposed. Experimental results show that SR-LSPTSVM is insensitive to outliers and can deal with large-scale datasets quickly.
Funding: the financial support of the National Natural Science Foundation of China (Grant Nos. 42074016, 42104025, 42274057, and 41704007), the Hunan Provincial Natural Science Foundation of China (Grant No. 2021JJ30244), and the Scientific Research Fund of the Hunan Provincial Education Department (Grant No. 22B0496) is acknowledged.
Funding: supported in part by the National Natural Science Foundation of China (No. 62027801).
基金This work was funded by the project of State Grid Hunan Electric Power Research Institute(No.SGHNDK00PWJS2210033).
文摘The distribution network exhibits complex structural characteristics,which makes fault localization a challenging task.Especially when a branch of the multi-branch distribution network fails,the traditional multi-branch fault location algorithm makes it difficult to meet the demands of high-precision fault localization in the multi-branch distribution network system.In this paper,the multi-branch mainline is decomposed into single branch lines,transforming the complex multi-branch fault location problem into a double-ended fault location problem.Based on the different transmission characteristics of the fault-traveling wave in fault lines and non-fault lines,the endpoint reference time difference matrix S and the fault time difference matrix G were established.The time variation rule of the fault-traveling wave arriving at each endpoint before and after a fault was comprehensively utilized.To realize the fault segment location,the least square method was introduced.It was used to find the first-order fitting relation that satisfies the matching relationship between the corresponding row vector and the first-order function in the two matrices,to realize the fault segment location.Then,the time difference matrix is used to determine the traveling wave velocity,which,combined with the double-ended traveling wave location,enables accurate fault location.
文摘One-class classification problem has become a popular problem in many fields, with a wide range of applications in anomaly detection, fault diagnosis, and face recognition. We investigate the one-class classification problem for second-order tensor data. Traditional vector-based one-class classification methods such as one-class support vector machine (OCSVM) and least squares one-class support vector machine (LSOCSVM) have limitations when tensor is used as input data, so we propose a new tensor one-class classification method, LSOCSTM, which directly uses tensor as input data. On one hand, using tensor as input data not only enables to classify tensor data, but also for vector data, classifying it after high dimensionalizing it into tensor still improves the classification accuracy and overcomes the over-fitting problem. On the other hand, different from one-class support tensor machine (OCSTM), we use squared loss instead of the original loss function so that we solve a series of linear equations instead of quadratic programming problems. Therefore, we use the distance to the hyperplane as a metric for classification, and the proposed method is more accurate and faster compared to existing methods. The experimental results show the high efficiency of the proposed method compared with several state-of-the-art methods.
Abstract: In response to the complex characteristics of actual low-permeability tight reservoirs, this study develops a meshless numerical simulation method for oil-water two-phase flow in such reservoirs that accounts for complex boundary shapes. Using radial basis function point interpolation, the method approximates shape functions for the unknown functions within each node's influence domain. The shape functions constructed by this meshless interpolation have the δ-function property, which facilitates the handling of essential conditions such as the controlled bottom-hole flowing pressure in horizontal wells. Moreover, the meshless method offers greater flexibility and freedom than grid-cell discretization, making it simpler to discretize complex geometries. A variational principle for the flow-control equation group is introduced via a weighted least squares meshless method, and the pressure distribution is solved implicitly. Example results demonstrate that the computational outcomes of the meshless point-cloud model, which has relatively few degrees of freedom, agree closely with those of the Discrete Fracture Model (DFM) employing refined grid partitioning, with pressure calculation accuracy exceeding 98.2%. Compared with high-resolution grid-based methods, the meshless method achieves a better balance between computational efficiency and accuracy. The impact of fracture half-length on horizontal-well productivity is also discussed; the results indicate that increasing the fracture half-length is an effective strategy for enhancing cumulative oil production.
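The δ-function property of radial basis function point interpolation means the interpolant reproduces the nodal values exactly. A minimal 1D sketch with a Gaussian basis and hypothetical nodes (the paper works in 2D/3D with pressure fields; this only illustrates the interpolation step):

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf(r, c=1.0):
    return math.exp(-(r / c) ** 2)   # Gaussian radial basis

nodes = [0.0, 0.5, 1.0, 1.5]      # hypothetical nodal points
values = [1.0, 0.8, 0.3, 0.1]     # hypothetical nodal field values
A = [[rbf(abs(xi - xj)) for xj in nodes] for xi in nodes]
coeff = gauss_solve(A, values)

def interp(x):
    """RBF interpolant; passes exactly through every nodal value."""
    return sum(a * rbf(abs(x - xj)) for a, xj in zip(coeff, nodes))
```

Because the coefficients solve the collocation system exactly, `interp(nodes[i])` equals `values[i]`, which is precisely what makes essential (Dirichlet-type) conditions easy to impose at nodes.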
Abstract: This article compares the probability method and the least squares method in the design of linear predictive models. It points out that the two approaches have distinct theoretical foundations and, under certain assumptions, can lead to different or similar results in precision and performance. The article underlines the importance of comparing the two approaches so as to choose the one best suited to the context, the available data, and the modeling objectives.
Abstract: Laminated composites are widely used in many engineering industries, such as aircraft, spacecraft, boat hulls, racing-car bodies, and storage tanks. We analyze the 3D deformations of a multilayered, linear elastic, anisotropic rectangular plate subjected to arbitrary boundary conditions on one edge and simply supported on the others. The rectangular laminate consists of anisotropic, homogeneous laminae of arbitrary thicknesses. This study presents the elastic analysis of laminated composite plates subjected to sinusoidal mechanical loading under arbitrary boundary conditions. Least squares finite element solutions for displacements and stresses are investigated using a state-space model, which allows us to solve for these field variables simultaneously over the composite structure's domain and to ensure that continuity conditions are satisfied at layer interfaces. The governing equations are derived from this model using the least-squares finite element method (LSFEM), which minimizes the squared residuals of the governing equations and the associated side conditions over the computational domain. The model comprises layerwise variables such as displacements, out-of-plane stresses, and in-plane strains, treated as independent variables. Numerical results demonstrate the response of the laminated composite plates under various arbitrary boundary conditions using the LSFEM and are compared with the 3D elasticity solutions available in the literature.
Funding: This project is supported by the Research Foundation for the Doctoral Program of Higher Education, China (No. 98033532).
Abstract: The main purpose of reverse engineering is to convert discrete data points into piecewise smooth, continuous surface models. Before carrying out model reconstruction it is important to extract geometric features, because the quality of the modeling greatly depends on the representation of features. Some techniques for fitting natural quadric surfaces with the least-squares method are described, and these techniques can be used directly to extract quadric-surface features during the segmentation of a point cloud.
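A sphere is one of the natural quadrics, and its least-squares fit can be made linear: writing x² + y² + z² = 2ax + 2by + 2cz + d with d = r² − a² − b² − c² gives a system linear in (a, b, c, d). A minimal sketch with hypothetical noise-free points (real segmented point-cloud data would be noisy):

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical points on a sphere centered at (1, 2, 3), radius 2.
pts = [(3, 2, 3), (-1, 2, 3), (1, 4, 3), (1, 0, 3), (1, 2, 5), (1, 2, 1)]
rows = [[2 * x, 2 * y, 2 * z, 1.0] for x, y, z in pts]
rhs = [x * x + y * y + z * z for x, y, z in pts]

# Normal equations (A^T A) p = A^T rhs for p = (a, b, c, d).
AtA = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
Atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(4)]
a, b, c, d = gauss_solve(AtA, Atb)
radius = (d + a * a + b * b + c * c) ** 0.5
```

Cylinders and cones need non-linear iterations for the axis direction, but the same linear trick seeds good initial estimates.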
Abstract: On 30 January 2020 the World Health Organization (WHO) declared the novel coronavirus outbreak a Public Health Emergency of International Concern (PHEIC), treating COVID-19 as an epidemic disease. On 31 December 2019, the WHO China Country Office had been informed of cases of pneumonia of unknown etiology detected in Wuhan city, Hubei Province, China. Just after the WHO's declaration, Ethiopia took various measures to protect itself from this public health emergency. The disease is transmitted from human to human and entered from outside the country, so checkpoints were opened at the country's different points of entry. Nevertheless, on 13 March 2020 the first positive case, a Japanese man, was reported, and the virus has continued to spread progressively through the population. Within 90 days of the first case, the country reported 2506 positive cases and 35 deaths; this research was carried out after collecting those first 90 days of data for Ethiopia. The daily report announced by the Ethiopian MoH is based on testing, so the reported positive COVID-19 cases do not reflect the actual number of cases in the country. This paper therefore contributes to planning and further interventions against the virus by producing predictive figures for the next 90 days. I use best-fit curve analysis with Python's polyfit function to predict the trend of COVID-19 cases in Ethiopia.
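The polyfit function referred to is presumably NumPy's `numpy.polyfit`, which fits a polynomial trend by least squares. A hedged sketch, assuming NumPy is available; the case numbers below are illustrative, not the paper's data:

```python
import numpy as np

days = np.arange(0, 6)                       # hypothetical day index
cases = np.array([2.0, 5.5, 10.0, 15.5, 22.0, 29.5])  # illustrative counts

# Degree-2 least-squares trend; coefficients come highest degree first.
coeffs = np.polyfit(days, cases, 2)

# Extrapolate the fitted trend to a future day.
future = np.polyval(coeffs, 8)
```

Extrapolating a fitted polynomial far beyond the observed window is fragile, which is why such forecasts are best treated as short-horizon trend indicators rather than epidemic models.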
基金supported by the Preeminent Youth Fund of Sichuan Province,China(Grant No.2012JQ0012)the National Natural Science Foundation of China(Grant Nos.11173008,10974202,and 60978049)the National Key Scientific and Research Equipment Development Project of China(Grant No.ZDYZ2013-2)
Abstract: For accurate extraction of the cavity decay time, a data-point selection step is added to the weighted least squares method. We derive the expected precision, accuracy, and computational cost of this improved method, and examine these performances by simulation. Comparing this method with the nonlinear least squares fitting (NLSF) method and the linear regression of the sum (LRS) method in both derivations and simulations, we find that it can achieve the same or even better precision, comparable accuracy, and lower computational cost. We also test the method on experimental decay signals; the results agree with those obtained from the nonlinear least squares fitting method.
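A common form of the weighted least squares step for decay signals V(t) = A·exp(−t/τ) is to fit ln V linearly with weights proportional to V², compensating for the noise amplification of the log transform. A minimal sketch with a hypothetical noise-free signal (the paper's point-selection refinement is not reproduced here):

```python
import math

def weighted_decay_fit(ts, vs):
    """Weighted LS fit of ln V = ln A - t/tau, weights w_i = V_i^2;
    returns the decay time tau."""
    ys = [math.log(v) for v in vs]
    ws = [v * v for v in vs]
    Sw = sum(ws)
    Swx = sum(w * t for w, t in zip(ws, ts))
    Swy = sum(w * y for w, y in zip(ws, ys))
    Swxx = sum(w * t * t for w, t in zip(ws, ts))
    Swxy = sum(w * t * y for w, t, y in zip(ws, ts, ys))
    slope = (Sw * Swxy - Swx * Swy) / (Sw * Swxx - Swx * Swx)
    return -1.0 / slope

ts = [0.0, 1.0, 2.0, 3.0, 4.0]
vs = [math.exp(-t / 2.0) for t in ts]   # hypothetical signal, tau = 2
tau = weighted_decay_fit(ts, vs)
```

With noisy data, restricting the fit to samples above the noise floor (the kind of data-point selection the paper proposes) prevents the log of near-zero samples from dominating the fit.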
Abstract: An enhanced small-signal model is introduced to capture the influence of the impact ionization effect on the performance of InAs/AlSb HFETs, in which an optimized fitting function D(ωτi) in the form of a least squares approximation is proposed to further enhance the accuracy in modeling the frequency dependence of the impact ionization effect. The enhanced model with D(ωτi) accurately characterizes the key S-parameters of the InAs/AlSb HFET over a wide frequency range with a very low error function EF. It is demonstrated that the new fitting function D(ωτi) helps to further improve the modeling accuracy.
Abstract: This paper starts from the time-invariant case of the time-varying deformation model, gives the displacement distribution model for that case, and thereby further simplifies the study of the deformation model. The paper discusses least squares fitting of the coordinate-parameter model, i.e. the parameters of the deformation model. The discussion adopts cubic B-splines and a two-step fitting of multidimensional scattered data, which makes the fitted function closely approximate the coordinate-parameter model and makes the calculation easier.
Funding: Project (2011467001) supported by the Ministry of Environmental Protection of China; Project (2010DFB94130) supported by the Ministry of Science and Technology of China.
Abstract: For physical ozone absorption without reaction, two parameter-estimation methods, i.e. common linear least squares fitting and the non-linear Simplex search method, were applied to determine the ozone mass-transfer coefficient during absorption, and both methods give almost the same coefficient. For chemical absorption with an ozone decomposition reaction, however, the linear least squares fitting method is not applicable, because the model describing the ozone concentration dissolved in water cannot be linearized. The non-linear Simplex method obtains the mass-transfer coefficient by minimizing the sum of squared differences between the simulated and experimental ozone concentrations over the whole absorption process, without requiring a linear relationship between the dissolved ozone concentration and the absorption time during the initial stage of absorption. Comparison of the simulated and experimental concentration profiles demonstrates that the Simplex method can determine the ozone mass-transfer coefficient accurately and efficiently, with wide applicability.
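The Simplex search referred to is the Nelder-Mead direct search. A minimal variant (reflection, expansion, inside contraction, shrink) fitting a hypothetical saturation model C(t) = Cs·(1 − exp(−kLa·t)), with made-up data rather than the paper's experiments:

```python
import math

def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=500):
    """Minimal Nelder-Mead simplex search with standard coefficients."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [c + (c - w) for c, w in zip(centroid, worst)]
        if f(refl) < f(best):
            expa = [c + 2 * (c - w) for c, w in zip(centroid, worst)]
            simplex[-1] = expa if f(expa) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [c + 0.5 * (w - c) for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink all vertices toward the best one
                simplex = [best] + [[(b + q) / 2 for b, q in zip(best, p)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

# Hypothetical dissolved-ozone curve: C(t) = Cs * (1 - exp(-kLa * t)).
ts = [0.5 * i for i in range(1, 21)]
data = [8.0 * (1 - math.exp(-0.3 * t)) for t in ts]

def sse(p):
    """Sum of squared differences over the whole absorption process."""
    cs, kla = p
    return sum((cs * (1 - math.exp(-kla * t)) - c) ** 2
               for t, c in zip(ts, data))

cs_fit, kla_fit = nelder_mead(sse, [5.0, 0.5])
```

Because the objective is the whole-curve residual, no linearization of the concentration model is needed, which is exactly the advantage the abstract describes.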
Abstract: Analysis of stock-recruitment (SR) data is most often done by fitting various SR relationship curves to the data. Fish population dynamics data often contain stochastic variation and measurement errors, which usually bias a regression analysis. This paper presents a robust regression method, least median of squared orthogonal distances (LMD), which is insensitive to abnormal values in both the dependent and independent variables of a regression analysis. Outliers whose variance differs significantly from the rest of the data can be identified in a residual analysis; the least squares (LS) method is then applied to the SR data with the identified outliers down-weighted. The application of LMD and the LMD-based reweighted least squares (RLS) method to simulated and real fisheries SR data is explored.
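For a straight-line model, the least-median criterion can be sketched exactly by trying the line through every pair of points and keeping the one whose squared orthogonal residuals have the smallest median; real LMD implementations use random resampling instead of this O(n²) enumeration. Hypothetical data with one outlier:

```python
import statistics

def lms_line(points):
    """Exhaustive least-median-of-squares line fit: for each pair,
    score the line by the median squared orthogonal distance."""
    best = None
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            a, b = y2 - y1, x1 - x2          # line: a*x + b*y + c = 0
            c = -(a * x1 + b * y1)
            norm = a * a + b * b
            if norm == 0:
                continue
            med = statistics.median(
                (a * x + b * y + c) ** 2 / norm for x, y in points)
            if best is None or med < best[0]:
                best = (med, a, b, c)
    return best

# Five points on y = 2x + 1 plus one gross outlier.
pts = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9), (2, 20)]
med, a, b, c = lms_line(pts)
slope = -a / b
```

Because the median ignores up to half of the residuals, the outlier at (2, 20) has no effect on the chosen line; the RLS step would then refit by LS with such flagged points down-weighted.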
基金supported by the National Natural Science Foundation of China under Grant No.62273083 and No.61973069Natural Science Foundation of Hebei Province under Grant No.F2020501012。
Abstract: Wireless sensor network (WSN) positioning performs well indoors, so it has received extensive attention in the positioning field. Non-line-of-sight (NLOS) propagation is a primary challenge in complex indoor environments. In this paper, a robust localization algorithm based on a Gaussian mixture model and polynomial fitting is proposed to address the NLOS error. First, fitted polynomials are used to predict the measured values, and the residuals between predicted and measured values are clustered by a Gaussian mixture model (GMM); the LOS and NLOS probabilities are calculated from the cluster centers. The measured values are then filtered in turn by a Kalman filter (KF), a variable-parameter unscented Kalman filter (VPUKF), and a variable-parameter particle filter (VPPF). The distance value processed by the KF and VPUKF and the distance value processed by the KF, VPUKF, and VPPF are combined according to these probabilities. Finally, the maximum likelihood method is used to estimate the position coordinates. Simulation comparisons show that the proposed algorithm achieves better positioning accuracy than the comparison algorithms considered in this paper, and it remains strongly robust in severe NLOS environments.
Abstract: In regression, although both aim at estimating the Mean Squared Prediction Error (MSPE), Akaike's Final Prediction Error (FPE) and the Generalized Cross-Validation (GCV) selection criteria are usually derived from quite different perspectives. Here, settling on the most commonly accepted definition of the MSPE as the expectation of the squared prediction error loss, we provide theoretical expressions for it that are valid for any linear model (LM) fitter, under random or non-random designs. Specializing these MSPE expressions, we derive closed formulas of the MSPE for some of the most popular LM fitters: Ordinary Least Squares (OLS), with or without a full-column-rank design matrix, and Ordinary and Generalized Ridge regression, the latter embedding smoothing-spline fitting. For each of these LM fitters, we then deduce a computable estimate of the MSPE which turns out to coincide with Akaike's FPE. Using a slight variation, we similarly obtain a class of MSPE estimates coinciding with the classical GCV formula for the same LM fitters.
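For the simplest case, full-rank OLS simple regression with p = 2 coefficients, the hat-matrix trace equals p and the two criteria reduce to closed forms: FPE = (RSS/n)·(n+p)/(n−p) and GCV = (RSS/n)/(1 − p/n)². A sketch with made-up data; the article derives these in full generality, which this does not reproduce:

```python
def ols_fpe_gcv(xs, ys):
    """Simple-regression OLS fit plus the FPE and GCV scores,
    using tr(H) = p = 2 for the full-rank OLS hat matrix."""
    n, p = len(xs), 2
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    rss = sum((y - intercept - slope * x) ** 2 for x, y in zip(xs, ys))
    fpe = (rss / n) * (n + p) / (n - p)
    gcv = (rss / n) / (1 - p / n) ** 2
    return slope, intercept, fpe, gcv

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.0]   # hypothetical noisy line
slope, intercept, fpe, gcv = ols_fpe_gcv(xs, ys)
```

Both scores inflate the in-sample RSS/n to account for optimism; for small n the GCV penalty (1 − p/n)⁻² is slightly stronger than the FPE factor (n+p)/(n−p), and the two agree asymptotically.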
Abstract: In factor analysis, a factor loading matrix is often rotated to a simple target matrix for interpretability. For this purpose, Procrustes rotation minimizes the discrepancy between the target and the rotated loadings using two types of approximation: 1) approximating the zeros in the target by the non-zeros in the loadings, and 2) approximating the non-zeros in the target by the non-zeros in the loadings. The central issue of Procrustes rotation considered in this article is that it treats the two types of approximation equally, although the former is more important for simplifying the loading matrix. Furthermore, a well-known issue of Simplimax is the computational inefficiency of estimating the sparse target matrix, which yields a considerable number of local minima. This research proposes a new rotation procedure consisting of two stages. The first stage estimates the sparse target matrix at lower computational cost by a regularization technique. In the second stage, the loading matrix is rotated to the target by a least squares criterion with a newly proposed generalized weighting that emphasizes the approximation of non-zeros in the loadings to zeros in the target. A simulation study and real data examples show that the proposed method reliably simplifies loading matrices.
基金Supported by the NSF of Hubei Province(2022CFD042)。
Abstract: This paper proposes a method combining the Haar wavelet and the least squares method to solve multi-dimensional stochastic Itô-Volterra integral equations. The approach transforms the stochastic integral equations into a system of algebraic equations, and an error analysis is proven. Finally, the effectiveness of the approach is verified on two numerical examples.
基金supported by the National Natural Science Foundation of China(6177202062202433+4 种基金621723716227242262036010)the Natural Science Foundation of Henan Province(22100002)the Postdoctoral Research Grant in Henan Province(202103111)。
Abstract: The least squares projection twin support vector machine (LSPTSVM) has faster computing speed than the classical least squares support vector machine (LSSVM). However, LSPTSVM is sensitive to outliers and its solution lacks sparsity, which makes it difficult for LSPTSVM to process large-scale datasets with outliers. In this paper, we propose a robust LSPTSVM model (called R-LSPTSVM) by applying a truncated least squares loss function, and we prove its robustness from a weighted perspective. Furthermore, we obtain a sparse solution of R-LSPTSVM by using the pivoting Cholesky factorization method in the primal space, yielding the sparse R-LSPTSVM algorithm (SR-LSPTSVM). Experimental results show that SR-LSPTSVM is insensitive to outliers and can handle large-scale datasets quickly.
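The robustness of a truncated squared loss can be seen directly: min(r², c²) behaves like the ordinary squared loss for small residuals but caps each point's contribution at c², so a single outlier cannot dominate the objective. A minimal sketch with hypothetical points (this illustrates only the loss, not the LSPTSVM model itself):

```python
def truncated_sq_loss(r, c=1.0):
    """Truncated squared loss min(r^2, c^2): quadratic for small
    residuals, capped at c^2 beyond the cutoff."""
    return min(r * r, c * c)

def total_loss(pts, a, b, loss):
    """Total loss of the line y = a*x + b over the data set."""
    return sum(loss(y - a * x - b) for x, y in pts)

# Hypothetical points near y = 2x, with one gross outlier at (5, 30).
pts = [(0, 0.1), (1, 2.0), (2, 3.9), (3, 6.1), (4, 8.0), (5, 30.0)]
square = lambda r: r * r

# (4.85, -3.77) is roughly the plain least-squares line for these
# points, dragged toward the outlier; (2.0, 0.0) fits the inliers.
trunc_inlier = total_loss(pts, 2.0, 0.0, truncated_sq_loss)
trunc_dragged = total_loss(pts, 4.85, -3.77, truncated_sq_loss)
sq_inlier = total_loss(pts, 2.0, 0.0, square)
sq_dragged = total_loss(pts, 4.85, -3.77, square)
```

The plain squared loss scores the outlier-dragged line better than the inlier line, while the truncated loss reverses that preference, which is the weighted-perspective robustness the paper proves for R-LSPTSVM.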