To address the fact that least-squares linear fitting assumes the data are free of errors in certain variables, and to make linear data fitting recover more accurately the relationship between quantities in scientific experiments and engineering practice, this article analyzes the data errors of common linear data fitting methods and proposes an improved least-distance-squares procedure based on the least squares method. Finally, the paper discusses the advantages and disadvantages of the two kinds of linear data fitting method through example analysis, and gives reasonable conditions for their application.
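The distinction drawn above can be illustrated with a small sketch: ordinary least squares minimizes vertical residuals (assuming x is error-free), while a "least distance" (total least squares) line minimizes perpendicular distances, appropriate when both coordinates carry error. The data below are illustrative, not from the paper.

```python
import numpy as np

def ols_fit(x, y):
    # ordinary least squares: minimize sum (y - (a*x + b))^2
    a, b = np.polyfit(x, y, 1)
    return a, b

def tls_fit(x, y):
    # least-distance fit: minimize sum of squared perpendicular
    # distances; the principal direction of the centered point
    # cloud gives the line.
    xm, ym = x.mean(), y.mean()
    A = np.column_stack([x - xm, y - ym])
    _, _, vt = np.linalg.svd(A)
    dx, dy = vt[0]              # direction of largest variance
    a = dy / dx
    b = ym - a * xm
    return a, b

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0
a1, b1 = ols_fit(x, y)
a2, b2 = tls_fit(x, y)
# on exact data both methods recover y = 2x + 1
```

On noisy data with errors in both coordinates, the two slopes diverge, which is exactly the regime the article's comparison addresses.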
Hepatitis B is an infectious disease worthy of attention. Considering the incubation period, psychological inhibition factor, vaccination, limited medical resources and horizontal transmission, an SIRS model is proposed to describe hepatitis B transmission dynamics. To describe the behavior changes caused by people's psychological changes, a non-monotonic incidence rate is adopted in the model, and a saturated treatment rate describes the limited medical resources. Mathematical analysis gives the existence conditions of the equilibria, forward or backward bifurcation, Hopf bifurcation and the Bogdanov-Takens bifurcation. Observation of hepatitis B case data in China reveals three main features: periodic outbreaks, aperiodic outbreaks, and periodic outbreaks turning into aperiodic ones. Accordingly, we select three representative regions, Jiangxi province, Zhejiang province and Beijing, and use our model to fit the actual monthly hepatitis B case data. The estimated basic reproduction numbers are 1.7712, 1.4805 and 1.4132, respectively. The data-fitting results are consistent with the theoretical analysis. According to the sensitivity analysis of R0, we conclude that reducing contact, increasing the treatment rate, strengthening vaccination and revaccinating can effectively prevent and control the prevalence of hepatitis B.
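The two model ingredients named above can be sketched in a minimal simulation. All parameter values below are illustrative assumptions, not the paper's fitted values: the non-monotonic incidence k·S·I/(1 + a·I²) captures psychological inhibition at high case counts, and the saturated treatment r·I/(1 + b·I) captures limited medical resources.

```python
def simulate(S0, I0, R0, days, dt=0.01,
             k=0.5, a=0.1, mu=0.02, gamma=0.1, r=0.2, b=0.5, delta=0.05):
    # forward-Euler integration of a simplified SIRS system
    S, I, R = S0, I0, R0
    for _ in range(int(days / dt)):
        new_inf = k * S * I / (1.0 + a * I * I)   # non-monotonic incidence
        treated = r * I / (1.0 + b * I)           # saturated treatment
        dS = mu * (S + I + R) - new_inf - mu * S + delta * R
        dI = new_inf - gamma * I - treated - mu * I
        dR = gamma * I + treated - mu * R - delta * R
        S += dt * dS
        I += dt * dI
        R += dt * dR
    return S, I, R

S, I, R = simulate(0.99, 0.01, 0.0, days=50)
# births balance deaths here, so the total population is conserved
```

Because births and deaths balance, S + I + R stays constant, a quick sanity check on any implementation of such a compartmental model.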
We propose a new reconstruction scheme for the backward heat conduction problem. Using eigenfunction expansions, this ill-posed problem is solved through an optimization problem, which is essentially a regularizing scheme for the noisy input data, with both the number of truncation terms and the approximation accuracy for the final data as multiple regularizing parameters. A convergence rate analysis, depending on the strategy of choosing the regularizing parameters as well as the computational accuracy of the eigenfunctions, is given. Numerical implementations are presented to show the validity of this new scheme.
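The role of truncation as a regularizer can be seen on the simplest model problem (an illustrative sketch, not the paper's optimization scheme): for u_t = u_xx on [0, π] with zero boundary values, the eigenfunctions are sin(nx), so a final state with sine coefficients g_n at time T came from an initial state with coefficients g_n·exp(n²T). Truncating at N terms tames the exponential amplification of noisy high modes.

```python
import numpy as np

def backward_heat(g_coeffs, T, N):
    # amplify only the first N Fourier-sine coefficients of the
    # final data; higher (noise-dominated) modes are discarded
    rec = np.zeros_like(g_coeffs)
    for n in range(1, min(N, len(g_coeffs)) + 1):
        rec[n - 1] = g_coeffs[n - 1] * np.exp(n * n * T)
    return rec

T = 0.1
init = np.array([1.0, 0.0, 0.0, 0.0])                # initial state sin(x)
final = init * np.exp(-np.arange(1, 5) ** 2 * T)     # forward evolution
rec = backward_heat(final, T, N=2)
# truncation recovers the smooth initial coefficients exactly here
```

With noise added to `final`, larger N amplifies the noise by exp(N²T), which is why N must be balanced against the data accuracy, as the abstract describes.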
Given a set of scattered data with derivative values, where the data are noisy or extremely numerous, we use an extension of the penalized least squares method of von Golitschek and Schumaker [Serdica, 18 (2002), pp. 1001-1020] to fit the data. We show that the extended penalized least squares method produces a unique spline fitting the data, and we give the error bound for the extended method. Some numerical examples are presented to demonstrate the effectiveness of the proposed method.
With the observation of a series of ground-based laser-interferometer gravitational wave (GW) detectors such as LIGO and Virgo, nearly 100 GW events have been detected successively. At present, all detected GW events are generated by the mergers of compact binary systems and are identified through matched-filtering data processing. Based on matched filtering, we use the GW waveform of the Newtonian approximate (NA) model constructed by linearized theory to match the events detected by LIGO and injections to determine the coalescence time, and we fit the frequency curve to estimate the chirp masses of binary black holes (BBHs). The average chirp mass of our results is 22.05_(-6.31)^(+6.31) M_⊙, very close to the 23.80_(-3.52)^(+4.83) M_⊙ provided by GWOSC. In this way we can analyze LIGO GW events and estimate the chirp masses of the BBHs. This work presents the feasibility and accuracy of the low-order approximate model and data fitting in GW data processing. It is beneficial for further data processing and has research value for the preliminary application of GW data.
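At leading (Newtonian) order, the frequency drift of an inspiralling binary determines the chirp mass via M_c = (c³/G)·[(5/96)·π^(-8/3)·f^(-11/3)·ḟ]^(3/5), which is the relation behind fitting the frequency curve. A sketch with round illustrative numbers, not the paper's LIGO fits:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
MSUN = 1.989e30      # kg

def chirp_mass(f, fdot):
    # leading-order chirp mass from frequency f and its time derivative
    return (C**3 / G) * ((5.0 / 96.0) * math.pi**(-8.0 / 3.0)
                         * f**(-11.0 / 3.0) * fdot) ** (3.0 / 5.0)

def fdot_from_mc(f, mc):
    # inverse relation, used here only as a consistency check
    return (96.0 / 5.0) * math.pi**(8.0 / 3.0) \
        * (G * mc / C**3)**(5.0 / 3.0) * f**(11.0 / 3.0)

mc_in = 22.0 * MSUN      # of the order of the paper's average chirp mass
f = 50.0                 # Hz, in the detector band
mc_out = chirp_mass(f, fdot_from_mc(f, mc_in)) / MSUN
# the two relations round-trip to the input chirp mass
```

In practice f(t) is read off the time-frequency track of the event and ḟ estimated by fitting, so the measured drift at a given frequency directly yields M_c.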
As a basic protective element, the steel plate has attracted worldwide attention because of frequent threats of explosive loads. This paper reports the relationships between microscopic defects of Q345 steel plate under explosive load and its macroscopic dynamics simulation. Firstly, the defect characteristics of the steel plate were investigated by stereoscopic microscopy (SM) and scanning electron microscopy (SEM). At the macroscopic level, the defects were cave formations concentrated within 0-3.0 cm of the explosion center, while at the microscopic level, cavity and void formation were the typical damage characteristics. The difference in defect morphology at different positions was the combined result of high temperature and high pressure. Secondly, the variation of the mechanical properties of the steel plate under explosive load was studied. The Arbitrary Lagrange-Euler (ALE) algorithm and a multi-material fluid-structure coupling method were used to simulate the explosion process. The accuracy of the method was verified by comparing the simulated deformation with the experimental results, and the pressure and stress at different positions on the plate surface were obtained. The simulation results indicated that the critical pressure causing the plate defects is approximately 2.01 GPa. On this basis, it was found that the variations of surface pressure and of microscopic defect area of the Q345 steel plate were strikingly similar, and the corresponding mathematical relationship between them was established.
Compared with the monomolecular growth fitting model (MGFM) and the logistic fitting model (LFM), the relationship is better expressed by a cubic polynomial fitting model (CPFM). This paper illustrates that the explosive defect characteristics of a metal plate at the microscopic level can be explored by analyzing its macroscopic dynamic mechanical response.
The fitting of lifetime distributions to real-life data has been studied in various fields of research. As real-world scenarios evolve, increasingly complex data will continue to emerge, and many researchers have made commendable efforts to develop new lifetime distributions that can fit such data. In this paper, we utilize the KM-transformation technique to increase the flexibility of the power Lindley distribution, resulting in the Kavya-Manoharan power Lindley (KMPL) distribution. We study the mathematical treatment of the KMPL distribution in detail and adapt the widely used method of maximum likelihood to estimate its unknown parameters. We carry out a Monte Carlo simulation study to investigate the performance of the maximum likelihood estimates (MLEs) of the parameters of the KMPL distribution. To demonstrate the effectiveness of the KMPL distribution for data fitting, we use a real dataset comprising the waiting times of 100 bank customers, and compare the KMPL distribution with other models that are extensions of the power Lindley distribution. Based on several statistical model selection criteria, the summary results of the analysis favor the KMPL distribution. We further investigate the density fits and probability-probability (p-p) plots to validate the superiority of the KMPL distribution over the competing distributions for fitting the waiting-time dataset.
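A hedged sketch of the construction named above, with formulas as recalled from the literature (verify against the paper before relying on them): the Kavya-Manoharan transform maps a baseline CDF F into G(x) = e/(e-1)·(1 - exp(-F(x))), and the power Lindley baseline has CDF F(x) = 1 - (1 + b·x^a/(b+1))·exp(-b·x^a).

```python
import math

def power_lindley_cdf(x, a, b):
    # power Lindley baseline CDF (shape a, scale b), as recalled
    t = b * x**a
    return 1.0 - (1.0 + t / (b + 1.0)) * math.exp(-t)

def kmpl_cdf(x, a, b):
    # Kavya-Manoharan transform of the baseline CDF
    F = power_lindley_cdf(x, a, b)
    return math.e / (math.e - 1.0) * (1.0 - math.exp(-F))

vals = [kmpl_cdf(x, a=1.5, b=0.8) for x in (0.0, 1.0, 2.0, 50.0)]
# behaves as a proper CDF: starts at 0, increases, tends to 1
```

A useful property of the KM family is that the transform introduces no new parameters, so the added flexibility comes purely from reshaping the baseline.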
Based on the definition of MQ-B-splines, this article constructs five types of univariate quasi-interpolants for non-uniformly distributed data. The error estimates and the shape-preserving properties are shown in detail, and examples are given to demonstrate the capacity of the quasi-interpolants for curve representation.
The state estimation of a maneuvering target whose trajectory shape is independent of its dynamic characteristics is studied. Conventional motion models in Cartesian coordinates imply that the trajectory of a target is completely determined by its dynamic characteristics. However, this is not true in road-target, sea-route-target or flight-route-target tracking applications, where the trajectory shape is uncoupled from the target's velocity properties. In this paper, a new estimation algorithm based on separately modeling the target trajectory shape and the dynamic characteristics is proposed. The trajectory of a target over a sliding window is described by a linear function of the arc length. To determine the unknown target trajectory, an augmented system is derived by treating the unknown coefficients of the function as states in mileage coordinates. At every estimation cycle except the first, the interaction (mixing) stage of the proposed algorithm starts from the latest estimated base state and a recalculated parameter vector determined by least squares (LS). Numerical experiments are conducted to assess the performance of the proposed algorithm. Simulation results show that it achieves better performance than conventional coupled-model-based algorithms in the presence of target maneuvers.
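The core idea of describing the window trajectory as a linear function of arc length can be sketched as a least-squares fit of x(s) = a_x + b_x·s and y(s) = a_y + b_y·s, with s approximated by cumulative chord length (an illustrative simplification of the paper's augmented-state formulation):

```python
import numpy as np

def fit_trajectory(xs, ys):
    # cumulative chord length approximates arc length s
    ds = np.hypot(np.diff(xs), np.diff(ys))
    s = np.concatenate([[0.0], np.cumsum(ds)])
    # least-squares fit of each coordinate as a linear function of s
    A = np.column_stack([np.ones_like(s), s])
    (ax, bx), _, _, _ = np.linalg.lstsq(A, xs, rcond=None)
    (ay, by), _, _, _ = np.linalg.lstsq(A, ys, rcond=None)
    return (ax, bx), (ay, by)

# a straight road segment at 45 degrees
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.0, 1.0, 2.0, 3.0])
(ax, bx), (ay, by) = fit_trajectory(xs, ys)
# (bx, by) are the direction cosines of the segment
```

The slope coefficients (b_x, b_y) form a unit direction vector, which is why the shape description stays decoupled from how fast the target moves along the path.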
In the working state, the dynamic performance of a dry gas seal, generated by the rotating end face with spiral grooves, is determined by the open force of the gas film and the leakage flow rate. Generally, the open force and the leakage flow rate can be obtained by the finite element method, computational fluid dynamics, or experimental measurement, but these measurements and calculations are time-consuming. In this paper, an approximate model of parallel grooves based on narrow groove theory is used to establish the dynamic equations of the gas film and obtain its dynamic parameters. The nonlinear differential equations of the gas film model are solved by the Runge-Kutta method and the shooting method. The numerical values of the pressure profiles, leakage flux and opening force on the seal surface are integrated and then compared with experimental data to check the reliability of the numerical simulation. The results show that the numerical simulation curves are in good agreement with the experimental values, and that the opening force and leakage flux are strongly correlated with the operating parameters. A function-coupling method is then introduced to analyze the numerical results and obtain correlation formulae for the opening force and leakage flux in terms of the operating parameters, i.e., the inlet pressure and the rotating speed. This study intends to provide an effective way to predict the aerodynamic performance for designing and optimizing the groove styles in dry gas seals rapidly and accurately.
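The Runge-Kutta plus shooting strategy named above can be sketched generically: integrate a two-point boundary-value problem with classical RK4 and adjust the unknown initial slope by bisection until the far boundary condition is met. This is demonstrated on y'' = -y, y(0) = 0, y(π/2) = 1 (an illustrative stand-in, not the gas-film equations themselves).

```python
import math

def rk4_final(slope, f, x0, x1, n=200):
    # integrate y'' = f(x, y) as the system (y, v) with classical RK4,
    # returning y(x1) for the trial initial slope y'(x0) = slope
    h = (x1 - x0) / n
    y, v = 0.0, slope
    x = x0
    for _ in range(n):
        def deriv(x, y, v):
            return v, f(x, y)
        k1 = deriv(x, y, v)
        k2 = deriv(x + h / 2, y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = deriv(x + h / 2, y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = deriv(x + h, y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return y

def shoot(f, x0, x1, target, lo, hi, tol=1e-10):
    # bisect on the initial slope until y(x1) hits the target value
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if rk4_final(mid, f, x0, x1) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

slope = shoot(lambda x, y: -y, 0.0, math.pi / 2, 1.0, 0.0, 2.0)
# the exact solution y = sin(x) has initial slope 1
```

Bisection suffices here because the terminal value is monotone in the trial slope; stiffer film equations would typically use a Newton update on the slope instead.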
Spherical indentations that rely on original data are analyzed with the physically correct mathematical formula and its integration, which take into account the change of radius over depth upon penetration. Linear plots, phase-transition onsets, energies, and pressures are obtained algebraically for germanium, zinc oxide and gallium nitride. There are low-pressure phase transitions that correspond to, or are not resolved by, hydrostatic anvil onset pressures. This enables the attribution of polymorph structures by comparison with known structures from pulsed laser deposition or molecular beam epitaxy and twinning. Spherical indentation is the easiest way to synthesize and further characterize polymorphs, now available in pure form under a diamond calotte and in contact with their corresponding less dense polymorph. The unprecedented results and new possibilities require loading curves from experimental data. These are now easily distinguished from data that are "fitted" to make them concur with the widely used but unphysical Johnson formula for spheres (P = (4/3)h^(3/2)R^(1/2)E*), which does not take care of the R/h variation. Challenging it is indispensable, because its use involves "fitting equations" to make the data concur, and such faked reports (with no experimental data) provide dangerously false moduli and theories. Fitted spherical indentation reports with radii ranging from 4 to 250 μm are identified for PDMS, GaAs, Al, Si, SiC, MgO, and steel. The detailed analysis reveals characteristic features.
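For reference, the Johnson sphere formula quoted (and criticized) above is trivial to evaluate; a quick check shows its pure h^(3/2) scaling with fixed radius, which is exactly what ignoring the R/h variation implies. Units and values here are illustrative.

```python
def johnson_load(h, R, Estar):
    # the formula as quoted in the abstract: P = (4/3) h^(3/2) R^(1/2) E*
    return (4.0 / 3.0) * h**1.5 * R**0.5 * Estar

P1 = johnson_load(1.0, 10.0, 1.0)
P2 = johnson_load(4.0, 10.0, 1.0)
# quadrupling the depth multiplies the load by 4**1.5 = 8
```

Deviation of measured loading curves from this fixed power law is the authors' diagnostic for data that were not force-fitted to the formula.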
Based on an analysis of 280 Type Ia supernova (SNIa) and gamma-ray burst redshifts in the range z = 0.0104 - 8.1, the Hubble diagram is shown to follow a strictly exponential slope, predicting an exponentially expanding or static universe. At redshifts > 2 - 3, ΛCDM models show poor agreement with the observed data. Based on the results presented in this paper, the Hubble diagram test does not necessarily support the idea of expansion according to the big-bang concordance model.
In experimental tests, besides data within the range of allowable error, experimenters usually obtain some unexpected wrong data called bad points. In usual experimental data processing, methods for excluding bad points by automatic programming are seldom considered by researchers. This paper presents a new method to reject bad points based on the Hough transform, modified to save computation and memory. It is suited to linear data processing and can be extended to process data that can be transformed to and from linear form, i.e., curved lines, which the Hough transform can effectively detect. The premise is that the distribution of the data, such as linear or exponential, is predetermined. The algorithm starts by searching for an approximate curve that minimizes the sum of the parameters of the data points; the data points whose parameters exceed a self-adapting threshold are then deleted. Simulation experiments have shown that the proposed method performs efficiently and robustly.
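The rejection idea can be sketched as follows: accumulate votes in a coarse (θ, ρ) Hough space, take the strongest line, and drop points whose distance to it exceeds a threshold. The bin counts and threshold below are illustrative assumptions, and a fixed threshold stands in for the paper's self-adapting one.

```python
import math

def hough_reject(points, n_theta=180, rho_step=0.1, thresh=0.5):
    # vote: each point maps to a sinusoid rho(theta) in Hough space
    votes = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t, round(rho / rho_step))
            votes[key] = votes.get(key, 0) + 1
    # the strongest accumulator cell gives the dominant line
    t_best, r_best = max(votes, key=votes.get)
    theta = math.pi * t_best / n_theta
    rho = r_best * rho_step
    # keep only points close to that line
    return [(x, y) for x, y in points
            if abs(x * math.cos(theta) + y * math.sin(theta) - rho) <= thresh]

pts = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0), (3.0, 4.0), (2.0, 9.0)]
good = hough_reject(pts)
# the stray point (2, 9) is rejected; the four collinear points survive
```

Because voting ignores outliers entirely rather than averaging over them, the dominant line is found robustly even with a substantial fraction of bad points, which is the property the method exploits.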
Navigation software uses the positioning system to determine traffic conditions on road sections in advance, so as to predict their travel time. However, in the case of traffic congestion, the accuracy of the predicted time is low. After empirical analysis, this paper establishes a multi-factor comprehensive prediction function by studying seven factors: traffic flow, number of stops, traffic light duration, road network density, average speed, road area, and number of intersections, in order to accurately predict the transit time of congested road sections. The gray correlation coefficients of the seven factors obtained from gray correlation analysis are 0.9827, 0.9679, 0.6747, 0.8030, 0.9445, 0.8759, and 0.4328, respectively. The correlation coefficients of traffic volume, number of stops, and average speed with the road congestion delay time were all about 95%, making these the main influencing factors of the study. This paper fits the main influencing factors to the delay time of congested roads, finding that the delay time varies parabolically with the traffic flow and the number of stops, and linearly with the average speed. Because the three impact factors have different weights on the delay time of congested roads, their gray correlation coefficients are normalized to obtain weights of 0.340, 0.334, and 0.326. The weighted fitting functions are combined by nonlinear summation to obtain a multi-factor comprehensive prediction function. Comparing the original data with the fitted data and calculating the accuracy of each fitting function shows that the residual and relative errors are close to 0, so the accuracy is high.
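The weight normalization described above is easy to reproduce from the three retained gray correlation coefficients:

```python
# coefficients for traffic flow, number of stops, average speed
coeffs = [0.9827, 0.9679, 0.9445]
total = sum(coeffs)
weights = [c / total for c in coeffs]
# close to the reported weights 0.340, 0.334, 0.326, summing to 1
```

Each weight is just a coefficient divided by the sum of the three, so the weights total one by construction.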
The management of an automatic lathe is discussed in this paper. The examining interval and the cutting-tool change policy are translated into an optimization problem for the purpose of decreasing the total economic loss. Finally, an effective computational example is given.
The data processing technique and the method of determining the optimal number of measured points are studied for the sphericity error measured on a coordinate measuring machine (CMM). The criterion for the minimum zone of a spherical surface is analyzed first, and then an approximation technique searching for the minimum sphericity error from the form data is studied. To obtain the minimum zone of the spherical surface, the radial separation is reduced gradually by moving the center of the concentric spheres along certain directions with certain steps; the algorithm is therefore precise and efficient. After the appropriate mathematical model for the approximation technique is created, a data processing program is developed accordingly. By processing the measured data with the developed program, the sphericity errors are evaluated for different numbers of measured points taken from the same sample, and the corresponding scatter diagram and fitted curve for the sample are graphically represented. The optimal number of measured points is determined through regression analysis. Experiment shows that both the data processing technique and the method for determining the optimal number of measured points are effective. On average, the obtained sphericity error is 5.78 μm smaller than the least squares solution, an accuracy increase of 8.63%, and the obtained optimal number of measured points is half the number usually measured.
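The center-moving search described above can be sketched as a pattern search: starting from an initial center (here simply the centroid), try moves along the coordinate directions with shrinking steps, accepting any move that reduces the radial separation max(r) - min(r). This is an illustrative simplification of the paper's approximation technique.

```python
import math

def radial_separation(center, pts):
    # minimum-zone objective: spread of radial distances from the center
    rs = [math.dist(center, p) for p in pts]
    return max(rs) - min(rs)

def min_zone_center(pts, step=0.1, tol=1e-6):
    # start from the centroid, then do coordinate-wise pattern search
    c = [sum(p[i] for p in pts) / len(pts) for i in range(3)]
    best = radial_separation(c, pts)
    while step > tol:
        improved = False
        for i in range(3):
            for d in (step, -step):
                trial = list(c)
                trial[i] += d
                val = radial_separation(trial, pts)
                if val < best:
                    c, best = trial, val
                    improved = True
        if not improved:
            step /= 2.0       # shrink the step when no move helps
    return c, best

# sample points on a unit sphere centered at (0.3, 0, 0)
pts = [(1.3, 0, 0), (-0.7, 0, 0), (0.3, 1, 0),
       (0.3, -1, 0), (0.3, 0, 1), (0.3, 0, -1)]
center, sep = min_zone_center(pts)
# for points exactly on a sphere the separation shrinks to ~0
```

The minimum-zone objective is smaller than the least-squares radial spread by construction, which is consistent with the 5.78 μm average improvement reported above.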
Some recent developments in the Universe (the accelerated expansion) cannot be explained by the conventional formulation of general relativity. We apply the recently proposed f(T,B) gravity to investigate the accelerated expansion of the Universe. By parametrizing the Hubble parameter and estimating the best-fit values of the model parameters b_0, b_1, and b_2 imposed from Supernovae Type Ia, Cosmic Microwave Background, Baryon Acoustic Oscillation, and Hubble data using the Markov Chain Monte Carlo method, we propose a method to determine precise solutions to the field equations. We then observe that the model appears to be in good agreement with the observations. A change from the deceleration to the acceleration phase of the Universe is shown by the evolution of the deceleration parameter. In addition, we investigate the statefinder analysis and the equation of state (EoS) parameters, along with the energy conditions. Furthermore, to discuss other cosmological parameters, we consider some well-known f(T,B) gravity models, specifically f(T,B) = aT^b + cB^d. Lastly, we find that the considered f(T,B) gravity models predict that the present Universe is accelerating and that the EoS parameter behaves like the ΛCDM model.
With regard to the Shepard method, this paper presents an improved version based on partial approximation to fit messy data. In the method, a partial cubic cardinal spline function, which belongs to C^2 and has good attenuation characteristics, is chosen as the weight function in the Shepard formula. The traditional Shepard method is thus improved, and better results can be achieved in practical applications.
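The Shepard construction is a weighted average of the data values with a weight that decays with distance. The paper uses a C² piecewise-cubic cardinal-spline weight; in the sketch below a simple compactly supported cubic bump (1 - r)³ on r < 1 stands in for it (an assumption for illustration; it is also C² across its support boundary).

```python
def shepard(x, xs, ys, radius=1.5):
    # weighted average of nearby data values; the weight vanishes
    # smoothly outside the given radius
    num = den = 0.0
    for xi, yi in zip(xs, ys):
        r = abs(x - xi) / radius
        if r < 1.0:
            w = (1.0 - r) ** 3      # smooth, compactly supported weight
            num += w * yi
            den += w
    return num / den if den else 0.0

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 2.0, 3.0]
val = shepard(1.5, xs, ys)
# reproduces the linear trend of the data between the nodes
```

Compact support is what gives the "partial" character: each evaluation touches only nearby data, unlike the classic global Shepard weights 1/r^p.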
A compartmental epidemiological mathematical model was developed to analyze the transmission dynamics of the Delta and Omicron variants of SARS-CoV-2 in Greece. The model was parameterized twice, during the 4th and 5th waves of the pandemic. The 4th wave refers to the period during which the Delta variant was dominant (approximately July to December 2021) and the 5th wave to the period during which the Omicron variant was dominant (approximately January to May 2022), in accordance with the official data from the National Public Health Organization (NPHO). Fitting methods were applied to evaluate important parameters connected with the transmission of the variants, as well as the social behavior of the population during these periods. The mathematical models revealed higher contagiousness and more cases of asymptomatic disease during the Omicron period, but a decreased rate of hospitalization compared to the Delta period. Parameters related to the behavior of the population in Greece were also assessed, specifically the use of protective masks and the abidance of social distancing measures. Simulations revealed that over 5,000 deaths could have been avoided if mask usage and social distancing had been 20% more efficient during the short period of the Delta and Omicron outbreaks. Furthermore, the spread of the variants was assessed using viral load data recorded from PCR tests at the 417 Army Equity Fund Hospital (NIMTS) in Athens; Ct values from 746 patients with COVID-19 were processed to explain transmission phenomena and disease severity. During the period when the Delta variant prevailed in the country, the average Ct value was 25.19 (range: 12.32-39.29), whereas during the period when the Omicron variant prevailed, the average Ct value was 28 (range: 14.41-39.36). In conclusion, our experimental study showed that the higher viral load, which is related to the Delta variant, may explain the severity of the disease; however, no correlation was confirmed regarding contagiousness. The results of the model, the Ct analysis and the official data from NPHO are consistent.
文摘As the basic protective element, steel plate had attracted world-wide attention because of frequent threats of explosive loads. This paper reports the relationships between microscopic defects of Q345 steel plate under the explosive load and its macroscopic dynamics simulation. Firstly, the defect characteristics of the steel plate were investigated by stereoscopic microscope(SM) and scanning electron microscope(SEM). At the macroscopic level, the defect was the formation of cave which was concentrated in the range of 0-3.0 cm from the explosion center, while at the microscopic level, the cavity and void formation were the typical damage characteristics. It also explains that the difference in defect morphology at different positions was the combining results of high temperature and high pressure. Secondly, the variation rules of mechanical properties of steel plate under explosive load were studied. The Arbitrary Lagrange-Euler(ALE) algorithm and multi-material fluid-structure coupling method were used to simulate the explosion process of steel plate. The accuracy of the method was verified by comparing the deformation of the simulation results with the experimental results, the pressure and stress at different positions on the surface of the steel plate were obtained. The simulation results indicated that the critical pressure causing the plate defects may be approximately 2.01 GPa. On this basis, it was found that the variation rules of surface pressure and microscopic defect area of the Q345 steel plate were strikingly similar, and the corresponding mathematical relationship between them was established. Compared with Monomolecular growth fitting models(MGFM) and Logistic fitting models(LFM), the relationship can be better expressed by cubic polynomial fitting model(CPFM). This paper illustrated that the explosive defect characteristics of metal plate at the microscopic level can be explored by analyzing its macroscopic dynamic mechanical response.
Abstract: The fitting of lifetime distributions to real-life data has been studied in various fields of research. As real-world scenarios continue to produce increasingly complex data, many researchers have made commendable efforts to develop new lifetime distributions that can fit such data. In this paper, we utilize the KM-transformation technique to increase the flexibility of the power Lindley distribution, resulting in the Kavya-Manoharan power Lindley (KMPL) distribution. We study the mathematical properties of the KMPL distribution in detail and adapt the widely used method of maximum likelihood to estimate its unknown parameters. We carry out a Monte Carlo simulation study to investigate the performance of the maximum likelihood estimates (MLEs) of the parameters of the KMPL distribution. To demonstrate the effectiveness of the KMPL distribution for data fitting, we use a real dataset comprising the waiting times of 100 bank customers. We compare the KMPL distribution with other models that are extensions of the power Lindley distribution. Based on several statistical model selection criteria, the summary results of the analysis favor the KMPL distribution. We further investigate the density fit and probability-probability (P-P) plots to validate the superiority of the KMPL distribution over the competing distributions for fitting the waiting-time dataset.
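A minimal sketch of how the KM transformation raises a base distribution to a new family, assuming the standard forms f_KM(x) = e/(e-1) f(x) e^(-F(x)) and F_KM(x) = e/(e-1) (1 - e^(-F(x))), with the published power Lindley pdf/cdf as the base; the parameter values are illustrative, not the paper's estimates:

```python
import math

E_CONST = math.e / (math.e - 1.0)  # normalizing constant of the KM transform

def km_pdf(pdf, cdf):
    """KM transform of a base (pdf, cdf): f_KM(x) = e/(e-1)*f(x)*exp(-F(x))."""
    return lambda x: E_CONST * pdf(x) * math.exp(-cdf(x))

def km_cdf(cdf):
    """F_KM(x) = e/(e-1) * (1 - exp(-F(x)))."""
    return lambda x: E_CONST * (1.0 - math.exp(-cdf(x)))

# Base: power Lindley with shape alpha and scale theta
def pl_pdf(x, alpha, theta):
    return (alpha * theta**2 / (theta + 1.0)) * (1.0 + x**alpha) \
        * x**(alpha - 1.0) * math.exp(-theta * x**alpha)

def pl_cdf(x, alpha, theta):
    t = theta * x**alpha
    return 1.0 - (1.0 + t / (theta + 1.0)) * math.exp(-t)

alpha, theta = 1.5, 0.8  # illustrative parameters
f = km_pdf(lambda x: pl_pdf(x, alpha, theta), lambda x: pl_cdf(x, alpha, theta))
F = km_cdf(lambda x: pl_cdf(x, alpha, theta))
```

Maximizing the sum of `log(f(x_i))` over a dataset with a numerical optimizer then yields the MLEs discussed in the abstract.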
Funding: Supported by the National Natural Science Foundation of China (19971017, 10125102).
Abstract: Based on the definition of MQ-B-splines, this article constructs five types of univariate quasi-interpolants for non-uniformly distributed data. The error estimates and shape-preserving properties are shown in detail, and examples are given to demonstrate the capacity of the quasi-interpolants for curve representation.
Funding: Supported by the National Natural Science Foundation of China (61671181).
Abstract: The state estimation of a maneuvering target whose trajectory shape is independent of its dynamic characteristics is studied. The conventional motion models in Cartesian coordinates imply that the trajectory of a target is completely determined by its dynamic characteristics. However, this does not hold in road-target, sea-route-target or flight-route-target tracking, where the trajectory shape is decoupled from the target's velocity properties. In this paper, a new estimation algorithm based on separate modeling of the target trajectory shape and the dynamic characteristics is proposed. The trajectory of a target over a sliding window is described by a linear function of the arc length. To determine the unknown trajectory, an augmented system is derived by treating the unknown coefficients of the function as states in mileage coordinates. At every estimation cycle except the first, the interaction (mixing) stage of the proposed algorithm starts from the latest estimated base state and a recalculated parameter vector determined by least squares (LS). Numerical experiments are conducted to assess the performance of the proposed algorithm. Simulation results show that it achieves better performance than conventional coupled-model-based algorithms in the presence of target maneuvers.
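The core idea of describing the trajectory over a window as a linear function of arc length, with coefficients solved by least squares, can be sketched as follows. The track points are hypothetical, and the paper's filter/mixing machinery is omitted:

```python
import numpy as np

# Hypothetical planar track points standing in for measurements along a route.
pts = np.array([[0.0, 0.0], [1.0, 0.5], [2.1, 1.0], [3.0, 1.4], [4.2, 2.1]])

# Arc length (mileage coordinate) approximated by cumulative chord length.
seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
s = np.concatenate([[0.0], np.cumsum(seg)])

# Trajectory shape as x(s), y(s) linear in arc length, solved by LS.
A = np.column_stack([np.ones_like(s), s])
coef_x, *_ = np.linalg.lstsq(A, pts[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, pts[:, 1], rcond=None)

def predict(si):
    """Position on the fitted trajectory at mileage si."""
    return coef_x[0] + coef_x[1] * si, coef_y[0] + coef_y[1] * si
```

In the full algorithm these coefficients would become augmented states that are re-estimated as the window slides.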
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51276125) and the National Key Basic Research Development Program of China (973 Program, Grant No. 2012CB720101).
Abstract: In the working state, the dynamic performance of a dry gas seal, generated by the rotating end face with spiral grooves, is determined by the opening force of the gas film and the leakage flow rate. Generally, the opening force and leakage flow rate can be obtained by the finite element method, computational fluid dynamics, or experimental measurement, but these calculations and measurements are time-consuming. In this paper, an approximate model of parallel grooves based on the narrow groove theory is used to establish the dynamic equations of the gas film for the purpose of obtaining its dynamic parameters. The nonlinear differential equations of the gas film model are solved by the Runge-Kutta method and the shooting method. The numerical values of the pressure profiles, leakage flux and opening force on the seal surface are integrated and then compared with experimental data to verify the reliability of the numerical simulation. The results show that the numerical simulation curves are in good agreement with the experimental values. Furthermore, the opening force and leakage flux prove to be strongly correlated with the operating parameters. A function-coupling method is then introduced to analyze the numerical results and obtain correlation formulae for the opening force and leakage flux in terms of the operating parameters, i.e., the inlet pressure and the rotating speed. This study intends to provide an effective way to predict the aerodynamic performance rapidly and accurately when designing and optimizing the groove styles in dry gas seals.
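The Runge-Kutta/shooting combination used above can be illustrated on a toy boundary value problem (y'' = -y with y(0) = 0, y(1) = sin(1); this is a stand-in, not the gas-film model itself). The unknown initial slope y'(0) is found by bisection so that the integrated solution hits the far boundary condition:

```python
import math

def rk4(f, y0, t0, t1, n=200):
    """Classical 4th-order Runge-Kutta for a first-order system y' = f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

def system(t, y):          # y'' = -y written as a first-order system
    return [y[1], -y[0]]

def shoot(slope):
    """Miss distance at the far boundary for a trial initial slope."""
    return rk4(system, [0.0, slope], 0.0, 1.0)[0] - math.sin(1.0)

# Bisection on the unknown initial slope y'(0); exact answer is cos(0) = 1.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid
slope = 0.5 * (lo + hi)
```

The gas-film equations are solved the same way, with the trial parameter chosen so the pressure profile satisfies the boundary pressures.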
Abstract: Spherical indentations that rely on original data are analyzed with the physically correct mathematical formula and its integration, which take into account the change of radius over depth upon penetration. Linear plots, phase-transition onsets, energies, and pressures are obtained algebraically for germanium, zinc oxide and gallium nitride. There are low-pressure phase transitions that correspond to, or are not resolved by, hydrostatic anvil onset pressures. This enables the attribution of polymorph structures by comparison with known structures from pulsed laser deposition or molecular beam epitaxy and twinning. Spherical indentation is the easiest way to synthesize and further characterize polymorphs, now available in pure form under a diamond calotte and in contact with their corresponding less dense polymorph. The unprecedented results and new possibilities require loading curves from experimental data. These are now easily distinguished from data that are "fitted" to make them concur with the widely used but unphysical Johnson formula for spheres (P = (4/3)h^(3/2)R^(1/2)E*), which does not account for the R/h variation. Challenging that formula is indispensable, because its use involves "fitting equations" to make the data concur. Such faked reports (with no experimental data) provide dangerously false moduli and theories. Fitted spherical indentation reports with radii ranging from 4 to 250 μm are identified for PDMS, GaAs, Al, Si, SiC, MgO, and steel. The detailed analysis reveals characteristic features.
Abstract: Based on an analysis of 280 Type Ia supernova (SNIa) and gamma-ray burst redshifts in the range z = 0.0104-8.1, the Hubble diagram is shown to follow a strictly exponential slope, predicting an exponentially expanding or static universe. At redshifts > 2-3, ΛCDM models show poor agreement with the observed data. Based on the results presented in this paper, the Hubble diagram test does not necessarily support the idea of expansion according to the big-bang concordance model.
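Testing whether data follow an exponential slope reduces to linear regression in log space. The sketch below uses synthetic data (a = 2, b = 0.35 are arbitrary), not the supernova sample:

```python
import numpy as np

# Synthetic data obeying y = a * exp(b*x) with 1% multiplicative noise;
# this stands in for a distance-redshift relation, not the actual catalog.
rng = np.random.default_rng(0)
x = np.linspace(0.1, 8.0, 50)
y = 2.0 * np.exp(0.35 * x) * (1 + 0.01 * rng.standard_normal(50))

# An exponential law is linear in log space: ln y = ln a + b*x
b, ln_a = np.polyfit(x, np.log(y), 1)
print(f"a = {np.exp(ln_a):.3f}, b = {b:.3f}")
```

A systematic deviation of the residuals `np.log(y) - (ln_a + b*x)` from zero would falsify the exponential hypothesis; for a strictly exponential slope they scatter randomly.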
Abstract: In experimental tests, besides data within the range of allowable error, experimenters usually obtain some unexpected wrong data, called bad points. In usual experimental data processing, methods that exclude bad points automatically are seldom considered by researchers. This paper presents a new method for rejecting bad points based on the Hough transform, modified to reduce computational and memory consumption. It is suited to linear data processing and can be extended to data that can be transformed to and from linear form, as well as to curved lines, which the Hough transform can detect effectively. The premise is that the distribution of the data, such as a linear or exponential distribution, is predetermined. The algorithm starts by searching for an approximate curve that minimizes the sum of the distance parameters of the data points; data points whose parameters exceed a self-adapting threshold are then deleted. Simulation experiments show that the proposed method performs efficiently and robustly.
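A minimal version of Hough-based bad-point rejection for straight-line data might look like this; the resolution and threshold values are illustrative, and the paper's memory-saving modifications and self-adapting threshold are replaced by a fixed distance cutoff:

```python
import numpy as np

def hough_line_inliers(points, rho_res=0.05, n_theta=180, keep_dist=0.1):
    """Vote in (theta, rho) space, take the strongest line, and reject
    points whose distance to it exceeds a threshold (the 'bad points')."""
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # rho = x*cos(theta) + y*sin(theta) for every point/angle pair
    rhos = pts[:, 0, None] * np.cos(thetas) + pts[:, 1, None] * np.sin(thetas)
    lo = rhos.min()
    n_rho = max(1, int(np.ceil((rhos.max() - lo) / rho_res)) + 1)
    acc = np.zeros((n_theta, n_rho), dtype=int)      # the accumulator
    bins = np.clip(((rhos - lo) / rho_res).round().astype(int), 0, n_rho - 1)
    for j in range(n_theta):
        np.add.at(acc[j], bins[:, j], 1)             # one vote per point
    j, k = np.unravel_index(acc.argmax(), acc.shape)
    theta, rho = thetas[j], lo + k * rho_res
    dist = np.abs(pts[:, 0] * np.cos(theta) + pts[:, 1] * np.sin(theta) - rho)
    return pts[dist <= keep_dist]

# Collinear points on y = x plus two deliberate bad points
pts = [(float(i), float(i)) for i in range(10)] + [(3.0, 8.0), (7.0, 1.0)]
good = hough_line_inliers(pts)
```

Because voting is global, the two bad points cannot drag the detected line toward themselves, unlike an ordinary least squares fit.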
Abstract: Navigation software uses the positioning system to determine the traffic conditions of road sections in advance, so as to predict their travel time. However, in the case of traffic congestion, the accuracy of the predicted time is low. After empirical analysis, this paper establishes a multi-factor comprehensive prediction function by studying seven factors: traffic flow, number of stops, traffic-light duration, road network density, average speed, road area, and number of intersections, with the aim of accurately predicting the transit time of congested road sections. The gray correlation coefficients of the seven factors obtained from gray correlation analysis are 0.9827, 0.9679, 0.6747, 0.8030, 0.9445, 0.8759, and 0.4328, respectively. The correlation coefficients of traffic volume, number of stops, and average speed with the road congestion delay time were all about 95%, making them the main influencing factors of the study. Since the prediction must be based on functions, this paper fits the main influencing factors to the delay time of congested roads. It is found that the delay time varies parabolically with the traffic flow and the number of stops, and linearly with the average speed. Because the three impact factors carry different weights on the delay time of congested roads, each factor must be weighted: the gray correlation coefficients of the main influencing factors are normalized to obtain weights of 0.340, 0.334, and 0.326. The weighted fitting functions are combined by nonlinear summation to obtain the multi-factor comprehensive prediction function. Comparing the original data with the fitted data and evaluating the fitting functions, it is found that the error of each fitting function is close to 0, the residuals and relative errors are small, and the accuracy is high.
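The gray correlation coefficients above come from gray relational analysis; a sketch of Deng's formulation is given below. The delay/flow series are hypothetical, and the normalization choice (dividing by the series mean) is one common convention, so the output is illustrative rather than one of the paper's values:

```python
import numpy as np

def grey_relational_grade(reference, factor, rho=0.5):
    """Deng's grey relational analysis: mean of the relational coefficients
    between a factor series and the reference series (both mean-normalized)."""
    r = np.asarray(reference, float)
    f = np.asarray(factor, float)
    r, f = r / r.mean(), f / f.mean()   # remove dimensional effects
    diff = np.abs(r - f)
    dmin, dmax = diff.min(), diff.max()
    # resolution coefficient rho = 0.5 is the customary choice
    coeff = (dmin + rho * dmax) / (diff + rho * dmax)
    return coeff.mean()

# Hypothetical series: congestion delay time vs traffic flow
delay = [10, 14, 19, 25, 31]
flow = [200, 280, 390, 500, 640]
grade = grey_relational_grade(delay, flow)
```

A grade near 1 indicates that the factor series tracks the reference closely, which is how the seven factors were ranked.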
Abstract: The management of an automatic lathe is discussed in this paper. The examination interval and the cutting-tool change policy are translated into an optimization problem for the purpose of decreasing total economic loss. Finally, an effective computational example is given.
Funding: This project is supported by the National Natural Science Foundation of China (No. 50475117) and the Municipal Science and Technology Commission of Tianjin, China (No. 0431835116).
Abstract: The data processing technique and the method for determining the optimal number of measured points are studied for the sphericity error measured on a coordinate measuring machine (CMM). The criterion for the minimum zone of a spherical surface is analyzed first, and then an approximation technique that searches for the minimum sphericity error from the form data is studied. To obtain the minimum zone of the spherical surface, the radial separation is reduced gradually by moving the center of the concentric spheres along certain directions with certain steps; the algorithm is therefore precise and efficient. After the appropriate mathematical model for the approximation technique is created, a data processing program is developed accordingly. By processing the measured data with the developed program, the sphericity errors are evaluated when different numbers of measured points are taken from the same sample, and the corresponding scatter diagram and fitted curve for the sample are graphically represented. The optimal number of measured points is determined through regression analysis. Experiments show that both the data processing technique and the method for determining the optimal number of measured points are effective. On average, the obtained sphericity error is 5.78 μm smaller than the least squares solution, an accuracy increase of 8.63%, and the obtained optimal number of measured points is half the number usually measured.
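The center-moving search for the minimum zone can be sketched as a shrinking-step coordinate search; the search directions, step schedule and synthetic data are illustrative, and the paper's exact strategy may differ:

```python
import numpy as np

def min_zone_sphericity(points, steps=200, step0=1.0):
    """Shrinking-step search for the sphere center that minimizes the radial
    separation max|r_i| - min|r_i| (the minimum-zone sphericity error)."""
    pts = np.asarray(points, float)
    center = pts.mean(axis=0)                    # start from the centroid

    def separation(c):
        r = np.linalg.norm(pts - c, axis=1)
        return r.max() - r.min()

    best, step = separation(center), step0
    dirs = np.vstack([np.eye(3), -np.eye(3)])    # six axial search directions
    for _ in range(steps):
        moved = False
        for d in dirs:
            cand = center + step * d
            s = separation(cand)
            if s < best:
                center, best, moved = cand, s, True
        if not moved:
            step *= 0.5                          # refine when no move improves
    return center, best

# Synthetic CMM data: points on a unit sphere centered at (0.3, -0.2, 0.1)
# with small radial form error.
rng = np.random.default_rng(1)
u = rng.standard_normal((400, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = np.array([0.3, -0.2, 0.1]) + u * (1.0 + 0.001 * rng.standard_normal((400, 1)))
center, err = min_zone_sphericity(pts)
```

The returned `err` is below the least squares residual spread because the center is moved specifically to shrink the radial separation.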
Funding: The Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan provided funding for this study (Grant No. AP09058240).
Abstract: Some recent developments in the Universe (accelerated expansion) cannot be explained by the conventional formulation of general relativity. We apply the recently proposed f(T,B) gravity to investigate the accelerated expansion of the Universe. By parametrizing the Hubble parameter and estimating the best-fit values of the model parameters b_0, b_1, and b_2 imposed from Supernovae type Ia, Cosmic Microwave Background, Baryon Acoustic Oscillation, and Hubble data using the Markov Chain Monte Carlo method, we propose a method to determine precise solutions to the field equations. We then observe that the model appears to be in good agreement with the observations. A change from the deceleration to the acceleration phase of the Universe is shown by the evolution of the deceleration parameter. In addition, we investigate the behavior of the statefinder analysis and the equation-of-state (EoS) parameters, along with the energy conditions. Furthermore, to discuss other cosmological parameters, we consider some well-known f(T,B) gravity models, specifically f(T,B) = aT^b + cB^d. Lastly, we find that the considered f(T,B) gravity models predict that the present Universe is accelerating and that the EoS parameter behaves like the ΛCDM model.
Abstract: With regard to the Shepard method, this paper presents an improved version based on local approximation to fit messy data. In the method, a piecewise cubic cardinal spline is chosen as the weight function φ(x) in the Shepard formula, where φ(x) ∈ C² and has good attenuation characteristics. The traditional Shepard method is thus improved, and better results can be achieved in practical applications.
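The classical Shepard formula being improved upon can be sketched as plain inverse-distance weighting; the paper's cubic cardinal-spline weight is replaced here by the standard 1/d^power weight, so this shows only the baseline method:

```python
import numpy as np

def shepard(xk, fk, x, power=2.0, eps=1e-12):
    """Classical Shepard interpolation in 1-D:
    S(x) = sum_k w_k(x)*f_k / sum_k w_k(x), with w_k = 1/|x - x_k|^power."""
    xk, fk = np.asarray(xk, float), np.asarray(fk, float)
    xq = np.atleast_1d(np.asarray(x, float))
    out = np.empty_like(xq)
    for i, xi in enumerate(xq):
        d = np.abs(xi - xk)
        if d.min() < eps:            # exactly on a node: return the nodal value
            out[i] = fk[d.argmin()]
        else:
            w = 1.0 / d**power
            out[i] = np.sum(w * fk) / np.sum(w)
    return out

nodes = np.array([0.0, 1.0, 2.0, 3.0])
vals = nodes**2                      # sample f(x) = x^2 at the nodes
result = shepard(nodes, vals, [1.0, 0.5])
```

Replacing `w` with a compactly supported C² spline weight, as the paper does, localizes the sum and smooths the flat spots that the global 1/d^2 weight produces at the nodes.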
Abstract: A compartmental epidemiological mathematical model was developed to analyze the transmission dynamics of the Delta and Omicron variants of SARS-CoV-2 in Greece. The model was parameterized twice, during the 4th and 5th waves of the pandemic. The 4th wave refers to the period during which the Delta variant was dominant (approximately July to December 2021) and the 5th wave to the period during which the Omicron variant was dominant (approximately January to May 2022), in accordance with official data from the National Public Health Organization (NPHO). Fitting methods were applied to evaluate important parameters connected with the transmission of the variants, as well as the social behavior of the population during these periods. The mathematical models revealed higher contagiousness and more cases of asymptomatic disease during the Omicron period, but a decreased rate of hospitalization compared with the Delta period. Parameters related to the behavior of the population in Greece were also assessed, specifically the use of protective masks and adherence to social distancing measures. Simulations revealed that over 5,000 deaths could have been avoided if mask usage and social distancing had been 20% more effective during the short period of the Delta and Omicron outbreaks. Furthermore, the spread of the variants was assessed using viral load data. The data were recorded from PCR tests at the 417 Army Equity Fund Hospital (NIMTS) in Athens, and the Ct values from 746 patients with COVID-19 were processed to explain transmission phenomena and disease severity. During the period when the Delta variant prevailed in the country, the average Ct value was 25.19 (range: 12.32-39.29), whereas during the period when the Omicron variant prevailed, the average Ct value was 28 (range: 14.41-39.36). In conclusion, our experimental study showed that the higher viral load associated with the Delta variant may explain the severity of the disease. However, no correlation was confirmed regarding contagiousness. The results of the model, the Ct analysis, and the official data from the NPHO are consistent.
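A compartmental model of this type reduces, in its simplest form, to the classic SIR equations. The sketch below integrates them with forward Euler using illustrative parameters (beta = 0.5, gamma = 0.2, i.e. R0 = 2.5), not the fitted values for Greece:

```python
def simulate_sir(beta, gamma, i0=1e-4, days=200, dt=0.1):
    """Minimal SIR integration (forward Euler) as a stand-in for the paper's
    larger compartmental model; fractions of the population, not counts."""
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i            # new infections leave S
        di = beta * s * i - gamma * i  # infections enter I, recoveries leave
        s += ds * dt
        i += di * dt
        peak = max(peak, i)
    r = 1.0 - s - i                   # recovered/removed by conservation
    return s, i, r, peak

# R0 = beta/gamma = 2.5 produces a substantial outbreak
s, i, r, peak = simulate_sir(beta=0.5, gamma=0.2)
```

Fitting such a model to case data, as in the paper, means choosing beta, gamma (and the extra compartments' rates) so the simulated incidence matches the reported curves; a 20% reduction in beta is how a behavioral intervention like masking enters the equations.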