In order to improve the fitting accuracy of college students' test scores, this paper proposes a two-component mixed generalized normal distribution, uses the maximum likelihood estimation method and the Expectation Conditional Maximization (ECM) algorithm to estimate parameters and conduct numerical simulation, and performs fitting analysis on the test scores of Linear Algebra and Advanced Mathematics of F University. The empirical results show that the two-component mixed generalized normal distribution is better than the commonly used two-component mixed normal distribution in fitting college students' test data, and has good application value.
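As a hedged illustration of the fitting step described above, the following is a minimal ECM-style sketch for a two-component mixture of generalized normal distributions; it is not the authors' implementation, and the function name, initialization, and optimizer choice are assumptions.

```python
# Minimal ECM-style sketch for a two-component generalized normal mixture.
# Not the paper's code: initialization and optimizer choice are illustrative.
import numpy as np
from scipy.stats import gennorm
from scipy.optimize import minimize

def fit_two_component_gennorm(x, n_iter=50):
    w = 0.5                                         # mixing weight
    p1 = [2.0, np.percentile(x, 25), np.std(x)]     # (beta, loc, scale)
    p2 = [2.0, np.percentile(x, 75), np.std(x)]
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        d1 = w * gennorm.pdf(x, *p1)
        d2 = (1.0 - w) * gennorm.pdf(x, *p2)
        r = d1 / (d1 + d2)
        # CM-steps: update the weight, then each component in turn
        w = r.mean()
        def nll(p, resp):
            beta, loc, scale = p
            if beta <= 0 or scale <= 0:
                return np.inf
            return -np.sum(resp * gennorm.logpdf(x, beta, loc, scale))
        p1 = minimize(nll, p1, args=(r,), method="Nelder-Mead").x
        p2 = minimize(nll, p2, args=(1.0 - r,), method="Nelder-Mead").x
    return w, p1, p2
```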
The intricate distribution of oil and water in tight rocks makes pinpointing oil layers challenging. While conventional identification methods offer potential solutions, their limited accuracy precludes them from being effective in their applications to unconventional reservoirs. This study employed nuclear magnetic resonance (NMR) spectrum decomposition to dissect the NMR T_2 spectrum into multiple sub-spectra. Furthermore, it employed laboratory NMR experiments to ascertain the fluid properties of these sub-spectra, aiming to enhance identification accuracy. The findings indicate that fluids of distinct properties overlap in the T_2 spectra, with bound water, movable water, bound oil, and movable oil appearing sequentially from the low-value zone to the high-value zone. Consequently, an oil layer classification scheme was proposed, which considers the physical properties of reservoirs, oil-bearing capacity, and the characteristics of both mobility and the oil-water two-phase flow. When applied to tight oil layer identification, the scheme's outcomes align closely with actual test results. A horizontal well, deployed based on these findings, has produced high-yield industrial oil flow, underscoring the precision and dependability of this new approach.
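The decomposition procedure itself is not detailed in the abstract; a common approach, shown in this hedged sketch, fits the T_2 amplitude spectrum as a sum of Gaussian sub-spectra on the log-T_2 axis. The three-component form, the grid, and the amplitudes are assumptions for illustration only.

```python
# Hedged sketch: decompose a T2 spectrum into Gaussian sub-spectra on the
# log-T2 axis. The synthetic "spectrum" below is generated from the model
# itself purely so the fitting call is runnable.
import numpy as np
from scipy.optimize import curve_fit

def t2_model(log_t2, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    g = lambda a, m, s: a * np.exp(-0.5 * ((log_t2 - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2) + g(a3, m3, s3)

log_t2 = np.linspace(-1.0, 3.0, 64)   # log10(T2/ms) grid (assumed)
spectrum = t2_model(log_t2, 1.0, 0.0, 0.3, 0.6, 1.0, 0.3, 0.4, 2.0, 0.3)
p0 = [1, 0, 0.3, 0.5, 1, 0.3, 0.5, 2, 0.3]
params, _ = curve_fit(t2_model, log_t2, spectrum, p0=p0)   # sub-spectra
```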
Purpose – In response to the problem of insufficient traction/braking adhesion force caused by the presence of a third-body medium on the rail surface, this study aims to analyze the utilization of the wheel-rail adhesion coefficient under different medium conditions and to propose measures for the reasonable and optimized utilization of adhesion, so as to ensure the traction/braking performance and operational safety of trains. Design/methodology/approach – Based on the PLS-160 wheel-rail adhesion simulation test rig, the study investigates the variation patterns of the maximum utilized adhesion on the rail surface under conditions of small creepage and large slip. Through statistical analysis of multiple sets of experimental data, the statistical distribution patterns of the maximum utilized adhesion on the rail surface are obtained, and a method for analyzing wheel-rail adhesion redundancy based on the normal distribution is proposed. The study analyzes the utilization of traction/braking adhesion, as well as adhesion redundancy, for different media under small-creepage and large-slip conditions, and derives measures for the reasonable and optimized utilization of adhesion. Findings – When a third-body medium exists on the rail surface, the train should adopt low-level service braking, avoiding skidding at the cost of an extended braking distance. Compared with the current small-creepage adhesion control strategy, controlling the train's adhesion coefficient near the second peak of the adhesion coefficient-slip ratio curve in the large-slip regime can effectively raise the traction/braking adhesion redundancy and the upper limit of adhesion utilization, thereby ensuring the traction/braking performance and operational safety of the train. Originality/value – Most existing studies focus on the values and variation patterns of the wheel-rail adhesion coefficient under different medium conditions, without considering whether a rail surface carrying different media can provide a sufficient utilized adhesion coefficient for traction/braking; hence there is a risk of traction overspeed or braking skid. This study analyzes whether such rail surfaces can provide a sufficient traction/braking utilized adhesion coefficient and whether redundancy exists, and on this basis derives measures for the reasonable and optimized utilization of adhesion to further ensure the operational safety of the train.
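One way to read the normal-distribution redundancy method is sketched below; with the maximum utilized adhesion fitted as N(mu, sigma), redundancy can be expressed as the probability that the rail surface supplies more adhesion than the train demands. All numerical values are assumptions, not the study's data.

```python
# Hedged sketch of a normal-distribution adhesion-redundancy check.
# mu, sigma, and the demanded coefficient are illustrative values only.
from scipy.stats import norm

mu, sigma = 0.18, 0.03        # fitted max utilized adhesion (assumed)
demand = 0.12                 # demanded traction/braking coefficient (assumed)
redundancy = 1.0 - norm.cdf(demand, loc=mu, scale=sigma)  # P(supply > demand)
```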
In order to reveal the complex network characteristics and evolution principles of the China aviation network, the probability distribution and evolution trace of the average degree of edge vertices of the network were studied based on statistical data for the China civil aviation network in 1988, 1994, 2001, 2008 and 2015. According to the theory and methods of complex networks, the network was constructed with the city where an airport is located as a node and the route between cities as an edge. Based on the statistical data, the average degrees of edge vertices in the China aviation network in 1988, 1994, 2001, 2008 and 2015 were calculated. Using probability statistical analysis and regression analysis, it was found that the average degree of edge vertices follows a normal probability distribution, and that the location and scale parameters of this distribution exhibit a linear evolution trace.
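A small sketch of the quantity studied above, under the assumption that the "average degree of edge vertices" means the mean of the two endpoint degrees of each edge; the city names and routes are placeholders.

```python
# Hedged sketch: average degree of the two endpoint vertices of each edge,
# then a normal fit of those values. Cities and routes are placeholders.
import networkx as nx
from scipy.stats import norm

G = nx.Graph()
G.add_edges_from([("Beijing", "Shanghai"), ("Beijing", "Chengdu"),
                  ("Shanghai", "Chengdu"), ("Chengdu", "Kunming")])
avg_edge_degree = [(G.degree[u] + G.degree[v]) / 2 for u, v in G.edges]
loc, scale = norm.fit(avg_edge_degree)   # location and scale parameters
```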
Firstly, using the damage model for rock based on the Lemaitre hypothesis of strain equivalence, a new technique for measuring the strength of rock micro-cells by adopting the Mohr-Coulomb criterion was developed. By discussing the characteristics of the random distribution of micro-cell strength, and on the premise that micro-cell strength is consistent with a normal distribution function, a statistical damage evolution equation was established, and a statistical damage constitutive model that can simulate the full process of rock strain softening under a specific confining pressure was set up. Secondly, by studying in depth the relations between the model parameters and the characteristic parameters of the full stress-strain curve under different confining pressures, a new method to determine the model parameters, applicable under different confining pressures, was proposed. A unified statistical damage constitutive model for rock softening that reflects the effect of different confining pressures was thus set up. This model makes the physical meaning of the model parameters explicit, contains only conventional mechanical parameters, and is therefore more convenient to apply. Finally, the rationality of this model and of its parameter-determination method was verified via comparative analyses between theoretical and experimental curves.
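The following sketch illustrates the general shape of such a statistical damage model under simplifying assumptions: micro-cell strength is expressed on the strain axis rather than through the Mohr-Coulomb measure used in the paper, and all parameter values are invented for illustration.

```python
# Hedged sketch of a normal-distribution statistical damage model producing
# a strain-softening curve. Strain is used as the strength measure here,
# unlike the paper's Mohr-Coulomb micro-cell strength; values are assumed.
import numpy as np
from scipy.stats import norm

E0 = 20e3                   # elastic modulus, MPa (assumed)
mu, sigma = 0.006, 0.002    # mean/std of micro-cell failure strain (assumed)
eps = np.linspace(0.0, 0.02, 200)
D = norm.cdf(eps, loc=mu, scale=sigma)   # damage = fraction of failed cells
stress = E0 * eps * (1.0 - D)            # Lemaitre effective-stress relation
```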
This article proposes a statistical method for working out reliability sampling plans under Type I censoring for items whose failure times have either normal or lognormal distributions. The quality statistic is a method-of-moments estimator of a monotone function of the unreliability. An approach for choosing the truncation time is recommended. The sample size and acceptability constant are approximately determined by using the Cornish-Fisher expansion for quantiles of the distribution. Simulation results show that the method given in this article is feasible.
The core technology in an intelligent video surveillance system is detecting and recognizing abnormal behaviors in a timely and accurate manner. The key breakthrough in recognizing abnormal behaviors is how to obtain effective features of the picture. In response to this difficulty, this paper introduces an adjustable jump-link-coefficient model based on the residual network, in which an effective coefficient for each layer of the network can be set to further improve the recognition accuracy of abnormal behavior. A convolution kernel of 1×1 size is added to reduce the number of parameters and thus improve the speed of the model. In order to reduce noise at the data edge and, at the same time, improve accuracy and speed up training, a BN (Batch Normalization) layer is added before the activation function in this network. The network model is trained on the public ImageNet dataset, and transfer learning is then used to recognize abnormal human behaviors in the UTI behavior dataset processed by the YOLO_v3 target detection network. Under the same experimental conditions, compared with the original ResNet-50 model, the improved model achieves 2.8% higher accuracy in recognizing abnormal behaviors on the public UTI dataset.
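A hedged PyTorch sketch of the kind of block the abstract describes follows; the class name, the learnable coefficient alpha, and the layer sizes are assumptions, not the paper's architecture.

```python
# Hedged sketch: residual block with an adjustable jump-link coefficient,
# a 1x1 bottleneck convolution, and BN placed before the activation.
import torch
import torch.nn as nn

class AdjustableResBlock(nn.Module):
    def __init__(self, channels, alpha=1.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha))   # jump-link coefficient
        self.reduce = nn.Conv2d(channels, channels // 4, kernel_size=1)
        self.conv = nn.Conv2d(channels // 4, channels // 4, 3, padding=1)
        self.expand = nn.Conv2d(channels // 4, channels, kernel_size=1)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU()

    def forward(self, x):
        y = self.expand(self.conv(self.reduce(x)))       # 1x1 -> 3x3 -> 1x1
        return self.act(self.bn(y) + self.alpha * x)     # BN before activation
```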
The estimation of the function θ = exp{aμ + bσ²} of the parameters (μ, σ²) of the normal distribution N(μ, σ²) is discussed. When the prior distributions of μ and σ² are independent, under the loss function L(θ, δ) = (θ⁻¹δ − 1)², the Bayesian estimation and the existence and computation of the minimax estimation are discussed in depth.
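As a small reasoning aid (a standard decision-theoretic computation, not quoted from the paper): under this loss the posterior risk is E[(θ⁻¹δ − 1)² | x] = δ²E[θ⁻² | x] − 2δE[θ⁻¹ | x] + 1, a quadratic in δ, so the Bayes estimator is δ_B(x) = E[θ⁻¹ | x] / E[θ⁻² | x].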
Joint location and scale models of the skew-normal distribution provide a useful extension of joint mean and variance models of the normal distribution when the data set under consideration involves asymmetric outcomes. This paper focuses on the maximum likelihood estimation of joint location and scale models of the skew-normal distribution. The proposed procedure can simultaneously estimate the parameters of the location model and the scale model. Simulation studies and a real example are used to illustrate the proposed methodologies.
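A minimal sketch of the estimation step follows, under an assumed design: location X·beta, log-scale Z·gamma, and a common skewness parameter a; the function name and link choice are illustrative, not the paper's specification.

```python
# Hedged sketch: maximum likelihood for a joint location-scale skew-normal
# model with mu_i = X beta and log(sigma_i) = Z gamma; names are assumed.
import numpy as np
from scipy.stats import skewnorm
from scipy.optimize import minimize

def fit_joint_location_scale(y, X, Z):
    p, q = X.shape[1], Z.shape[1]
    def nll(theta):
        beta, gamma, a = theta[:p], theta[p:p + q], theta[-1]
        mu, sigma = X @ beta, np.exp(Z @ gamma)   # log link keeps sigma > 0
        return -np.sum(skewnorm.logpdf(y, a, loc=mu, scale=sigma))
    theta0 = np.zeros(p + q + 1)
    return minimize(nll, theta0, method="BFGS").x
```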
Leaf normal distribution is an important structural characteristic of the forest canopy. Although terrestrial laser scanners (TLS) have potential for estimating canopy structural parameters, distinguishing between leaves and nonphotosynthetic structures to retrieve the leaf normal has been challenging. Here we used an approach to accurately retrieve the leaf normals of camphorwood (Cinnamomum camphora) from TLS point cloud data. First, nonphotosynthetic structures were filtered out by using the curvature threshold of each point. Then, the point cloud data were segmented by a voxel method and clustered by a Gaussian mixture model in each voxel. Finally, the normal vector of each cluster was computed by principal component analysis to obtain the leaf normal distribution. We collected leaf inclination angles and estimated their distribution, which we compared with the retrieved leaf normal distribution. The correlation coefficient between measurements and retrieved results was 0.96, indicating good agreement.
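The per-voxel step described above might look like the following hedged sketch; the cluster count and function name are assumptions.

```python
# Hedged sketch of one voxel: Gaussian-mixture clustering, then PCA per
# cluster; the smallest principal axis is taken as the cluster's normal.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def voxel_leaf_normals(points, n_clusters=3):
    labels = GaussianMixture(n_components=n_clusters).fit_predict(points)
    normals = []
    for k in range(n_clusters):
        cluster = points[labels == k]
        if len(cluster) < 3:          # too few points to define a plane
            continue
        pca = PCA(n_components=3).fit(cluster)
        normals.append(pca.components_[-1])   # axis of least variance
    return np.array(normals)
```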
The original data of the Nilsson-Ehle experiment in wheat were analyzed in the light of existing genetic knowledge. The analysis indicated that the core of the polygenic hypothesis drawn from this experiment was that character similarity produced by the additive effect of multiple genes is the basis of continuous variation. Its precondition was that the effective genes have equal effects, show merodominance and a binomial distribution, and are inherited independently. In fact, quantitative characters are determined by many genes of different property, effect and behavior, so it is difficult to solve all problems of continuous variation by means of the polygenic hypothesis, and researchers should seek new approaches. Taking a Mendelian population as the research object, and by means of the Lyapunov central limit theorem, it was proved that the genotypic value G and the environmental effect in a niche E each obey a normal distribution. By the additivity of the normal distribution, the phenotype P = G + E also obeys a normal distribution, and quantitative characters show continuous variation, whether or not linkage is present, whether or not every gene effect is equal, and whether or not there is dominance, of whatever kind, between alleles. Moreover, it was discussed that quantitative characters in self-fertilized organisms and clones also obey the normal distribution and present continuous variation.
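A simulation echoing the central-limit argument above (all effect sizes and counts are arbitrary illustrations): summing many independent gene effects of unequal size still yields an approximately normal phenotype.

```python
# Hedged simulation of the Lyapunov-CLT argument: unequal additive gene
# effects plus an environmental effect give a near-normal phenotype.
import numpy as np

rng = np.random.default_rng(3)
effects = rng.uniform(0.1, 1.0, size=40)             # unequal allele effects
doses = rng.integers(0, 3, size=(100_000, 40))       # 0/1/2 allele doses
G = doses @ effects                                  # additive genotypic value
E = rng.normal(0.0, 1.0, size=100_000)               # environmental effect
P = G + E                                            # phenotype, near-normal
```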
Under the condition of normal strength and stress with unknown distribution parameters, but with a complete sample of each available, a comparison among the errors of several kinds of approximate limits for structural reliability is made in this paper, based on the exact limits presented. All results in this paper can also be applied conveniently under the lognormal distribution condition.
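For orientation, the classical exact result in the normal stress-strength setting (a textbook formula, not the paper's approximate limits) is R = Φ((μ_S − μ_s)/√(σ_S² + σ_s²)), sketched below with assumed values.

```python
# Classical normal stress-strength reliability; all values are assumed.
from math import sqrt
from scipy.stats import norm

mS, sS = 600.0, 40.0    # strength mean/std, MPa (assumed)
ms, ss = 450.0, 50.0    # stress mean/std, MPa (assumed)
R = norm.cdf((mS - ms) / sqrt(sS**2 + ss**2))   # P(strength > stress)
```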
Extensive studies based on the partition curve of gravity separation have been carried out. However, all existing models merely simulate the density distribution within a single size fraction; they cannot predict the distribution of materials as a joint function of density and size. In view of this, an improved partition-curve model based on the cumulative normal distribution, distinct from the conventional cumulative-normal partition-curve model, is proposed in this paper. It can simulate the density distribution at different size fractions by using a density-size compound index and merging the partition curves of different size fractions into a single partition curve. The feasibility of three compound indexes, namely the mass index, the settlement index and the transformation index, was investigated, and specific forms of the improved model were proposed. The transformation index is found to give the best fit, with a fitting error of only 1.75 for the fitted partition curve.
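A hedged sketch of the fitting idea follows: the partition value is modeled as a cumulative normal function of a single compound index t. The data points below are placeholders, not the paper's measurements.

```python
# Hedged sketch: cumulative-normal partition curve over a density-size
# compound index t; data values are placeholders for illustration only.
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

def partition(t, t50, s):
    return norm.cdf(t, loc=t50, scale=s)

t = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])         # compound index (assumed)
P = np.array([0.03, 0.15, 0.45, 0.78, 0.94, 0.99])   # partition values (assumed)
(t50, s), _ = curve_fit(partition, t, P, p0=[0.6, 0.2])
```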
The random step maneuver with uniformly distributed starting times has the disadvantage that it cannot concentrate the starting time on the more effective times, which decreases the penetration probability. To resolve this problem, a random step penetration algorithm with normally distributed starting times is proposed. Using shaping filters and the adjoint-system method, the miss distance for different starting times can be obtained. According to the penetration standard, the time window that ensures successful penetration can be calculated, and it is used as the 3σ bound of the normally distributed random maneuver. Simulation results indicate that the normally distributed random maneuver has a higher penetration probability than the uniformly distributed one.
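The 3σ rule described above can be sketched as follows; the window bounds are assumed values.

```python
# Hedged sketch: treat the feasible penetration window [t_lo, t_hi] as the
# +/- 3-sigma span of the normally distributed starting time.
import numpy as np

t_lo, t_hi = 4.0, 10.0            # feasible window, s (assumed)
mu = (t_lo + t_hi) / 2.0
sigma = (t_hi - t_lo) / 6.0       # window used as the 3-sigma bound
rng = np.random.default_rng(2)
start_time = rng.normal(mu, sigma)
```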
The calculation of the mean difference for the inverse normal distribution can be carried out by a transformation of variable or by a laborious integration by parts. This paper presents a simpler formula for the mean difference of the inverse normal distribution that highlights the role of the two parameters in the mean difference of the model. It makes it easier to study the relation of the mean difference to the other indexes of variability of the inverse normal distribution.
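Since the paper's closed form is not quoted here, a Monte Carlo cross-check of the mean difference E|X₁ − X₂| for the inverse normal (inverse Gaussian) distribution can serve as a hedged illustration; the parameters are arbitrary.

```python
# Hedged Monte Carlo estimate of the mean difference of IG(mu, lam).
# scipy's invgauss(mu/lam, scale=lam) is the standard IG(mu, lam).
import numpy as np
from scipy.stats import invgauss

mu, lam = 2.0, 3.0                              # illustrative parameters
rng = np.random.default_rng(1)
x = invgauss.rvs(mu / lam, scale=lam, size=(2, 200_000), random_state=rng)
mean_difference = np.abs(x[0] - x[1]).mean()    # estimate of E|X1 - X2|
```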
This paper proposes a new pre-processing technique to separate the most effective features from those that might deteriorate the performance of machine learning classifiers, in terms of computational cost and classification accuracy, because of their irrelevance, redundancy, or low information content; this pre-processing step is commonly known as feature selection. The technique adopts a new optimization algorithm known as generalized normal distribution optimization (GNDO), supported by conversion of the normal distribution to a binary one using the arctangent transfer function to map continuous values to binary values. Further, a novel restarting strategy (RS) is proposed to preserve diversity among the solutions within the population by identifying solutions that exceed a specific distance from the best-so-far solution and replacing them with new ones created by an effective updating scheme. This strategy is integrated with GNDO to produce another binary variant, improved GNDO (IGNDO), with a high ability to preserve solution diversity, avoid becoming stuck in local minima, and accelerate convergence. The proposed GNDO and IGNDO algorithms are extensively compared with seven state-of-the-art algorithms to verify their performance on thirteen medical instances taken from the UCI repository. IGNDO is shown to be superior in terms of fitness value and classification accuracy and competitive with the others in terms of the number of selected features. Since the principal goal in solving the FS problem is to find the subset of features that maximizes classification accuracy, IGNDO is considered the best.
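The arctangent transfer step can be sketched as below; the exact transfer form and the random-threshold rule are assumptions, since the abstract does not spell them out.

```python
# Hedged sketch of an arctangent (V-shaped) transfer function mapping a
# continuous GNDO position vector to a binary feature-selection mask.
import numpy as np

def arctan_binarize(position, rng):
    prob = np.abs((2.0 / np.pi) * np.arctan(position))   # squash to [0, 1)
    return (prob > rng.random(position.shape)).astype(int)

rng = np.random.default_rng(0)
mask = arctan_binarize(rng.normal(size=13), rng)   # 1 = feature selected
```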
In this paper, we establish the stochastic ordering of the median of an exchangeable trivariate normal vector based on the strength of the correlation coefficient. Specifically, by considering two exchangeable trivariate normal vectors with different correlation coefficients, we show that the absolute value of the median in the vector with the smaller correlation coefficient is stochastically smaller than the absolute value of the median in the vector with the larger correlation coefficient. We prove this result by utilizing skew-normal distributions.
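The result can be illustrated numerically (the correlation values are arbitrary): sampling both vectors and comparing the distributions of |median| exhibits the stated ordering.

```python
# Hedged Monte Carlo illustration of the stochastic ordering of |median|
# for exchangeable trivariate normal vectors; rho values are illustrative.
import numpy as np

def abs_median_samples(rho, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.full((3, 3), rho)
    np.fill_diagonal(cov, 1.0)
    x = rng.multivariate_normal(np.zeros(3), cov, size=n)
    return np.abs(np.median(x, axis=1))

small_rho = abs_median_samples(0.2)   # stochastically smaller |median|
large_rho = abs_median_samples(0.8)
```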
In order to determine the life of the vacuum fluorescent display (VFD) within a shorter time and to reduce the cost of life prediction, a constant-step stress accelerated life test was performed with the cathode temperature increased. Statistical analysis was done by applying the logarithmic normal distribution to describe the life and the least squares method (LSM) to estimate the lognormal parameters. Self-designed software was used to predict the VFD life. It is verified by numerical results that the VFD life follows a logarithmic normal distribution and that the life-stress relationship satisfies the linear Arrhenius equation. The accurate calculation of the key parameters enables rapid estimation of VFD life.
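The least-squares step can be sketched as follows (a common median-rank approach; the failure times are placeholders): on a lognormal probability plot, ln t is linear in the standard normal quantile, with intercept μ and slope σ.

```python
# Hedged sketch: lognormal parameters by least squares on a probability
# plot using median-rank plotting positions; lives are placeholder data.
import numpy as np
from scipy.stats import norm

lives = np.sort(np.array([1200.0, 1450.0, 1700.0, 2100.0, 2600.0]))  # h (assumed)
n = len(lives)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)     # median-rank approximation
slope, intercept = np.polyfit(norm.ppf(F), np.log(lives), 1)
sigma, mu = slope, intercept                    # lognormal sigma and mu
```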
We introduce a kind of generalized Wigner operator whose normally ordered form can lead to the bivariate normal distribution in p-q phase space. While this bivariate normal distribution corresponds to the pure vacuum state in the generalized Wigner function phase space, it corresponds to a mixed state in the usual Wigner function phase space.
Microlatex particles of emulsion explosives, measured by microphotography, were studied with the law of the logarithmic Gauss (log-normal) distribution, and the results showed that the microlatex particles indeed follow this law. The statistical-average particle diameters, such as D_NL, D_NS, D_LS, D_SV and D_VM, were calculated from the logarithmic Gauss distribution diagram of the microlatex particles of the emulsion explosives, as was S_W.
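The statistical-average diameters named above have standard moment definitions; the hedged sketch below computes them directly from a set of placeholder diameters (the paper instead reads them off the log-normal plot, and its exact definitions may differ).

```python
# Hedged sketch: common moment-mean particle diameters from raw data.
# Diameters are placeholders; definitions are the standard moment ratios.
import numpy as np

d = np.array([0.8, 1.1, 1.3, 1.7, 2.2, 2.9])    # diameters, um (assumed)
D_NL = d.mean()                                 # number-length mean
D_NS = np.sqrt((d**2).mean())                   # number-surface mean
D_LS = (d**2).sum() / d.sum()                   # length-surface mean
D_SV = (d**3).sum() / (d**2).sum()              # surface-volume (Sauter) mean
D_VM = (d**4).sum() / (d**3).sum()              # volume-moment mean
```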