The t-distribution has a “fat tail” feature, which makes it more suitable than the normal probability density function for describing the distribution of asset returns. The difficulty of using the t-distribution to price European options is that the fat tail can lead to a deviation in one of the integrals required for option pricing. We use a distribution called the logarithmic truncated t-distribution to price European options, and apply a risk-neutral valuation method to obtain a European option pricing model with the logarithmic truncated t-distribution.
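A minimal numerical illustration of the “fat tail” property this abstract relies on: the Student's t-distribution assigns far more probability to extreme returns than the normal distribution. The threshold and the choice df = 3 are illustrative assumptions, not values from the paper.

```python
# Compare the upper-tail probability of a standard normal and a t(3)
# distribution at a "four-sigma" move.
from scipy.stats import norm, t

threshold = 4.0
p_normal = norm.sf(threshold)      # P(X > 4) under N(0, 1)
p_student = t.sf(threshold, df=3)  # P(X > 4) under Student's t, 3 d.o.f.

print(f"normal tail: {p_normal:.2e}")
print(f"t(3) tail:   {p_student:.2e}")
```

The heavy tail shows up as a tail probability several hundred times larger than the Gaussian one, which is exactly why the expectation integrals in option pricing must be handled with care.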
The purpose of this paper is to propose a new model of asymmetry for square contingency tables with ordered categories. The new model may be appropriate for a square contingency table if it is reasonable to assume an underlying bivariate t-distribution with different marginal variances and any degrees of freedom. As the degrees of freedom become larger, the proposed model approaches the extended linear diagonals-parameter symmetry model, which may be appropriate for a square table if it is reasonable to assume an underlying bivariate normal distribution. A simulation study based on the bivariate t-distribution is given, along with an example.
Optimization algorithms play a pivotal role in enhancing the performance and efficiency of systems across various scientific and engineering disciplines. To enhance the performance and alleviate the limitations of the Northern Goshawk Optimization (NGO) algorithm, particularly its tendency towards premature convergence and entrapment in local optima during function optimization, this study introduces an Improved Northern Goshawk Optimization (INGO) algorithm. The algorithm incorporates a multifaceted enhancement strategy to boost operational efficiency. Initially, a tent chaotic map is employed in the initialization phase to generate a diverse initial population, providing high-quality feasible solutions. Subsequently, after the first phase of the NGO's iterative process, a whale fall strategy is introduced to prevent premature convergence to local optima. This is followed by the integration of t-distribution mutation strategies and the State Transition Algorithm (STA) after the second phase of the NGO, achieving a balanced synergy between the algorithm's exploitation and exploration. This research evaluates the performance of INGO on 23 benchmark functions alongside the IEEE CEC 2017 benchmark functions, accompanied by a statistical analysis of the results. The experimental outcomes demonstrate INGO's superior performance in function optimization tasks. Furthermore, its applicability to engineering design problems was verified through simulations of Unmanned Aerial Vehicle (UAV) trajectory planning, establishing INGO's capability in addressing complex optimization challenges.
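A sketch of the t-distribution mutation ingredient alone (not the full INGO algorithm): heavy-tailed Student's t noise occasionally produces long jumps that help a candidate escape local regions. The sphere objective, scale, and df values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def t_mutation(x, df, scale, rng):
    """Perturb a candidate with heavy-tailed Student's t noise."""
    return x + scale * rng.standard_t(df, size=x.shape)

x = rng.normal(size=5)      # random starting candidate
f0 = sphere(x)
best = f0
for _ in range(200):        # greedy accept-if-better loop
    cand = t_mutation(x, df=2.0, scale=0.1, rng=rng)
    fc = sphere(cand)
    if fc < best:
        x, best = cand, fc

print(f"start: {f0:.3f}  end: {best:.3f}")
```

In a full optimizer the mutation would be one operator among several (chaotic initialization, whale fall, STA); this sketch only shows the heavy-tailed perturbation step.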
When high-impedance faults (HIFs) occur in resonant grounded distribution networks, the current that flows is extremely weak, and the noise interference caused by distribution network operation and the sampling error of the measurement devices further masks the fault characteristics. Consequently, locating a fault section with high sensitivity is difficult. Unlike existing technologies, this study presents a novel fault feature identification framework that addresses this issue. The framework includes three key steps: (1) utilizing the variational mode decomposition (VMD) method to denoise the fault transient zero-sequence current (TZSC); (2) employing a manifold learning algorithm based on t-distributed stochastic neighbor embedding (t-SNE) to further reduce the redundant information of the denoised TZSC and to visualize the high-dimensional fault information in 2D space; and (3) classifying the signal of each measurement point with the fuzzy clustering method and combining it with the network topology to determine the fault section location. Numerical simulations and field testing confirm that the proposed method accurately detects the fault location, even under strong noise interference.
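A sketch of step (2) alone, with synthetic feature vectors standing in for the denoised TZSC signals (the VMD and fuzzy-clustering steps are omitted, and the data here is an assumption for illustration):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(30, 50))  # stand-in "healthy" features
faulty = rng.normal(3.0, 1.0, size=(30, 50))   # stand-in "fault" features
X = np.vstack([healthy, faulty])

# Embed the 50-dimensional features into 2-D for visualization/clustering.
emb = TSNE(n_components=2, perplexity=10.0, random_state=1).fit_transform(X)
print(emb.shape)
```

The 2-D embedding is what a subsequent fuzzy clustering step would operate on to separate faulty from healthy measurement points.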
The single safety factor criteria for slope stability evaluation, derived from the rigid limit equilibrium method or the finite element method (FEM), may not capture some important information, especially for steep slopes with complex geological conditions. This paper presents a new reliability method that uses sample weight analysis. Based on the distribution characteristics of the random variables, the minimal sample size of every random variable is extracted according to a small-sample t-distribution under a certain expected value, and the weight coefficient of each extracted sample is taken as its contribution to the random variable. The weight coefficients of the random sample combinations are then determined using the Bayes formula, and the different sample combinations are taken as input for slope stability analysis. From the one-to-one mapping between input sample combinations and output safety factors, the reliability index of slope stability is obtained with the multiplication principle. Slope stability analysis of the left bank of the Baihetan Project is used as an example, and the results show that the present method is reasonable and practicable for the reliability analysis of steep slopes with complex geological conditions.
Inadmissibility of a traditional class of noncentrality parameter estimators under quadratic loss is established. The result is heuristically motivated by the form of generalized Bayes estimators and is proved via unbiased estimators of the risk function and a solution to an integro-differential inequality.
To implement robust cluster analysis and solve the problem that outliers in the data seriously disturb probability density parameter estimation, and thereby degrade clustering accuracy, a robust cluster analysis method based on the diversity self-paced t-mixture model is proposed. The model first adopts the t-distribution, whose tails are easily controllable, as the submodel. On this basis, it utilizes the entropy-penalty expectation conditional maximization (ECM) algorithm as a pre-clustering step to estimate the initial parameters. The model then introduces the l2,1-norm as a self-paced regularization term and develops a new ECM optimization algorithm in order to select high-confidence samples from each component during training. Finally, experimental results on several real-world datasets in different noise environments show that the diversity self-paced t-mixture model outperforms state-of-the-art clustering methods. It provides significant guidance for the construction of robust mixture distribution models.
To understand any statistical tool requires not only an understanding of the relevant computational procedures but also an awareness of the assumptions upon which the procedures are based, and the effects of violations of these assumptions. In our earlier articles (Laverty, Miket, & Kelly [1]) and (Laverty & Kelly, [2] [3]) we used Microsoft Excel to simulate both a Hidden Markov model and heteroskedastic models, showing different realizations of these models and the performance of the techniques for identifying the underlying hidden states using simulated data. The advantage of using Excel is that the simulations are regenerated when the spreadsheet is recalculated, allowing the user to observe the performance of the statistical technique under different realizations of the data. In this article we show how to use Excel to generate data from a one-way ANOVA (Analysis of Variance) model and how the statistical methods behave both when the fundamental assumptions of the model hold and when these assumptions are violated. The purpose of this article is to provide tools for individuals to gain an intuitive understanding of these violations using this readily available program.
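The same exercise can be reproduced outside Excel. A minimal Python sketch (group sizes and parameters are arbitrary choices, not the article's) generates one-way ANOVA data under the null hypothesis and runs the F-test:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
# Three groups drawn under the null: identical means and variances.
groups = [rng.normal(loc=10.0, scale=2.0, size=20) for _ in range(3)]

stat, pvalue = f_oneway(*groups)
print(f"F = {stat:.3f}, p = {pvalue:.3f}")
```

Rerunning with a different seed plays the role of recalculating the spreadsheet; changing one group's `scale` would then illustrate a violation of the equal-variance assumption.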
A Student’s t-distribution is obtained from a weighted average over the standard deviation of a normal distribution, σ, when 1/σ is distributed as chi. Left truncation at q of the chi distribution in the mixing integral leads to an effectively truncated Student’s t-distribution with tails that decay as exp(−q²t²). The effect of truncation of the chi distribution in a chi-normal mixture is investigated, and expressions for the pdf, the variance, and the kurtosis of the t-like distribution that arises from the mixture of a left-truncated chi and a normal distribution are given for selected degrees of freedom. This work has value in pricing financial assets, in understanding the Student’s t-distribution, in statistical inference, and in the analysis of data.
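The mixing construction can be checked numerically. The sketch below is a generic implementation under the stated representation (not the paper's code): it integrates a normal pdf against a chi density for 1/σ; with no truncation (q = 0) it recovers the Student's t pdf, and left-truncating the chi at q > 0 thins the tails.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import chi, norm, t

nu = 5  # degrees of freedom

def mixture_pdf(x, q=0.0):
    """pdf of the chi-normal mixture, with the chi for 1/sigma left-truncated at q."""
    def integrand(s):  # s plays the role of 1/sigma; s ~ chi(nu) / sqrt(nu)
        return norm.pdf(x, scale=1.0 / s) * chi.pdf(s, nu, scale=1.0 / np.sqrt(nu))
    num, _ = quad(integrand, max(q, 1e-12), np.inf)
    mass, _ = quad(lambda s: chi.pdf(s, nu, scale=1.0 / np.sqrt(nu)),
                   max(q, 1e-12), np.inf)
    return num / mass  # renormalize over the surviving chi mass

print(mixture_pdf(0.0))         # untruncated: matches t.pdf(0, 5)
print(mixture_pdf(6.0, q=0.8))  # truncated: thinner tail than t.pdf(6, 5)
```

Removing the small-s (large-σ) part of the mixture is exactly what suppresses the power-law tail, which is the mechanism behind the exp(−q²t²) decay quoted in the abstract.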
A general theorem on the limiting distribution of the generalized t-distribution is obtained; many applications of this theorem to subclasses of elliptically contoured distributions, including the multivariate normal and multivariate t distributions, are discussed. Further, their limiting distributions by density function are derived.
Edge Computing is one of the most radically evolving systems through the generations, as it is able to effectively meet the data-saving standards of consumers, providers, and workers. Requisitions for Edge Computing based items have been increasing tremendously. Apart from the advantages it holds, there remain many objections and restrictions, which hinder it from accomplishing the needs of consumers all around the world. Some of the limitations are constraints on computing and hardware, functions and accessibility, and remote administration and connectivity. There is also a backlog in security due to its inability to create trust between devices involved in encryption and decryption, because the security of data greatly depends upon fast encryption and decryption for transfer. In addition, its devices are considerably exposed to side channel attacks, including power analysis attacks that are capable of overturning the process. Constrained space and capability is one of the most challenging issues. To prevail over this issue, we propose a Cryptographic Lightweight Encryption Algorithm with Dimensionality Reduction in Edge Computing. t-Distributed Stochastic Neighbor Embedding is an efficient dimensionality reduction technique that greatly decreases the size of non-linear data. The three-dimensional image data obtained from the connected systems are dimensionally reduced, and then the lightweight encryption algorithm is employed. Hence, the security backlog can be addressed effectively using this method.
Some moments and limiting properties of independent Student's t increments are studied. Independent Student's t increments are independent draws from not-truncated, truncated, and effectively truncated Student's t-distributions with given shape parameters, and can be used to create random walks. It is found that sample paths created from truncated and effectively truncated Student's t-distributions are continuous. Sample paths for Student's t-distributions are also continuous. Student's t increments should thus be useful in the construction of stochastic processes and as noise-driving terms in Langevin equations.
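A minimal sketch of such a random walk, using plain (untruncated) increments; the df and path length are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)
increments = rng.standard_t(df=3, size=1000)  # i.i.d. heavy-tailed draws
path = np.cumsum(increments)                  # the Student's t random walk

print(path.shape, float(path[-1]))
```

With df = 3 the increments have finite variance but heavy tails, so the walk shows occasional large jumps that a Gaussian walk of the same variance would almost never produce.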
Amplitude variation with offset and azimuth (AVOA) inversion is a mainstream method for predicting and evaluating fracture parameters of conventional oil and gas reservoirs. However, its application to coal seams is limited because of the specificity of the equivalent media model for coal; moreover, the traditional seismic acquisition systems employed in coal fields cover only a narrow azimuth. In this study, we first derived a P-P wave reflection coefficient approximation formula for coal seams, expressed directly in terms of fracture parameters using the Schoenberg linear-slip model and the Hudson model. We analyzed the P-P wave reflection coefficient's response to the fracture parameters using a two-layer forward model. Accordingly, we designed a two-step workflow for AVOA inversion of the fracture parameters. High-density, wide-azimuth, pre-stack 3D seismic data were then used to invert the fracture density and strike of the target coal seam. The inversion accuracy was constrained by Student's t-distribution testing. Analysis and validation of the inversion results revealed that the relative fracture density corresponds to fault locations, with the strike of the fractures and faults mainly at 0°. The AVOA inversion method and technical workflow proposed here can therefore be used to efficiently predict and evaluate fracture parameters of coal seams.
For a general linear model, spherical distributions are often considered when the errors do not have a normal distribution. Several authors [1-3] studied the least squares and James-Stein estimations for a linear model whose errors follow multivariate t or more general spherical distributions. In this paper the test problem for sphericity of the errors is considered. We propose an exact test for sphericity by using the conditional probability integral transformation and another transformation. As an important special case, the corresponding test statistics for the multivariate t distribution are obtained.
We present new variants of Estimation of Distribution Algorithms (EDA) for large-scale continuous optimisation that extend and enhance a recently proposed random projection (RP) ensemble based approach. The main novelty here is to depart from the theory of RPs that require (sub-)Gaussian random matrices for norm-preservation, and instead for the purposes of high-dimensional search we propose to employ random matrices with independent and identically distributed entries drawn from a t-distribution. We analytically show that the implicitly resulting high-dimensional covariance of the search distribution is enlarged as a result. Moreover, the extent of this enlargement is controlled by a single parameter, the degree of freedom. For this reason, in the context of optimisation, such heavy tailed random matrices turn out to be preferable over the previously employed (sub-)Gaussians. Based on this observation, we then propose novel covariance adaptation schemes that are able to adapt the degree of freedom parameter during the search, and give rise to a flexible approach to balance exploration versus exploitation. We perform a thorough experimental study on high-dimensional benchmark functions, and provide statistical analyses that demonstrate the state-of-the-art performance of our approach when compared with existing alternatives in problems with 1000 search variables.
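The enlargement effect can be seen directly in the matrix entries: a t-distributed entry with ν degrees of freedom has variance ν/(ν−2), versus 1 for a standard Gaussian entry, so ν acts as the knob the abstract describes. A quick empirical check (ν = 5 chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(3)
nu = 5.0
entries = rng.standard_t(nu, size=200_000)  # i.i.d. t-distributed RP entries

empirical = entries.var()
theoretical = nu / (nu - 2.0)  # variance of a t(nu) variate
print(empirical, theoretical)
```

Lowering ν towards 2 inflates this variance without bound, i.e. pushes the induced search distribution towards exploration; raising ν recovers near-Gaussian behaviour.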
In order to improve the light welfare of Nile tilapia in aquaculture, the influence of hunger level on the light spectrum preference of Nile tilapia was explored in this study. The whole experiment was based on the emptying of the gastrointestinal contents and carried out under controlled laboratory conditions. Light spectrum preference was assessed by counting the head location of fish in each experimental tank, which contained seven compartments (red, blue, white, yellow, black, green, and a public area). t-Distributed Stochastic Neighbor Embedding (t-SNE) was adopted to visualize the hunger-level-based dynamic preference for light spectrum in two-dimensional space. The clustering results indicated significant differences in the light spectrum preferences of Nile tilapia under different hunger levels. In addition, the average visit frequency to the green compartment was significantly lower than that to the other color compartments throughout the whole experiment, while the total visit frequency to the red compartment was relatively higher.
This paper investigates and discusses the use of information divergence, through the widely used Kullback–Leibler (KL) divergence, under the multivariate (generalized) γ-order normal distribution (γ-GND). The behavior of the KL divergence, as far as its symmetry is concerned, is studied by calculating the divergence of the γ-GND over the Student's multivariate t-distribution and vice versa. Certain special cases are also given and discussed. Furthermore, three symmetrized forms of the KL divergence, i.e., the Jeffreys distance, the geometric-KL distance, and the harmonic-KL distance, are computed between two members of the γ-GND family, and the corresponding differences between those information distances are discussed.
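The asymmetry being studied can be illustrated numerically. In this sketch the γ-GND is replaced by a plain standard normal for brevity (an assumption, since the γ-GND generalizes it), and the two KL integrals are computed by quadrature over a finite range:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm, t

nu = 3

def kl(p, q, lim=30.0):
    """KL(p || q) by numerical quadrature over [-lim, lim]."""
    f = lambda x: p.pdf(x) * np.log(p.pdf(x) / q.pdf(x))
    return quad(f, -lim, lim)[0]

kl_t_vs_n = kl(t(nu), norm())      # KL( t_3 || N(0,1) )
kl_n_vs_t = kl(norm(), t(nu))      # KL( N(0,1) || t_3 )
jeffreys = kl_t_vs_n + kl_n_vs_t   # symmetrized (Jeffreys) distance

print(kl_t_vs_n, kl_n_vs_t, jeffreys)
```

The two directions give clearly different values, and summing them is the simplest of the three symmetrizations mentioned above (the Jeffreys distance).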
Three Bayesian related approaches, namely variational Bayesian (VB), minimum message length (MML), and Bayesian Ying-Yang (BYY) harmony learning, have been applied to automatically determining an appropriate number of components when learning a Gaussian mixture model (GMM). This paper provides a comparative investigation of these approaches with not only a Jeffreys prior but also a conjugate Dirichlet-Normal-Wishart (DNW) prior on the GMM. In addition to adopting the existing algorithms either directly or with some modifications, the algorithm for VB with the Jeffreys prior and the algorithm for BYY with the DNW prior are developed in this paper to fill the missing gap. The performance of automatic model selection is evaluated through extensive experiments, with several empirical findings: 1) Considering priors merely on the mixing weights, each of the three approaches makes biased mistakes, while considering priors on all the parameters of the GMM reduces each approach's bias and also improves its performance. 2) As the Jeffreys prior is replaced by the DNW prior, all three approaches improve their performance. Moreover, the Jeffreys prior makes MML slightly better than VB, while the DNW prior makes VB better than MML. 3) As the hyperparameters of the DNW prior are further optimized by each approach's own learning principle, BYY improves its performance, while VB and MML deteriorate when there are too many free hyperparameters; in fact, VB and MML lack a good guide for optimizing the hyperparameters of the DNW prior. 4) BYY considerably outperforms both VB and MML for any type of prior and whether or not the hyperparameters are optimized. Unlike VB and MML, which rely on appropriate priors to perform model selection, BYY does not depend highly on the type of prior: it has model selection ability even without priors, performs already very well with the Jeffreys prior, and improves incrementally as the Jeffreys prior is replaced by the DNW prior. Finally, all the algorithms are applied to the Berkeley segmentation database of real-world images. Again, BYY considerably outperforms both VB and MML, especially in detecting the objects of interest against a confusing background.
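Of the three approaches, variational Bayes is the most readily available off the shelf: scikit-learn's BayesianGaussianMixture implements VB learning of a GMM. A sketch on toy data (the cap of 10 components, the weight threshold, and the data itself are arbitrary choices) shows surplus components being starved of mixing weight:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(-4.0, 0.5, size=(200, 2)),  # cluster 1
    rng.normal(4.0, 0.5, size=(200, 2)),   # cluster 2
])

# Start with far more components than needed; VB prunes the excess.
vb = BayesianGaussianMixture(n_components=10, random_state=0,
                             max_iter=500).fit(X)
active = int((vb.weights_ > 0.05).sum())  # components that kept real mass
print(np.round(np.sort(vb.weights_)[::-1], 3), active)
```

This is the model-selection behaviour being compared above: with a sensible prior, the number of effectively active components, rather than the nominal cap, tracks the true structure of the data.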
Funding for the INGO study: supported by the Key Research and Development Project of Hubei Province (No. 2023BAB094), the Key Project of the Science and Technology Research Program of the Hubei Educational Committee (No. D20211402), and the Open Foundation of the Hubei Key Laboratory for High-Efficiency Utilization of Solar Energy and Operation Control of Energy Storage System (No. HBSEES202309).
Funding for the fault section location study: supported in part by the Science and Technology Program of State Grid Corporation of China (No. 5108-202218280A-2-75-XG), the Fundamental Research Funds for the Central Universities (No. B200203129), and the Postgraduate Research and Practice Innovation Program of Jiangsu Province (No. KYCX20_0432).
Funding for the slope reliability study: supported by the National Natural Science Foundation of China (Grant No. 90510017).
Funding for the self-paced t-mixture study: supported by the 13th 5-Year National Science and Technology Supporting Project (2018YFC2000302).
Funding: This project is supported by the National Natural Science Foundation of China and by Grant DA01070 from the U.S. Public Health Service.
Abstract: A general theorem on the limiting distribution of the generalized t-distribution is obtained. Many applications of this theorem to subclasses of elliptically contoured distributions, including the multivariate normal and multivariate t distributions, are discussed. Further, their limiting distributions are derived via the density function.
Abstract: Edge computing is one of the most rapidly evolving systems across generations, as it can effectively meet the data-saving standards of consumers, providers and workers. Demand for edge-computing-based products has been increasing tremendously. Apart from its advantages, there remain many objections and restrictions that hinder it from meeting the needs of consumers around the world. Some of the limitations are constraints on computing and hardware, functions and accessibility, and remote administration and connectivity. There is also a backlog in security due to its inability to establish trust between the devices involved in encryption and decryption, because the security of data greatly depends upon fast encryption and decryption for transfer. In addition, its devices are considerably exposed to side-channel attacks, including power-analysis attacks capable of subverting the process. Constrained space and capability is one of the most challenging issues. To overcome this issue we propose a cryptographic lightweight encryption algorithm with dimensionality reduction in edge computing. t-Distributed Stochastic Neighbor Embedding (t-SNE) is an efficient dimensionality reduction technique that greatly decreases the size of non-linear data. The three-dimensional image data obtained from the connected devices are dimensionally reduced, and then the lightweight encryption algorithm is applied. Hence, the security backlog can be addressed effectively using this method.
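The dimensionality-reduction step can be sketched with scikit-learn's t-SNE. This is not the paper's pipeline: the data below are synthetic stand-ins for flattened image features, and the shapes and parameters are arbitrary; only the idea (shrink the payload before encrypting) is illustrated.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Stand-in for flattened high-dimensional image data:
# 200 samples with 64 features each (synthetic, for illustration only).
data = rng.normal(size=(200, 64))

# t-SNE maps the non-linear high-dimensional data down to 2 components,
# greatly shrinking what subsequently needs to be encrypted and sent.
embedded = TSNE(n_components=2, perplexity=30.0,
                random_state=0).fit_transform(data)

print(data.shape, "->", embedded.shape)
```

The reduced embedding, rather than the raw data, would then be fed to the lightweight cipher, cutting both encryption time and transmission cost.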
Abstract: Some moments and limiting properties of independent Student's t increments are studied. Independent Student's t increments are independent draws from non-truncated, truncated, and effectively truncated Student's t-distributions with shape parameters, and can be used to create random walks. It is found that sample paths created from truncated and effectively truncated Student's t-distributions are continuous. Sample paths for Student's t-distributions are also continuous. Student's t increments should thus be useful in the construction of stochastic processes and as noise driving terms in Langevin equations.
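A random walk driven by Student's t increments is straightforward to simulate; the sketch below (arbitrary step counts and seed) contrasts a heavy-tailed walk with a nearly Gaussian one, illustrating why the tail index matters for such noise terms.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def t_random_walk(n_steps, df, scale=1.0):
    """Cumulative sum of i.i.d. Student's t increments."""
    increments = stats.t(df=df).rvs(size=n_steps, random_state=rng) * scale
    return np.cumsum(increments)

# Heavy-tailed walk (df=3) versus a nearly Gaussian one (df=200).
heavy = t_random_walk(10_000, df=3)
light = t_random_walk(10_000, df=200)

# The heavy-tailed walk exhibits occasional large jumps that a walk with
# near-Gaussian increments of the same scale almost never produces.
print(np.abs(np.diff(heavy)).max(), np.abs(np.diff(light)).max())
```

Such increments can serve directly as the noise driving term in a discretized Langevin equation.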
Funding: Supported by the University Synergy Innovation Program of Anhui Province (Nos. GXXT-2021-016 and GXXT-2019-029), the National Natural Science Foundation of China (Grant No. 41902167), and the Institute of Energy, Hefei Comprehensive National Science Center (No. 21KZS215).
Abstract: Amplitude variation with offset and azimuth (AVOA) inversion is a mainstream method for predicting and evaluating fracture parameters of conventional oil and gas reservoirs. However, its application to coal seams is limited because of the specificity of the equivalent media model for coal; in addition, the traditional seismic acquisition systems employed in coal fields cover only a narrow azimuth. In this study, we first derived a P‒P wave reflection coefficient approximation formula for coal seams, expressed directly in terms of fracture parameters, using the Schoenberg linear-slip model and the Hudson model. We analyzed the P‒P wave reflection coefficient's response to the fracture parameters using a two-layer forward model. Accordingly, we designed a two-step inversion workflow for AVOA inversion of the fracture parameters. Thereafter, high-density wide-azimuth pre-stack 3D seismic data were used to invert the fracture density and strike of the target coal seam. The inversion accuracy was constrained by Student's t-distribution testing. Analysis and validation of the inversion results revealed that the relative fracture density corresponds to fault locations, with the strike of the fractures and faults mainly at 0°. Therefore, the AVOA inversion method and technical workflow proposed here can be used to efficiently predict and evaluate fracture parameters of coal seams.
Abstract: For a general linear model, spherical distributions are often considered when the errors do not have a normal distribution. Several authors [1-3] studied the least squares and James–Stein estimations for a linear model whose errors follow multivariate t or more general spherical distributions. In this paper the test problem for sphericity of errors is considered. We propose an exact test for sphericity by using the conditional probability integral transformation and another transformation. As an important special case, the corresponding test statistics for the multivariate t distribution are obtained.
Funding: Partly funded by a Ph.D. scholarship from the Islamic Development Bank and by the Engineering and Physical Sciences Research Council of the UK under Fellowship Grant EP/P004245/1.
Abstract: We present new variants of Estimation of Distribution Algorithms (EDA) for large-scale continuous optimisation that extend and enhance a recently proposed random projection (RP) ensemble based approach. The main novelty here is to depart from the theory of RPs that requires (sub-)Gaussian random matrices for norm preservation, and instead, for the purposes of high-dimensional search, we propose to employ random matrices with independent and identically distributed entries drawn from a t-distribution. We show analytically that the implicitly resulting high-dimensional covariance of the search distribution is enlarged as a result. Moreover, the extent of this enlargement is controlled by a single parameter, the degrees of freedom. For this reason, in the context of optimisation, such heavy-tailed random matrices turn out to be preferable over the previously employed (sub-)Gaussians. Based on this observation, we then propose novel covariance adaptation schemes that are able to adapt the degrees-of-freedom parameter during the search, giving rise to a flexible approach to balance exploration versus exploitation. We perform a thorough experimental study on high-dimensional benchmark functions, and provide statistical analyses that demonstrate the state-of-the-art performance of our approach when compared with existing alternatives on problems with 1000 search variables.
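Generating such a heavy-tailed random projection matrix is a one-liner; the sketch below (dimensions, degrees of freedom and seed are illustrative, not from the paper) hints at the covariance enlargement: for df = ν > 2 each t entry has variance ν/(ν−2) > 1, whereas standard-normal entries have variance 1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def t_projection_matrix(k, d, df):
    """Random k x d projection matrix with i.i.d. Student's t entries."""
    return stats.t(df=df).rvs(size=(k, d), random_state=rng)

# Heavy-tailed entries: for df=3 the entry variance is 3/(3-2) = 3,
# versus 1 for (sub-)Gaussian entries -- one view of the enlarged
# implied search covariance. (The sample variance converges slowly
# for heavy tails, so it only tends toward 3.)
R = t_projection_matrix(50, 1000, df=3)
print(R.shape, R.var())
```

Lowering df fattens the tails (more exploration); raising it recovers near-Gaussian behaviour (more exploitation), which is what the adaptive schemes in the abstract exploit.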
Funding: Supported by the National Key R&D Program of China (Grant No. 2017YFB0404000), the Key R&D Program of Ningxia Hui Autonomous Region (Grant No. 2018BBF02009), and the Open Fund of the Yunnan Province Key Laboratory of Food Processing and Safety Control (Grant No. K16-507106-007).
Abstract: To improve the light welfare of Nile tilapia in aquaculture, the influence of hunger level on the light spectrum preference of Nile tilapia was explored in this study. The whole experiment was based on the emptying of the gastrointestinal contents and was carried out under controlled laboratory conditions. The light spectrum preference was assessed by counting the head location of fish in each experimental tank, each containing seven compartments (i.e., red, blue, white, yellow, black, green and a public area). t-Distributed Stochastic Neighbor Embedding (t-SNE) was adopted to visualize the hunger-level-based dynamic preference for light spectrum in two-dimensional space. According to the clustering results, significant differences in the light spectrum preferences of Nile tilapia under different hunger levels were indicated. In addition, the average visit frequency in the green compartment was significantly lower than that in the other color compartments throughout the experiment, and the total visit frequency in the red compartment was relatively high during the whole experiment.
Abstract: This paper investigates and discusses the use of information divergence, through the widely used Kullback–Leibler (KL) divergence, under the multivariate (generalized) γ-order normal distribution (γ-GND). The behavior of the KL divergence, as far as its symmetry is concerned, is studied by calculating the divergence of the γ-GND over the Student's multivariate t-distribution and vice versa. Certain special cases are also given and discussed. Furthermore, three symmetrized forms of the KL divergence, i.e., the Jeffreys distance, the geometric-KL and the harmonic-KL distances, are computed between two members of the γ-GND family, while the corresponding differences between those information distances are also discussed.
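The asymmetry of KL and its symmetrizations are easy to see on a simpler family. The sketch below uses two univariate normals (not the γ-GND or multivariate t of the paper) and plausible arithmetic/geometric/harmonic symmetrizations of the two directed divergences; the exact definitions of the geometric-KL and harmonic-KL distances in the paper may differ, so treat these as illustrative assumptions.

```python
import math

def kl_normal(mu1, s1, mu2, s2):
    """Closed-form KL(N(mu1, s1^2) || N(mu2, s2^2))."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

# KL is asymmetric: swapping the arguments changes the value.
kl_ab = kl_normal(0.0, 1.0, 1.0, 2.0)
kl_ba = kl_normal(1.0, 2.0, 0.0, 1.0)

# Jeffreys distance: additive (arithmetic) symmetrization of the two KLs.
jeffreys = kl_ab + kl_ba
# Geometric and harmonic symmetrizations (assumed forms, for illustration).
geometric = math.sqrt(kl_ab * kl_ba)
harmonic = 2 * kl_ab * kl_ba / (kl_ab + kl_ba)

print(kl_ab, kl_ba, jeffreys, geometric, harmonic)
```

All three symmetrized quantities coincide when the two directed divergences are equal, and their gaps quantify the asymmetry the paper studies.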
Funding: The work described in this paper was supported by a grant of the General Research Fund (GRF) from the Research Grants Council of Hong Kong SAR (Project No. CUHK418011E).
Abstract: Three Bayesian-related approaches, namely variational Bayesian (VB), minimum message length (MML) and Bayesian Ying-Yang (BYY) harmony learning, have been applied to automatically determining an appropriate number of components when learning a Gaussian mixture model (GMM). This paper provides a comparative investigation of these approaches with not only a Jeffreys prior but also a conjugate Dirichlet-Normal-Wishart (DNW) prior on the GMM. In addition to adopting the existing algorithms either directly or with some modifications, the algorithm for VB with the Jeffreys prior and the algorithm for BYY with the DNW prior are developed in this paper to fill the missing gap. The performance of automatic model selection is evaluated through extensive experiments, with several empirical findings: 1) Considering priors merely on the mixing weights, each of the three approaches makes biased mistakes, while considering priors on all the parameters of the GMM reduces the bias of each approach and also improves its performance. 2) As the Jeffreys prior is replaced by the DNW prior, all three approaches improve their performance. Moreover, the Jeffreys prior makes MML slightly better than VB, while the DNW prior makes VB better than MML. 3) As the hyperparameters of the DNW prior are further optimized by each approach's own learning principle, BYY improves its performance, while VB and MML deteriorate when there are too many free hyperparameters. In fact, VB and MML lack a good guide for optimizing the hyperparameters of the DNW prior. 4) BYY considerably outperforms both VB and MML for any type of prior and whether or not the hyperparameters are optimized. Unlike VB and MML, which rely on appropriate priors to perform model selection, BYY does not depend strongly on the type of prior: it has model selection ability even without priors, performs already very well with the Jeffreys prior, and improves incrementally as the Jeffreys prior is replaced by the DNW prior. Finally, all algorithms are applied to the Berkeley segmentation database of real-world images. Again, BYY considerably outperforms both VB and MML, especially in detecting the objects of interest against a confusing background.
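The VB route to automatic component selection is available off the shelf in scikit-learn's `BayesianGaussianMixture`: start with more components than needed and let a small Dirichlet concentration prior on the mixing weights switch surplus ones off. This is a generic sketch of that mechanism on synthetic data, not a reproduction of the paper's algorithms or priors.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Synthetic 2-D data from 3 well-separated Gaussian clusters.
centers = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])
X = np.vstack([rng.normal(c, 0.5, size=(150, 2)) for c in centers])

# Variational Bayesian GMM, deliberately over-provisioned with components;
# a small weight-concentration prior encourages pruning of extras.
vb = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior=1e-3,
    max_iter=500,
    random_state=0,
).fit(X)

# Components that retain non-negligible posterior weight.
effective = int(np.sum(vb.weights_ > 0.01))
print(effective)  # typically close to the true number of clusters (3)
```

This mirrors the model-selection behaviour compared in the abstract: with a suitable prior, the variational posterior concentrates weight on the components the data actually support.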