The main problem with current deep learning frameworks for the detection and segmentation of electrical equipment is low precision. Because deep learning-based video surveillance provides a reliable, safe and easy-to-operate technology for unmanned inspection of electrical equipment, this paper uses the bottleneck attention module (BAM) attention mechanism to improve the Solov2 model and proposes a new electrical equipment segmentation model. Firstly, the BAM attention mechanism is integrated into the feature extraction network to adaptively learn the correlation between feature channels, thereby improving the expressive power of the feature map; secondly, a weighted sum of cross-entropy loss and Dice loss is designed as the mask loss to improve the segmentation accuracy and robustness of the model; finally, the non-maximum suppression (NMS) algorithm is applied to better handle the overlap problem in instance segmentation. Experimental results show that the proposed method achieves an average segmentation accuracy (mAP) of 80.4% on a dataset of three types of electrical equipment (transformers, insulators and voltage transformers), improving detection accuracy by more than 5.7% over the original Solov2 model. The proposed segmentation model can provide a practical technical means for the intelligent management of power systems.
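The weighted mask loss described above can be written down in a few lines. A minimal numpy sketch follows; the weight `alpha` and the per-pixel binary formulation are assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary mask; pred holds probabilities in [0, 1]."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over pixels."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p))))

def mask_loss(pred, target, alpha=0.5):
    """Weighted sum of cross-entropy and Dice loss (alpha is an assumed weight)."""
    return alpha * bce_loss(pred, target) + (1 - alpha) * dice_loss(pred, target)
```

The cross-entropy term drives per-pixel accuracy while the Dice term directly optimizes region overlap, which helps when foreground pixels are scarce.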
The LINEX (linear and exponential) loss function is a useful asymmetric loss function. The purpose of using a LINEX loss function in credibility models is to solve the problem of very high premiums obtained by using a symmetric quadratic loss function, as in most classical credibility models. The Bayes premium and the credibility premium are derived under the LINEX loss function. The consistency of the Bayes premium and the credibility premium is also checked. Finally, a simulation is presented to show the differences between the credibility estimator derived here and the classical one.
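For reference, the LINEX loss takes the standard form (scale $b > 0$, asymmetry parameter $a \neq 0$), and minimizing its posterior expectation gives the Bayes estimator:

```latex
L(\Delta) = b\left(e^{a\Delta} - a\Delta - 1\right), \qquad \Delta = \hat{\theta} - \theta,
\qquad
\hat{\theta}_B = -\frac{1}{a}\,\ln \mathbb{E}\!\left[e^{-a\theta}\mid x\right].
```

For $a > 0$ the exponential term dominates when $\Delta > 0$, so overestimation (and hence an excessive premium) is penalized more heavily than underestimation, which is exactly the asymmetry the credibility setting calls for.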
When a deep learning model is trained on network intrusion detection data, it overfits and test-set accuracy drops, owing to convergence problems of the traditional loss function. Firstly, we use a network model architecture combining the GELU activation function and a deep neural network; secondly, the cross-entropy loss function is improved to a weighted cross-entropy loss function, which is then applied to intrusion detection to improve accuracy. To compare experimental results, the KDDCup99 dataset, commonly used in intrusion detection, is selected as the experimental data, with accuracy, precision, recall and F1-score as evaluation metrics. The experimental results show that the model using the weighted cross-entropy loss function combined with the GELU activation function under the deep neural network architecture improves the evaluation metrics by about 2% compared with the ordinary cross-entropy loss model. The experiments prove that the weighted cross-entropy loss function can enhance the model's ability to discriminate samples.
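The two ingredients named above, GELU activation and class-weighted cross-entropy, can be sketched as follows; the class weights here are illustrative, not the paper's values:

```python
import numpy as np
from math import erf, sqrt

def gelu(x):
    """Gaussian Error Linear Unit: x * Phi(x), with Phi the standard normal CDF."""
    return x * 0.5 * (1.0 + erf(x / sqrt(2.0)))

def weighted_cross_entropy(probs, labels, class_weights, eps=1e-12):
    """Cross-entropy where each sample is weighted by its class weight,
    so rare classes (e.g. attack types) can be emphasized."""
    p = np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)
    w = class_weights[labels]
    return float(np.sum(-w * np.log(p)) / np.sum(w))
```

Raising the weight of an under-represented class increases its contribution to the gradient, which counteracts the tendency to overfit the majority traffic class.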
Much research effort has been devoted to the economic design of X & S control charts; however, there are problems with the usual methods. On the one hand, it is difficult to estimate the relationship between costs and other model parameters, so the economic design method is often not effective in producing charts that can quickly detect small shifts before substantial losses occur; on the other hand, in many cases only one type of process shift, or only one pair of process shifts, is taken into consideration, which may not correctly reflect actual process conditions. To improve the behavior of the economic design of control charts, a cost & loss model with Taguchi's loss function for the economic design of X & S control charts is developed, which is regarded as an optimization problem with multiple statistical constraints. The optimization design is also carried out based on a number of combinations of process shifts collected from field operation of conventional control charts; thus more hidden information about the shift combinations is mined and employed in the optimization design of control charts. At the same time, an improved particle swarm optimization (IPSO) is developed to solve this optimization problem in the design of X & S control charts. IPSO is first tested on several benchmark problems from the literature and evaluated with standard performance metrics. Experimental results show that the proposed algorithm has significant advantages in obtaining the optimal design parameters of the charts. The proposed method can substantially reduce the total cost (or loss) of the control charts, and it will be a promising tool for the economic design of control charts.
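Taguchi's quadratic loss, on which the cost & loss model builds, has a simple closed form; the cost coefficient `k` below is an assumed illustrative value:

```python
def taguchi_loss(y, target, k):
    """Taguchi quadratic quality loss for a nominal-the-best characteristic:
    loss grows with the squared deviation of y from the target value."""
    return k * (y - target) ** 2

def expected_loss(mu, sigma, target, k):
    """Expected loss per unit for a process with mean mu and std sigma:
    k * (variance + squared bias), so both spread and off-target cost money."""
    return k * (sigma ** 2 + (mu - target) ** 2)
```

The expected-loss decomposition is what links chart design to economics: tightening the chart reduces the bias term `(mu - target)**2` caused by undetected shifts, at the price of more sampling and false alarms.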
The effective energy loss functions for Al have been derived from the differential inverse inelastic mean free path based on the extended Landau approach. It has been revealed that the effective energy loss function is very close in value to the theoretical surface energy loss function in the lower energy-loss region but gradually approaches the theoretical bulk energy loss function in the higher energy-loss region. Moreover, the intensity corresponding to surface excitation in effective energy loss functions decreases with increasing primary electron energy. These facts show that the present effective energy loss function describes not only surface excitation but also bulk excitation. Finally, REELS spectra simulated by a Monte Carlo method based on the effective energy loss functions have reproduced the experimental REELS spectra with considerable success.
Neyman-Pearson classification has been studied in several articles before, but all of them proceeded in classes of indicator functions with the indicator function as the loss function, which makes the calculation difficult. This paper investigates Neyman-Pearson classification with a convex loss function in an arbitrary class of real measurable functions. A general condition is given under which Neyman-Pearson classification with a convex loss function has the same classifier as that with the indicator loss function. We analyze NP-ERM with a convex loss function and prove its performance guarantees. An example of a complexity penalty pair for the convex-loss risk in terms of Rademacher averages is studied, which produces a tight PAC bound for NP-ERM with a convex loss function.
Recently, the evolution of Generative Adversarial Networks (GANs) has embarked on a journey of revolutionizing the field of artificial and computational intelligence. To improve the generating ability of GANs, various loss functions have been introduced to measure the degree of similarity between the samples generated by the generator and the real data samples, and their effectiveness in improving the generating ability of GANs has been examined. In this paper, we present a detailed survey of the loss functions used in GANs and provide a critical analysis of the pros and cons of these loss functions. First, the basic theory of GANs and their training mechanism are introduced. Then, the most commonly used loss functions in GANs are introduced and analyzed. Third, experimental analyses and comparisons of these loss functions are presented in different GAN architectures. Finally, several suggestions on choosing suitable loss functions for image synthesis tasks are given.
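As a concrete anchor for such a survey, the original minimax discriminator loss and the widely used non-saturating generator loss can be sketched numerically; the sketch assumes the discriminator outputs probabilities:

```python
import numpy as np

def d_loss(d_real, d_fake, eps=1e-12):
    """Standard GAN discriminator loss: -E[log D(x)] - E[log(1 - D(G(z)))]."""
    return float(-np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps)))

def g_loss_nonsaturating(d_fake, eps=1e-12):
    """Non-saturating generator loss: -E[log D(G(z))], which keeps gradients
    alive early in training when the discriminator easily rejects fakes."""
    return float(-np.mean(np.log(d_fake + eps)))
```

Later loss families surveyed in such work (least-squares, Wasserstein, hinge) replace these log terms to change the divergence being minimized, but the generator/discriminator split stays the same.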
We present a fitting calculation of the energy-loss function for 26 bulk materials, including 18 pure elements (Ag, Al, Au, C, Co, Cs, Cu, Er, Fe, Ge, Mg, Mo, Nb, Ni, Pd, Pt, Si, Te) and 8 compounds (AgCl, Al2O3, AlAs, CdS, SiO2, ZnS, ZnSe, ZnTe) for application to surface electron spectroscopy analysis. The experimental energy-loss function, which is derived from measured optical data, is fitted to a finite sum of formulas based on the Drude-Lindhard dielectric model. By checking the oscillator strength-sum and perfect-screening-sum rules, we have validated the high accuracy of the fitting results. Furthermore, based on the fitted parameters, the simulated reflection electron energy-loss spectroscopy (REELS) spectrum shows good agreement with experiment. The calculated fitting parameters of the energy-loss function are stored in an open online database at http://micro.ustc.edu.cn/ELF/ELF.html.
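The Drude-Lindhard fitting form is a finite sum of oscillator terms; a numpy sketch follows, with hypothetical fitting triples rather than the database's actual parameters:

```python
import numpy as np

def elf_drude(omega, params):
    """Energy-loss function Im(-1/eps) as a finite sum of Drude-Lindhard
    oscillators. params: iterable of (A_i, gamma_i, omega_i) fitting triples
    (amplitude, damping, resonance energy); values here are hypothetical."""
    elf = np.zeros_like(omega, dtype=float)
    for A, gamma, omega_i in params:
        elf += A * gamma * omega / ((omega_i**2 - omega**2)**2 + (gamma * omega)**2)
    return elf
```

Each oscillator contributes a peak near its resonance energy, so a handful of triples can reproduce the measured optical loss spectrum of a material.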
In this paper, the MLINEX loss function is considered to solve the problem of high premiums that arises in credibility models under a symmetric quadratic loss function. The Bayes premium and credibility premium are obtained under the MLINEX loss function. A credibility model with multiple contracts is established and the corresponding credibility estimator is derived under the MLINEX loss function. For this model, estimates of the structure parameters and a numerical example are also given.
With the continuous development of face recognition networks, the selection of the loss function plays an increasingly important role in improving accuracy. The loss function of a face recognition network needs to minimize the intra-class distance while expanding the inter-class distance. So far, one mainstream loss-function optimization method is to add penalty terms, such as orthogonal loss, to further constrain the original loss function. Another is to optimize using losses based on an angular/cosine margin. The last is triplet loss and a new type of joint optimization based on HST loss and ACT loss. In this paper, based on these three methods with good practical performance and the joint optimization method, various loss functions are thoroughly reviewed.
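A loss of the second kind mentioned above can be sketched as an ArcFace-style additive angular margin; the scale `s` and margin `m` below are assumed values, and the class weights are illustrative:

```python
import numpy as np

def angular_margin_logits(embedding, weights, label, m=0.5, s=30.0):
    """Additive angular margin: adds margin m to the angle between the
    embedding and its own class center, shrinking the target logit so the
    network must pull intra-class samples tighter to compensate."""
    e = embedding / np.linalg.norm(embedding)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = w @ e                                  # cosine to each class center
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    logits = s * cos.copy()
    logits[label] = s * np.cos(theta[label] + m) # penalize only the true class
    return logits
```

The returned logits would then feed a softmax cross-entropy; the margin enters only the true-class term, which is what enforces the inter-class/intra-class distance trade-off described above.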
Plateau forests play an important role in the high-altitude ecosystem and contribute to the global carbon cycle. Plateau forest monitoring requires in-situ data from field investigation. With the recent development of remote sensing technology, large-scale satellite data have become available for surface monitoring. Due to the varied information contained in remote sensing data, obtaining accurate plateau forest segmentation from remote sensing imagery remains challenging. Recently developed deep learning (DL) models such as deep convolutional neural networks (CNNs) have been widely used in image processing tasks and show promise for remote sensing segmentation. However, due to the unique characteristics and growing environment of plateau forests, generating highly robust features requires structures designed for robustness. Aiming at the problem that existing deep learning segmentation methods have difficulty generating accurate boundaries of plateau forest within satellite imagery, we propose a method that uses boundary feature maps for collaborative learning. There are three improvements in this article. First, we design a multi-input model for plateau forest segmentation, including the boundary feature map as an additional input label to increase the amount of information at the input. Second, we apply a strong boundary search algorithm to obtain boundary values and propose a boundary-value loss function. Third, we improve the U-Net segmentation network and combine it with dense blocks to improve feature reuse and reduce the image information loss of the model during training. We then demonstrate the utility of our method by detecting plateau forest regions from ZY-3 satellite imagery of the Sanjiangyuan nature reserve. The experimental results show that the proposed method can comprehensively utilize multiple kinds of feature information, which is beneficial for extracting information from boundaries, and its detection accuracy is generally higher than several state-of-the-art algorithms. As a result of this investigation, the study will contribute in several ways to our understanding of DL for region detection and will provide a basis for further research.
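The boundary-loss idea can be sketched by extracting a boundary map from the ground-truth mask and adding a weighted boundary term to the segmentation loss. This is a minimal numpy sketch; the weight `lam` and the 4-neighbourhood boundary definition are assumptions, not the paper's exact algorithm:

```python
import numpy as np

def boundary_map(mask):
    """Boundary of a binary mask: pixels kept by the mask but removed by a
    4-neighbourhood erosion (i.e. pixels touching the background)."""
    padded = np.pad(mask, 1, mode="edge")
    eroded = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
              padded[1:-1, :-2] & padded[1:-1, 2:] & mask)
    return mask & ~eroded

def joint_loss(pred, mask, pred_boundary, lam=0.5, eps=1e-7):
    """Segmentation BCE plus a weighted boundary-map BCE (lam is assumed)."""
    def bce(p, t):
        p = np.clip(p, eps, 1.0 - eps)
        return float(np.mean(-(t * np.log(p) + (1 - t) * np.log(1 - p))))
    return bce(pred, mask) + lam * bce(pred_boundary, boundary_map(mask))
```

Supervising a dedicated boundary prediction alongside the region prediction is what gives the network an explicit gradient signal at forest edges, where plain region losses are weakest.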
Prakash and Singh presented shrinkage testimators under the invariant version of the LINEX loss function for the scale parameter of an exponential distribution in the presence of Type-II censored data. In this paper, we extend this approach to the gamma distribution, so that Prakash and Singh's result is a special case of ours. In fact, some shrinkage testimators for the scale parameter of a gamma distribution, when Type-II censored data are available, are suggested under the LINEX loss function, assuming the shape parameter is known. The proposed testimators are compared with an improved estimator. All these estimators are compared empirically using Monte Carlo simulation.
Reliability analysis is the key to evaluating software quality. Since the early 1970s, the Power Law Process, among others, has been used to assess the rate of change of software reliability as a time-varying function via its intensity function. The applicability of Bayesian analysis to the Power Law Process is justified using real software failure times. The choice of a loss function is an important element of the Bayesian setting. Analytical likelihood-based Bayesian reliability estimates of the Power Law Process under the squared-error and Higgins-Tsokos loss functions were obtained for different prior knowledge of its key parameter. As a result of a simulation analysis and using real data, the Bayesian reliability estimate under the Higgins-Tsokos loss function is not only as robust as the Bayesian reliability estimate under the squared-error loss function but also performs better, and both are superior to the maximum likelihood reliability estimate. A sensitivity analysis showed that the Bayesian estimate of the reliability function is sensitive to the prior, whether parametric or non-parametric, and to the loss function. An interactive user interface application was additionally developed in the Wolfram language to compute and visualize the Bayesian and maximum likelihood estimates of the intensity and reliability functions of the Power Law Process for given data.
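The Power Law Process quantities referred to above have standard closed forms, sketched here independently of the paper's specific priors:

```python
from math import exp

def intensity(t, beta, theta):
    """Power Law Process intensity: (beta/theta) * (t/theta)**(beta - 1).
    beta < 1 means a decreasing failure rate, i.e. reliability growth."""
    return (beta / theta) * (t / theta) ** (beta - 1)

def mean_failures(t, beta, theta):
    """Expected number of failures by time t: (t/theta)**beta."""
    return (t / theta) ** beta

def reliability(t, s, beta, theta):
    """Probability of no failure in the interval (t, t + s]."""
    return exp(-(mean_failures(t + s, beta, theta) - mean_failures(t, beta, theta)))
```

With `beta = 1` the process reduces to a homogeneous Poisson process, a useful sanity check for any estimator of these functions.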
The multiple patterns of internal solitary wave interactions (ISWI) are a complex oceanic phenomenon. Satellite remote sensing techniques indirectly detect these ISWI but do not provide information on their detailed structure and dynamics. Recently, the authors considered a three-layer fluid with shear flow and developed a (2+1) Kadomtsev-Petviashvili (KP) model capable of describing five types of oceanic ISWI: O-type, P-type, TO-type, TP-type, and Y-shaped. Deep learning models, particularly physics-informed neural networks (PINNs), are widely used in the field of fluids and internal solitary waves. However, the authors find that the amplitude of internal solitary waves is much smaller than the wavelength and that ISWI occur at relatively large spatial scales; these characteristics lead to an imbalance in the loss function of the PINN model. To solve this problem, the authors introduce two weighted loss function methods, fixed weighting and adaptive weighting, to improve the PINN model. This successfully simulated the detailed structure and dynamics of ISWI, with simulation results corresponding to the satellite images. In particular, the adaptive weighting method can automatically update the weights of different terms in the loss function and outperforms the fixed weighting method in terms of generalization ability.
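One way to realize the weighting idea is sketched below; the inverse-magnitude adaptive rule here is an assumed illustration of rebalancing loss terms of very different scales, not necessarily the authors' update scheme:

```python
import numpy as np

def combine_losses(losses, weights):
    """Total PINN loss as a weighted sum of PDE-residual, boundary and
    data-fit terms."""
    return float(np.dot(weights, losses))

def adaptive_weights(losses, eps=1e-12):
    """Assumed adaptive rule: weight each term by the inverse of its current
    magnitude (normalized to sum to 1), so a small-amplitude term is not
    drowned out by a large-scale one during training."""
    w = 1.0 / (np.asarray(losses, dtype=float) + eps)
    return w / w.sum()
```

Recomputing the weights each epoch keeps all terms contributing comparable gradients, which is the imbalance problem the abstract describes for small-amplitude, large-scale waves.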
In this paper, isogeometric analysis (IGA) is employed to develop an acoustic radiation model for a double plate-acoustic cavity coupling system, with a focus on analyzing the sound transmission loss (STL). The functionally graded (FG) plate exhibits material properties that vary in-plane, and the power-law rule is adopted as the governing principle for material mixing. To validate the harmonic response and demonstrate the accuracy and convergence of the isogeometric modeling, ANSYS is used for comparison with numerical examples. A plane wave serves as the acoustic excitation, and the Rayleigh integral is applied to discretize the radiating plate. The STL results are compared with the literature, confirming the reliability of the coupling system. Finally, the impact of cavity depth and the power-law parameter on the STL is investigated.
Support vector machines (SVMs) are an important class of machine learning methods arising from the interaction of statistical theory and optimization, and have been extensively applied to text categorization, disease diagnosis, face detection and so on. The loss function is the core research content of SVMs, and its variational properties play an important role in the analysis of optimality conditions, the design of optimization algorithms, the representation of support vectors and the study of dual problems. This paper summarizes and analyzes the 0-1 loss function and its eighteen popular surrogate loss functions in SVMs, and gives three variational properties of these loss functions: the subdifferential, the proximal operator and the Fenchel conjugate, of which nine proximal operators and fifteen Fenchel conjugates are derived in this paper.
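As one example of the variational properties catalogued here, the proximal operator of the hinge loss has a simple closed form:

```python
def hinge(x):
    """Hinge loss l(x) = max(0, 1 - x), with x the margin y * f(x)."""
    return max(0.0, 1.0 - x)

def prox_hinge(v, lam):
    """Proximal operator of the hinge loss with step lam:
    argmin_x (1/(2*lam)) * (x - v)**2 + hinge(x).
    Piecewise: shift up by lam left of the kink, clamp to 1 near it,
    identity to the right where the loss is flat."""
    if v < 1.0 - lam:
        return v + lam
    if v > 1.0:
        return v
    return 1.0
```

Closed-form proximal operators like this are what make proximal splitting algorithms practical for nonsmooth SVM losses, since each iteration avoids an inner optimization.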
The basic purpose of a quality loss function is to evaluate a loss to customers in a quantitative manner. Although several multivariate loss functions have been proposed and studied in the literature, there is room for improvement. A good multivariate loss function should represent an appropriate compromise in terms of both process economics and the correlation structure among various responses. More importantly, it should be easily understood and implemented in practice. According to this criterion, we first introduce a pragmatic dimensionless multivariate loss function proposed by Artiles-Leon, then we improve the multivariate loss function in two respects: one is making it suitable for all three types of quality characteristics; the other is considering the correlation structure among the various responses, which makes the improved multivariate loss function more adequate in the real world. On the basis of these, an example from industrial practice is provided to compare our improved method with other methods, and finally some reviews are presented in conclusion.
A generalization of Zellner's balanced loss function is proposed. General admissibility in a general multivariate linear model is investigated under the generalized balanced loss function. The necessary and sufficient conditions for linear estimators to be generally admissible in classes of homogeneous and nonhomogeneous linear estimators are given, respectively.
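For context, Zellner's balanced loss in the linear model $Y = X\beta + \varepsilon$ weighs goodness of fit against estimation precision; the generalized form studied in the paper modifies this baseline (its exact shape is the paper's contribution and is not reproduced here):

```latex
L(\beta, \delta) = \omega\,(Y - X\delta)^{\top}(Y - X\delta)
  + (1 - \omega)\,(\delta - \beta)^{\top} X^{\top} X\,(\delta - \beta),
  \qquad 0 \le \omega \le 1.
```

Setting $\omega = 0$ recovers a pure precision-of-estimation criterion, while $\omega = 1$ scores only the fit to the observed data.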
Soft margin support vector machine (SVM) with the hinge loss function is an important classification algorithm, which has been widely used in image recognition, text classification and so on. However, solving soft margin SVM with the hinge loss function generally entails the sub-gradient projection algorithm, which is very time-consuming when processing big training data sets. To address this, an efficient quantum algorithm is proposed. Specifically, this algorithm implements the key task of the sub-gradient projection algorithm, obtaining the classical sub-gradients in each iteration, based mainly on quantum amplitude estimation, the amplitude amplification algorithm and the controlled rotation operator. Compared with its classical counterpart, this algorithm has a quadratic speedup in the number of training data points. It is worth emphasizing that the optimal model parameters obtained by this algorithm are in classical form rather than in quantum-state form. This enables the algorithm to classify new data at little cost once the optimal model parameters are determined.
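The classical sub-gradient that the quantum routine estimates can be written in a few lines for the soft margin objective $\frac{1}{2}\lVert w\rVert^2 + C\sum_i \max(0, 1 - y_i\, w^{\top} x_i)$:

```python
import numpy as np

def svm_subgradient(w, X, y, C):
    """Sub-gradient of (1/2)||w||^2 + C * sum_i hinge(y_i * w.x_i) at w:
    the regularizer contributes w; each margin-violating point contributes
    -C * y_i * x_i (points with margin >= 1 have zero hinge gradient)."""
    margins = y * (X @ w)
    active = margins < 1.0                      # points violating the margin
    return w - C * (y[active, None] * X[active]).sum(axis=0)
```

The cost of this step is dominated by the sum over active training points, which is exactly the term the quantum algorithm accelerates quadratically.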
Purpose - We propose a Machine Learning (ML) approach that is trained on the available financial data, learns the trends in the data, and then uses the acquired knowledge for more accurate forecasting of financial series. This work provides more precise results when weighed against older financial series forecasting algorithms. The LSTM Classic will be used to forecast the momentum of the Financial Series Index and is also applied to its commodities. The network will be trained and evaluated for accuracy with various sizes of data sets, i.e. weekly historical data of MCX, GOLD and COPPER, and the results will be calculated. Design/methodology/approach - We seek a desirable LSTM model for script price forecasting from the perspective of minimizing MSE. The approach we have followed is shown below. (1) Acquire the dataset. (2) Define the training and testing columns in the dataset. (3) Transform the input values using a scaler. (4) Define the custom loss function. (5) Build and compile the model. (6) Visualize the improvements in results. Findings - Financial series trading is one of the oldest practices, in which a trader trades financial scripts, does business and earns wealth from companies that sell a part of their business on a trading platform. Forecasting financial script prices is a complex task that involves extensive human-computer interaction. Due to the correlated nature of financial series prices, conventional batch processing methods such as artificial neural networks and convolutional neural networks cannot be utilized efficiently for financial market analysis. We propose an online learning algorithm that utilizes an upgrade of recurrent neural networks called long short-term memory Classic (LSTM). The LSTM Classic is quite different from normal LSTM as it has a customized loss function. The LSTM Classic avoids long-term dependence issues because of its unique internal storage unit structure, and it helps forecast financial time series. The Financial Series Index is a combination of various commodities (time series). This makes the Financial Index more reliable than an individual financial time series, as it does not show a drastic change in its value even if some of its commodities are affected. Originality/value - We built the customized loss function model using the LSTM scheme and experimented on the MCX index as well as on its commodities, and improvements in results are calculated for every epoch run over all rows present in the dataset. For every epoch we can visualize the improvement in loss. One further improvement that can be made to our model: the relationship between price difference and directional loss is specific to each financial script. Deeper evaluations can be done to identify the best combination of these for a particular stock to obtain better results.
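A customized loss of the kind described, tying price error to directional agreement, might look like the following; this is an assumed illustration, not the authors' exact function:

```python
import numpy as np

def directional_mse(y_true, y_pred, prev, penalty=2.0):
    """Assumed custom loss: mean squared error, with errors weighted more
    heavily (by `penalty`) when the predicted price direction relative to
    the previous close disagrees with the actual direction."""
    true_dir = np.sign(y_true - prev)
    pred_dir = np.sign(y_pred - prev)
    w = np.where(true_dir == pred_dir, 1.0, penalty)
    return float(np.mean(w * (y_true - y_pred) ** 2))
```

Such a loss can be passed to a Keras-style model at compile time; the point of the design is that a trader cares more about getting the direction of the next move right than about the exact magnitude.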
Funding: Jilin Science and Technology Development Plan Project (No. 20200403075SF); Doctoral Research Start-Up Fund of Northeast Electric Power University (No. BSJXM-2018202).
Funding: Supported by the NNSF of China (71001046) and the NSF of Jiangxi Province (20114BAB211004).
Funding: Supported by the Defense Industrial Technology Development Program of China (Grant No. A2520110003).
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 10025420, 20075026, 60306006 and 90206009) and by a post-doctoral fellowship provided by a Grant-in-Aid for Creative Scientific Research of the Japanese government (No. 13GS0022). The authors would also like to thank Dr. H. Yoshikawa, National Institute for Materials Science, Japan, and Dr. T. Nagatomi, Osaka University, for their helpful comments.
Abstract: The effective energy loss functions for Al have been derived from the differential inverse inelastic mean free path based on the extended Landau approach. It is revealed that the effective energy loss function is very close in value to the theoretical surface energy loss function in the lower energy-loss region but gradually approaches the theoretical bulk energy loss function in the higher energy-loss region. Moreover, the intensity corresponding to surface excitation in the effective energy loss functions decreases as the primary electron energy increases. These facts show that the present effective energy loss function describes not only surface excitation but also bulk excitation. Finally, REELS spectra simulated by a Monte Carlo method using the effective energy loss functions reproduce the experimental REELS spectra with considerable success.
Funding: This is a plenary report on the International Symposium on Approximation Theory and Remote Sensing Applications held in Kunming, China, in April 2006. Supported in part by the NSF of China under Grants 10571010 and 10171007, and by a Startup Grant for Doctoral Research of Beijing University of Technology.
Abstract: Neyman-Pearson classification has been studied in several earlier articles, but all of them worked in classes of indicator functions with the indicator function as the loss function, which makes the computation difficult. This paper investigates Neyman-Pearson classification with a convex loss function over an arbitrary class of real measurable functions. A general condition is given under which Neyman-Pearson classification with a convex loss function yields the same classifier as that with the indicator loss function. We analyze NP-ERM with a convex loss function and prove its performance guarantees. An example of a complexity-penalty pair for the convex-loss risk in terms of Rademacher averages is studied, which produces a tight PAC bound for NP-ERM with a convex loss function.
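The computational point above rests on replacing the non-convex indicator (0-1) loss with a convex surrogate that upper-bounds it. A minimal sketch of two standard surrogates as functions of the margin y*f(x) (the scaling of the logistic loss is one common convention, not the paper's):

```python
import math

def zero_one_loss(margin):
    """Indicator (0-1) loss as a function of the margin y*f(x)."""
    return 1.0 if margin <= 0 else 0.0

def hinge_loss(margin):
    """Convex hinge surrogate: max(0, 1 - margin)."""
    return max(0.0, 1.0 - margin)

def logistic_loss(margin):
    """Convex logistic surrogate, scaled by 1/log(2) so that it
    upper-bounds the 0-1 loss (value 1 at margin 0)."""
    return math.log(1.0 + math.exp(-margin)) / math.log(2.0)
```

Because both surrogates dominate the 0-1 loss pointwise, minimizing them controls the classification error while keeping the optimization convex.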
Abstract: Recently, the evolution of Generative Adversarial Networks (GANs) has embarked on a journey of revolutionizing the field of artificial and computational intelligence. To improve the generating ability of GANs, various loss functions have been introduced to measure the degree of similarity between the samples generated by the generator and the real data samples, with varying effectiveness in improving that generating ability. In this paper, we present a detailed survey of the loss functions used in GANs and provide a critical analysis of their pros and cons. First, the basic theory of GANs and their training mechanism are introduced. Then, the most commonly used loss functions in GANs are introduced and analyzed. Third, experimental analyses and comparisons of these loss functions are presented across different GAN architectures. Finally, several suggestions on choosing suitable loss functions for image synthesis tasks are given.
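As one concrete baseline among the losses such a survey covers, the original GAN's binary cross-entropy losses (with the common non-saturating generator variant) can be written per-sample as follows; this is a generic sketch, not the survey's own code:

```python
import math

def d_loss(d_real, d_fake):
    """Discriminator loss for one real/fake pair:
    -log D(x) - log(1 - D(G(z))), minimized when D(x) -> 1, D(G(z)) -> 0."""
    eps = 1e-12  # guards log(0)
    return -(math.log(d_real + eps) + math.log(1.0 - d_fake + eps))

def g_loss_nonsaturating(d_fake):
    """Non-saturating generator loss: -log D(G(z)), which gives the
    generator strong gradients early in training when D(G(z)) is small."""
    eps = 1e-12
    return -math.log(d_fake + eps)
```

Alternative losses surveyed in such work (e.g. least-squares or Wasserstein objectives) replace these log terms but keep the same adversarial structure.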
Abstract: We present a fitted calculation of the energy-loss function for 26 bulk materials, including 18 pure elements (Ag, Al, Au, C, Co, Cs, Cu, Er, Fe, Ge, Mg, Mo, Nb, Ni, Pd, Pt, Si, Te) and 8 compounds (AgCl, Al2O3, AlAs, CdS, SiO2, ZnS, ZnSe, ZnTe), for application to surface electron spectroscopy analysis. The experimental energy-loss function, which is derived from measured optical data, is fitted to a finite sum of terms based on the Drude-Lindhard dielectric model. By checking the oscillator-strength-sum and perfect-screening-sum rules, we have validated the high accuracy of the fitting results. Furthermore, based on the fitted parameters, the simulated reflection electron energy-loss spectroscopy (REELS) spectrum shows good agreement with experiment. The calculated fitting parameters of the energy-loss function are stored in an open, online database at http://micro.ustc.edu.cn/ELF/ELF.html.
Funding: Supported by the National Natural Science Foundation of China (11271189) and the Scientific Research Innovation Project of Jiangsu Province (KYZZ116_0175).
Abstract: In this paper, the MLINEX loss function is considered to address the problem of high premiums that arises in credibility models when a symmetric quadratic loss function is used. The Bayes premium and credibility premium are obtained under the MLINEX loss function. A credibility model with multiple contracts is established and the corresponding credibility estimator is derived under the MLINEX loss function. For this model, estimators of the structure parameters and a numerical example are also given.
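The MLINEX family builds on the LINEX (linear-exponential) loss, whose basic form is worth stating since it is what makes the premium asymmetric. A sketch of plain LINEX (the abstract does not give the exact MLINEX form, so only the underlying LINEX loss is shown; parameter names are conventional):

```python
import math

def linex_loss(delta, a, b=1.0):
    """LINEX loss L(delta) = b * (exp(a*delta) - a*delta - 1),
    where delta = estimate - true value. For a > 0, overestimation is
    penalized roughly exponentially and underestimation roughly linearly,
    which discourages the overly high premiums a symmetric quadratic
    loss can produce."""
    return b * (math.exp(a * delta) - a * delta - 1.0)
```

Note that the loss is zero only at delta = 0 and strictly positive elsewhere, like the quadratic loss, but unlike it the two sides are weighted differently.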
Funding: This work was supported in part by the National Natural Science Foundation of China (Grant No. 41875184) and by the Innovation Team of "Six Talent Peaks" in Jiangsu Province (Grant No. TD-XYDXX-004).
Abstract: With the continuous development of face recognition networks, the choice of loss function plays an increasingly important role in improving accuracy. A face recognition loss function needs to minimize the intra-class distance while enlarging the inter-class distance. To date, one mainstream optimization approach is to add penalty terms, such as an orthogonality loss, to further constrain the original loss function. Another is to optimize with losses based on an angular/cosine margin. A third is triplet loss, together with a newer joint optimization based on HST loss and ACT loss. In this paper, based on these three approaches with good practical performance and the joint optimization method, the various loss functions are thoroughly reviewed.
Funding: This work was supported by the following funds: Basic Research Program of Qinghai Province under Grant No. 2020-ZJ-709, National Key R&D Program of China (2018YFF01010100), Natural Science Foundation of Beijing (4212001), and Advanced Information Network Beijing Laboratory (PXM2019_014204_500029).
Abstract: Plateau forests play an important role in the high-altitude ecosystem and contribute to the global carbon cycle. Plateau forest monitoring has traditionally required in-situ data from field investigation. With recent developments in remote sensing, large-scale satellite data have become available for surface monitoring. However, because of the varied information contained in remote sensing data, obtaining accurate plateau forest segmentation from satellite imagery remains challenging. Recently developed deep learning (DL) models, such as deep convolutional neural networks (CNNs), have been widely used in image processing tasks and show promise for remote sensing segmentation. However, owing to the unique characteristics and growing environment of plateau forests, generating highly robust features requires specially designed structures. Aiming at the problem that existing deep learning segmentation methods struggle to produce accurate plateau forest boundaries in satellite imagery, we propose a method that uses boundary feature maps for collaborative learning. This article makes three improvements. First, we design a multi-input model for plateau forest segmentation that includes the boundary feature map as an additional input label, increasing the amount of information at the input. Second, we apply a strong boundary-search algorithm to obtain boundary values and propose a boundary-value loss function. Third, we improve the U-Net segmentation network by combining it with dense blocks to improve feature reuse and reduce the loss of image information during training. We then demonstrate the utility of our method by detecting plateau forest regions from ZY-3 satellite imagery of the Sanjiangyuan nature reserve. The experimental results show that the proposed method can comprehensively utilize multiple kinds of feature information, which is beneficial for extracting boundary information, and its detection accuracy is generally higher than that of several state-of-the-art algorithms. This investigation contributes in several ways to our understanding of DL for region detection and provides a basis for further research.
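The abstract does not give the exact form of its boundary-value loss; one plausible sketch of the idea, a pixelwise cross-entropy in which boundary pixels are up-weighted (the weight value and all names are illustrative assumptions), is:

```python
import math

def boundary_weighted_bce(y_true, y_prob, boundary_mask, w_boundary=5.0):
    """Pixelwise binary cross-entropy in which pixels flagged as lying on
    the forest boundary (boundary_mask == 1) receive a larger weight,
    pushing the network to sharpen its edge predictions.
    All inputs are flat per-pixel lists; w_boundary is illustrative."""
    eps = 1e-12  # guards log(0)
    total, norm = 0.0, 0.0
    for t, p, b in zip(y_true, y_prob, boundary_mask):
        w = w_boundary if b else 1.0
        total += -w * (t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps))
        norm += w
    return total / norm
```

With this weighting, the same per-pixel error costs more on the boundary than in the region interior.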
Abstract: Prakash and Singh presented shrinkage testimators under the invariant version of the LINEX loss function for the scale parameter of an exponential distribution in the presence of Type-II censored data. In this paper, we extend this approach to the gamma distribution, of which Prakash and Singh's setting is a special case. Specifically, some shrinkage testimators for the scale parameter of a gamma distribution, when Type-II censored data are available, are suggested under the LINEX loss function, assuming the shape parameter to be known. The proposed testimators are compared with an improved estimator, and all of these estimators are compared empirically using Monte Carlo simulation.
Abstract: Reliability analysis is key to evaluating software quality. Since the early 1970s, the Power Law Process, among others, has been used to assess the time-varying rate of change of software reliability through its intensity function. The applicability of Bayesian analysis to the Power Law Process is justified using real software failure times. The choice of loss function is an important element of the Bayesian setting. Analytical likelihood-based Bayesian reliability estimates of the Power Law Process under the squared-error and Higgins-Tsokos loss functions were obtained for different priors on its key parameter. As a result of a simulation analysis using real data, the Bayesian reliability estimate under the Higgins-Tsokos loss function is not only as robust as that under the squared-error loss function but also performs better, with both superior to the maximum likelihood reliability estimate. A sensitivity analysis showed the Bayesian estimate of the reliability function to be sensitive to the prior, whether parametric or non-parametric, and to the loss function. An interactive user-interface application was additionally developed in the Wolfram Language to compute and visualize the Bayesian and maximum likelihood estimates of the intensity and reliability functions of the Power Law Process for given data.
基金supported by the National Natural Science Foundation of China under Grant Nos.12275085,12235007,and 12175069Science and Technology Commission of Shanghai Municipality under Grant Nos.21JC1402500 and 22DZ2229014.
Abstract: The multiple patterns of internal solitary wave interactions (ISWI) are a complex oceanic phenomenon. Satellite remote sensing can detect these interactions indirectly but provides no information on their detailed structure and dynamics. Recently, the authors considered a three-layer fluid with shear flow and developed a (2+1)-dimensional Kadomtsev-Petviashvili (KP) model capable of describing five types of oceanic ISWI: O-type, P-type, TO-type, TP-type, and Y-shaped. Deep learning models, particularly physics-informed neural networks (PINNs), are widely used in the field of fluids and internal solitary waves. However, the authors find that the amplitude of internal solitary waves is much smaller than their wavelength and that the interactions occur at relatively large spatial scales; these characteristics lead to an imbalance among the terms of the PINN loss function. To solve this problem, the authors introduce two weighted-loss methods, fixed weighting and adaptive weighting, to improve the PINN model. This successfully simulates the detailed structure and dynamics of ISWI, with simulation results corresponding to the satellite images. In particular, the adaptive weighting method automatically updates the weights of the different terms in the loss function and outperforms the fixed weighting method in generalization ability.
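One simple adaptive-weighting rule that captures the idea of rebalancing loss terms of very different magnitudes (illustrative only; the paper's exact update rule is not given in the abstract) is to weight each term inversely to its current size:

```python
def adaptive_weights(loss_terms, eps=1e-8):
    """Assign each loss term a weight inversely proportional to its current
    magnitude (normalized to sum to 1), so that no single term, e.g. the
    PDE residual vs. the data-fit term in a PINN, dominates the total loss.
    This is one illustrative scheme, not the paper's exact method."""
    inv = [1.0 / (l + eps) for l in loss_terms]
    s = sum(inv)
    return [w / s for w in inv]

def total_loss(loss_terms, weights):
    """Weighted sum of the individual loss terms."""
    return sum(w * l for w, l in zip(weights, loss_terms))
```

With this rule each weighted term contributes roughly equally, which is the balance the adaptive method seeks; in practice the weights would be recomputed every few training steps.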
Abstract: In this paper, isogeometric analysis (IGA) is employed to develop an acoustic radiation model for a double-plate-acoustic-cavity coupling system, with a focus on analyzing the sound transmission loss (STL). The functionally graded (FG) plate exhibits in-plane variation of material properties, with the power-law rule adopted as the governing principle for material mixing. To validate the harmonic response and demonstrate the accuracy and convergence of the isogeometric model, ANSYS is used for comparison with the numerical examples. A plane wave serves as the acoustic excitation, and the Rayleigh integral is applied to discretize the radiating plate. The STL results are compared with the literature, confirming the reliability of the coupled model. Finally, the impacts of cavity depth and the power-law parameter on the STL are investigated.
Abstract: Support vector machines (SVMs) are an important class of machine learning methods arising from the interaction of statistical theory and optimization, and have been extensively applied to text categorization, disease diagnosis, face detection, and so on. The loss function is the core research topic of SVMs, and its variational properties play an important role in the analysis of optimality conditions, the design of optimization algorithms, the representation of support vectors, and the study of dual problems. This paper summarizes and analyzes the 0-1 loss function and eighteen of its popular surrogate loss functions in SVMs, and gives three variational properties of these loss functions: the subdifferential, the proximal operator, and the Fenchel conjugate, of which nine proximal operators and fifteen Fenchel conjugates are newly derived in this paper.
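As an example of the proximal operators discussed above, the proximal operator of the hinge loss f(u) = max(0, 1 - u) has a simple closed form, obtained by minimizing 0.5*(u - v)^2 + lam*f(u) over u (a standard result, sketched here independently of the paper's derivations):

```python
def prox_hinge(v, lam):
    """Proximal operator of the hinge loss f(u) = max(0, 1 - u):
    argmin_u 0.5*(u - v)**2 + lam*f(u).
    Three cases: hinge inactive (v >= 1), hinge fully active (v <= 1-lam),
    and the kink at u = 1 in between."""
    if v >= 1.0:
        return v           # hinge is zero near v; quadratic alone decides
    if v <= 1.0 - lam:
        return v + lam     # slope of hinge (-1) shifts the minimizer by lam
    return 1.0             # minimizer lands exactly on the kink
```

The test below checks the closed form against a brute-force grid minimization of the proximal objective.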
Abstract: The basic purpose of a quality loss function is to evaluate the loss to customers in a quantitative manner. Although several multivariate loss functions have been proposed and studied in the literature, there is room for improvement. A good multivariate loss function should represent an appropriate compromise between process economics and the correlation structure among the various responses; more importantly, it should be easy to understand and implement in practice. By this criterion, we first introduce the pragmatic dimensionless multivariate loss function proposed by Artiles-Leon, and then improve it in two respects: one is making it suitable for all three types of quality characteristics; the other is accounting for the correlation structure among the various responses, which makes the improved multivariate loss function more adequate in the real world. On this basis, an example from industrial practice is provided to compare our improved method with other methods, and finally some concluding remarks are presented.
基金supported by the Excellent Youth Talents Foundation of University of Anhui (Grant Nos.2011SQRL127 and 2012SQRL028ZD)
Abstract: A generalization of Zellner's balanced loss function is proposed. General admissibility in a general multivariate linear model is investigated under the generalized balanced loss function, and sufficient and necessary conditions for linear estimators to be generally admissible in the classes of homogeneous and nonhomogeneous linear estimators, respectively, are given.
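For context, Zellner's original balanced loss in the linear model $y = X\beta + \varepsilon$ is commonly written as follows (the paper's generalization modifies this form; the weighting shown here is the standard textbook version, stated as background rather than as the paper's definition):

```latex
L(\delta;\beta) \;=\; \omega\,(y - X\delta)^{\top}(y - X\delta)
\;+\; (1-\omega)\,(\delta - \beta)^{\top}X^{\top}X\,(\delta - \beta),
\qquad 0 \le \omega \le 1,
```

where the first term measures goodness of fit and the second measures precision of estimation, with $\omega$ balancing the two criteria.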
基金supported by the Beijing Natural Science Foundation(4222031)the National Natural Science Foundation of China(61976024,61972048)Beijing University of Posts and Telecommunications(BUPT)Innovation and Entrepreneurship Support Program(2021-YC-A206)
Abstract: The soft-margin support vector machine (SVM) with hinge loss is an important classification algorithm that has been widely used in image recognition, text classification, and so on. However, solving the soft-margin SVM with hinge loss generally entails the sub-gradient projection algorithm, which is very time-consuming on big training data sets. To address this, an efficient quantum algorithm is proposed. Specifically, the algorithm implements the key task of the sub-gradient projection method, obtaining the classical sub-gradients in each iteration, mainly via quantum amplitude estimation and amplification and a controlled rotation operator. Compared with its classical counterpart, the algorithm achieves a quadratic speedup in the number of training data points. It is worth emphasizing that the optimal model parameters obtained by this algorithm are in classical form rather than in quantum-state form, which enables the algorithm to classify new data at little cost once the optimal model parameters are determined.
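The classical sub-gradient step that the quantum routine accelerates can be sketched as follows (a generic soft-margin SVM sub-gradient iteration, not the paper's quantum algorithm; names and the toy data are illustrative):

```python
def hinge_objective(w, b, X, y, C):
    """Soft-margin SVM objective:
    0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w.x_i + b))."""
    reg = 0.5 * sum(wj * wj for wj in w)
    hinge = sum(max(0.0, 1.0 - yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b))
                for xi, yi in zip(X, y))
    return reg + C * hinge

def subgradient_step(w, b, X, y, C, lr):
    """One sub-gradient descent step: the regularizer contributes w, and
    each margin-violating sample (y_i*(w.x_i + b) < 1) contributes
    -C*y_i*x_i to the sub-gradient in w and -C*y_i in b."""
    gw = list(w)  # gradient of 0.5*||w||^2 is w
    gb = 0.0
    for xi, yi in zip(X, y):
        if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) < 1.0:
            for j in range(len(w)):
                gw[j] -= C * yi * xi[j]
            gb -= C * yi
    return [wj - lr * g for wj, g in zip(w, gw)], b - lr * gb
```

Each iteration touches every training point, which is the linear-in-data cost the quantum sub-gradient estimation quadratically improves.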
Abstract: Purpose - We propose a machine learning (ML) approach that is trained on the available financial data, learns the trends in the data, and then uses the acquired knowledge for more accurate forecasting of financial series. This work provides more precise results than older financial series forecasting algorithms. The LSTM Classic is used to forecast the momentum of a financial series index and is also applied to its commodities. The network is trained and evaluated for accuracy on data sets of various sizes, i.e. weekly historical data of MCX, GOLD and COPPER, and the results are calculated. Design/methodology/approach - We seek an LSTM model for script price forecasting that minimizes MSE. The approach we followed is: (1) acquire the dataset; (2) define the training and testing columns in the dataset; (3) transform the input values with a scaler; (4) define the custom loss function; (5) build and compile the model; (6) visualize the improvements in the results. Findings - Financial trading is one of the oldest practices, in which traders buy and sell the financial scripts of companies that sell part of their business on a trading platform. Forecasting script prices is a complex task involving extensive human-computer interaction. Because of the correlated nature of financial series prices, conventional batch-processing methods such as artificial neural networks and convolutional neural networks cannot be used efficiently for financial market analysis. We propose an online learning algorithm that uses an upgraded recurrent neural network called the long short-term memory Classic (LSTM Classic). The LSTM Classic differs from a normal LSTM in that it has a customized loss function. The LSTM Classic avoids long-term dependence issues because of its unique internal storage-unit structure, which helps in forecasting financial time series. A financial series index is a combination of various commodities (time series), which makes it more reliable than an individual financial time series, since its value does not change drastically even when some of its commodities are affected. Originality/value - We built the customized-loss-function model using the LSTM scheme, experimented on the MCX index as well as its commodities, and calculated the improvement in results for every epoch run over all rows of the dataset; for every epoch the improvement in loss can be visualized. One further improvement to our model would recognize that the relationship between price difference and directional loss is specific to each financial script; deeper evaluation could identify the best combination for a particular stock to obtain better results.
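The abstract mentions a customized loss relating price difference and directional loss but does not give its form. One plausible sketch of such a direction-aware MSE (the function name, the `penalty` factor, and the structure are all illustrative assumptions, not the paper's actual loss) is:

```python
def directional_mse(y_true, y_pred, penalty=2.0):
    """Mean squared error with an extra multiplicative penalty whenever the
    predicted price direction (up/down vs. the previous value) disagrees
    with the actual direction. `penalty` is an illustrative hyperparameter."""
    total = 0.0
    for t in range(1, len(y_true)):
        err = (y_true[t] - y_pred[t]) ** 2
        true_dir = y_true[t] - y_true[t - 1]
        pred_dir = y_pred[t] - y_pred[t - 1]
        if true_dir * pred_dir < 0:  # predicted the wrong direction
            err *= penalty
        total += err
    return total / (len(y_true) - 1)
```

Such a loss keeps the MSE objective the model is compiled with but makes direction mistakes, which matter most for trading decisions, cost more than same-magnitude errors in the right direction.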