The existing problem with deep learning frameworks for the detection and segmentation of electrical equipment is predominantly low precision. Because deep learning-based video surveillance provides a reliable, safe and easy-to-operate technology for the unmanned inspection of electrical equipment, this paper uses the bottleneck attention module (BAM) attention mechanism to improve the SOLOv2 model and proposes a new electrical equipment segmentation model. Firstly, the BAM attention mechanism is integrated into the feature extraction network to adaptively learn the correlation between feature channels, thereby improving the expressive ability of the feature maps; secondly, the weighted sum of cross-entropy loss and Dice loss is designed as the mask loss to improve the segmentation accuracy and robustness of the model; finally, the non-maximum suppression (NMS) algorithm is improved to better handle the overlap problem in instance segmentation. Experimental results show that the proposed method achieves an average segmentation accuracy (mAP) of 80.4% on a dataset covering three types of electrical equipment (transformers, insulators and voltage transformers), improving detection accuracy by more than 5.7% compared with the original SOLOv2 model. The proposed segmentation model can provide a practical technical means for the intelligent management of power systems.
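A minimal sketch of a weighted cross-entropy plus Dice mask loss of the kind described above, written in PyTorch. The abstract does not give the actual weighting, so the balance factor lam and the per-pixel binary formulation are assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F

def dice_loss(prob, target, eps=1e-6):
    # Soft Dice loss over per-instance binary masks of shape (N, H, W).
    inter = (prob * target).sum(dim=(1, 2))
    union = prob.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

def mask_loss(mask_logits, target, lam=0.5):
    # Weighted sum of binary cross-entropy and Dice loss; lam is a hypothetical balance weight.
    prob = torch.sigmoid(mask_logits)
    bce = F.binary_cross_entropy(prob, target)
    return lam * bce + (1.0 - lam) * dice_loss(prob, target)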
In this paper, isogeometric analysis (IGA) is employed to develop an acoustic radiation model for a double plate-acoustic cavity coupling system, with a focus on analyzing the sound transmission loss (STL). The functionally graded (FG) plate exhibits material properties that vary in-plane, and the power-law rule is adopted as the governing principle for material mixing. To validate the harmonic response and demonstrate the accuracy and convergence of the isogeometric modeling, ANSYS is used for comparison with the numerical examples. A plane wave serves as the acoustic excitation, and the Rayleigh integral is applied to discretize the radiated plate. The STL results are compared with the literature, confirming the reliability of the coupling system model. Finally, an investigation is conducted to study the impact of cavity depth and the power-law parameter on the STL.
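For orientation, a common form of the power-law mixing rule for a functionally graded plate is sketched below; the in-plane coordinate x, the grading length a and the symbols P_c, P_m are assumptions, since the abstract does not state the exact parameterization used.

P(x) = P_m + (P_c - P_m)\left(\frac{x}{a}\right)^{p}

Here P_c and P_m are the properties of the two constituent materials, a is the in-plane dimension along which the grading occurs, and p is the power-law parameter whose influence on the STL is studied.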
The LINEX (linear and exponential) loss function is a useful asymmetric loss function. The purpose of using a LINEX loss function in credibility models is to address the very high premiums produced by the symmetric quadratic loss function used in most classical credibility models. The Bayes premium and the credibility premium are derived under the LINEX loss function. The consistency of the Bayes premium and the credibility premium is also checked. Finally, a simulation is presented to show the differences between the credibility estimator derived here and the classical one.
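For reference, the standard form of the LINEX loss and the resulting Bayes estimator (a well-known general result, not reproduced from this paper) are:

L(\Delta) = b\left(e^{a\Delta} - a\Delta - 1\right), \qquad \Delta = \hat{\theta} - \theta,\; a \neq 0,\; b > 0,

and the Bayes estimator minimizing the posterior expected LINEX loss is

\hat{\theta}_{B} = -\frac{1}{a}\,\ln E\!\left(e^{-a\theta} \mid \text{data}\right).

The sign of a controls whether overestimation or underestimation is penalized more heavily, which is what makes the loss attractive for premium calculation.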
When a deep learning model is trained on network intrusion detection data, it overfits and the accuracy on the test set decreases, due to convergence problems with the traditional loss function. Firstly, we utilize a network model architecture combining the GELU activation function and a deep neural network; secondly, the cross-entropy loss function is improved to a weighted cross-entropy loss function; finally, it is applied to intrusion detection to improve detection accuracy. To compare experimental results, the KDDcup99 dataset, which is commonly used in intrusion detection, is selected as the experimental data, and accuracy, precision, recall and F1-score are used as evaluation metrics. The experimental results show that the model using the weighted cross-entropy loss function combined with the GELU activation function under the deep neural network architecture improves the evaluation metrics by about 2% compared with the ordinary cross-entropy loss function model. The experiments prove that the weighted cross-entropy loss function can enhance the model's ability to discriminate samples.
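A minimal PyTorch sketch of the combination described above: a small fully connected network with GELU activations trained with a class-weighted cross-entropy loss. The layer sizes and class weights below are placeholders, not values taken from the paper.

import torch
import torch.nn as nn

class IntrusionNet(nn.Module):
    # Hypothetical deep neural network with GELU activations for intrusion detection.
    def __init__(self, in_dim=41, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.GELU(),
            nn.Linear(128, 64), nn.GELU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Class weights (assumed values) up-weight the rarer attack classes.
class_weights = torch.tensor([0.5, 1.0, 2.0, 2.0, 4.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)
model = IntrusionNet()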
AIM: To explore the effects and mechanism of action of the antidepressant mirtazapine in functional dyspepsia (FD) patients with weight loss. METHODS: Sixty depressive FD patients with weight loss were randomly divided into a mirtazapine group (MG), a paroxetine group (PG) or a conventional therapy group (CG) for an 8-wk clinical trial. Adverse effects and treatment response were recorded. The Nepean Dyspepsia Index-symptom (NDSI) checklist and the 17-item Hamilton Rating Scale of Depression (HAMD-17) were used to evaluate dyspepsia and depressive symptoms, respectively. A body composition analyzer was used to measure body weight and fat. Serum hormone levels were measured by ELISA. RESULTS: (1) After 2 wk of treatment, NDSI scores were significantly lower for the MG than for the PG and CG; (2) after 4 or 8 wk of treatment, HAMD-17 scores were significantly lower for the MG and PG than for the CG; (3) after 8 wk of treatment, patients in the MG experienced a weight gain of 3.58 ± 1.57 kg, which was significantly higher than that observed for patients in the PG and CG; body fat increased by 2.77 ± 0.14 kg, the body fat ratio rose by 4%, and the visceral fat area increased by 7.56 ± 2.25 cm²; and (4) for the MG, serum levels of ghrelin, neuropeptide Y (NPY), motilin (MTL) and gastrin (GAS) were significantly upregulated; in contrast, those of leptin, 5-hydroxytryptamine (5-HT) and cholecystokinin (CCK) were significantly downregulated. CONCLUSION: Mirtazapine not only alleviates symptoms associated with dyspepsia and depression linked to FD in patients with weight loss but also significantly increases body weight (mainly visceral fat). The likely mechanism of mirtazapine action is regulation of brain-gut or gastrointestinal hormone levels.
Much research effort has been devoted to the economic design of X & S control charts; however, there are some problems with the usual methods. On the one hand, it is difficult to estimate the relationship between costs and other model parameters, so the economic design method is often not effective in producing charts that can quickly detect small shifts before substantial losses occur; on the other hand, in many cases only one type of process shift, or only one pair of process shifts, is taken into consideration, which may not correctly reflect the actual process conditions. To improve the behavior of the economic design of control charts, a cost & loss model with Taguchi's loss function for the economic design of X & S control charts is developed and treated as an optimization problem with multiple statistical constraints. The optimization design is also carried out based on a number of combinations of process shifts collected from the field operation of conventional control charts, so more hidden information about the shift combinations is mined and employed in the optimization design of the control charts. At the same time, an improved particle swarm optimization (IPSO) algorithm is developed to solve this optimization problem in the design of X & S control charts. IPSO is first tested on several benchmark problems from the literature and evaluated with standard performance metrics. Experimental results show that the proposed algorithm has significant advantages in obtaining the optimal design parameters of the charts. The proposed method can substantially reduce the total cost (or loss) of the control charts, and it will be a promising tool for the economic design of control charts.
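For context, Taguchi's quadratic loss function, on which the cost & loss model is built, has the well-known form

L(y) = k\,(y - m)^{2},

so that for a process with mean \mu and variance \sigma^{2} the expected loss per item is

E[L] = k\left[\sigma^{2} + (\mu - m)^{2}\right],

where m is the target value and k a cost coefficient. This is the standard textbook form; the specific cost structure used in the paper's model is not given in the abstract.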
The effective energy loss functions for Al have been derived from the differential inverse inelastic mean free path based on the extended Landau approach. It has been revealed that the effective energy loss function is very close in value to the theoretical surface energy loss function in the lower energy-loss region but gradually approaches the theoretical bulk energy loss function in the higher energy-loss region. Moreover, the intensity corresponding to surface excitation in the effective energy loss functions decreases with increasing primary electron energy. These facts show that the present effective energy loss function describes not only surface excitation but also bulk excitation. Finally, REELS spectra simulated by a Monte Carlo method based on the effective energy loss functions reproduce the experimental REELS spectra with considerable success.
Neyman-Pearson classification has been studied in several articles before, but they all proceeded in classes of indicator functions with the indicator function as the loss function, which makes the calculation difficult. This paper investigates Neyman-Pearson classification with a convex loss function in an arbitrary class of real measurable functions. A general condition is given under which Neyman-Pearson classification with a convex loss function has the same classifier as that with the indicator loss function. We analyze NP-ERM with a convex loss function and prove its performance guarantees. An example of a complexity penalty pair for the convex-loss risk in terms of Rademacher averages is studied, which produces a tight PAC bound for NP-ERM with a convex loss function.
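For orientation, the Neyman-Pearson paradigm treats the two error types asymmetrically. Writing R_0(f) and R_1(f) for the risks conditioned on the two classes (generic notation, not necessarily the paper's), the target classifier solves

\min_{f \in \mathcal{F}} \; R_1(f) \quad \text{subject to} \quad R_0(f) \le \alpha,

and NP-ERM replaces R_0 and R_1 by their empirical counterparts, typically with a small tolerance added to the constraint. Replacing the indicator loss inside R_0, R_1 by a convex surrogate is what makes the resulting optimization tractable.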
Recently, the evolution of Generative Adversarial Networks (GANs) has embarked on a journey of revolutionizing the field of artificial and computational intelligence. To improve the generating ability of GANs, various loss functions have been introduced to measure the degree of similarity between the samples generated by the generator and the real data samples, and thereby to improve the generating ability of GANs. In this paper, we present a detailed survey of the loss functions used in GANs and provide a critical analysis of the pros and cons of these loss functions. First, the basic theory of GANs along with the training mechanism is introduced. Then, the most commonly used loss functions in GANs are introduced and analyzed. Third, experimental analyses and comparisons of these loss functions are presented for different GAN architectures. Finally, several suggestions on choosing suitable loss functions for image synthesis tasks are given.
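As background for the survey, the original minimax GAN objective that most of the surveyed losses modify is

\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\!\left[\log\!\left(1 - D(G(z))\right)\right],

where D is the discriminator, G the generator and p_z the latent prior. Alternative losses (e.g. least-squares or Wasserstein-style objectives) replace the log terms with other measures of similarity between generated and real samples; the specific variants compared in the paper are not listed in the abstract.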
We present a fitting calculation of the energy-loss function for 26 bulk materials, including 18 pure elements (Ag, Al, Au, C, Co, Cs, Cu, Er, Fe, Ge, Mg, Mo, Nb, Ni, Pd, Pt, Si, Te) and 8 compounds (AgCl, Al2O3, AlAs, CdS, SiO2, ZnS, ZnSe, ZnTe) for application to surface electron spectroscopy analysis. The experimental energy-loss function, which is derived from measured optical data, is fitted to a finite sum of terms based on the Drude-Lindhard dielectric model. By checking the oscillator-strength sum and perfect-screening sum rules, we have validated the high accuracy of the fitting results. Furthermore, based on the fitted parameters, the simulated reflection electron energy-loss spectroscopy (REELS) spectrum shows good agreement with experiment. The calculated fitting parameters of the energy-loss function are stored in an open online database at http://micro.ustc.edu.cn/ELF/ELF.html.
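The finite-sum fit mentioned above is commonly written as a superposition of Drude-type oscillators; the generic form below is given for orientation and may differ in detail from the parameterization used by the authors:

\mathrm{Im}\!\left[\frac{-1}{\varepsilon(\omega)}\right] \approx \sum_{i} \frac{A_i\, \gamma_i\, \omega}{\left(\omega_i^{2} - \omega^{2}\right)^{2} + \gamma_i^{2}\,\omega^{2}},

where A_i, \gamma_i and \omega_i are the strength, width and energy of the i-th oscillator fitted to the optical data.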
In this paper, the MLINEX loss function is considered to solve the problem of high premiums in credibility models. The Bayes premium and the credibility premium are obtained under the MLINEX loss function by using a symmetric quadratic loss function. A credibility model with multiple contracts is established and the corresponding credibility estimator is derived under the MLINEX loss function. For this model, the estimation of the structure parameters and a numerical example are also given.
With the continuous development of face recognition networks, the selection of the loss function plays an increasingly important role in improving accuracy. The loss function of a face recognition network needs to minimize the intra-class distance while expanding the inter-class distance. So far, one mainstream loss function optimization method is to add penalty terms, such as orthogonal loss, to further constrain the original loss function. Another is to optimize using losses based on an angular/cosine margin. The last is triplet loss and a new type of joint optimization based on HST Loss and ACT Loss. In this paper, based on these three methods with good practical performance and the joint optimization method, various loss functions are thoroughly reviewed.
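As one concrete example of the loss families reviewed, a minimal PyTorch sketch of the triplet loss is given below; the margin value is an assumption, and the angular/cosine-margin losses follow a different, softmax-based formulation not shown here.

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Embeddings of shape (N, D); pulls anchor-positive pairs together
    # and pushes anchor-negative pairs at least `margin` further apart.
    d_ap = (anchor - positive).pow(2).sum(dim=1)
    d_an = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_ap - d_an + margin).mean()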
Plateau forests play an important role in the high-altitude ecosystem and contribute to the global carbon cycle. Plateau forest monitoring requires in-situ data from field investigation. With the recent development of remote sensing techniques, large-scale satellite data have become available for surface monitoring. Due to the varied information contained in remote sensing data, obtaining accurate plateau forest segmentation from remote sensing imagery still remains challenging. Recently developed deep learning (DL) models such as deep convolutional neural networks (CNNs) have been widely used in image processing tasks and show promise for remote sensing segmentation. However, due to the unique characteristics and growing environment of plateau forests, generating features with high robustness requires structures designed for robustness. Aiming at the problem that existing deep learning segmentation methods have difficulty generating accurate boundaries of plateau forests within satellite imagery, we propose a method that uses boundary feature maps for collaborative learning. There are three improvements in this article. First, we design a multi-input model for plateau forest segmentation, including the boundary feature map as an additional input label to increase the amount of information at the input. Second, we apply a strong boundary search algorithm to obtain boundary values and propose a boundary value loss function (see the sketch after this abstract). Third, we improve the U-Net segmentation network and combine it with dense blocks to improve feature reuse and reduce the loss of image information during training. We then demonstrate the utility of our method by detecting plateau forest regions from ZY-3 satellite imagery of the Sanjiangyuan nature reserve. The experimental results show that the proposed method can comprehensively utilize multiple sources of feature information, which is beneficial for extracting information from boundaries, and the detection accuracy is generally higher than that of several state-of-the-art algorithms. As a result of this investigation, the study will contribute in several ways to our understanding of DL for region detection and will provide a basis for further research.
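A minimal sketch of how a boundary term can be combined with the ordinary segmentation loss, in the spirit of the boundary value loss described above. The paper's exact formulation is not given in the abstract, so the binary cross-entropy boundary term and the balance weight beta are assumptions.

import torch
import torch.nn.functional as F

def forest_segmentation_loss(seg_logits, seg_labels, boundary_logits, boundary_map, beta=0.5):
    # seg_logits: (N, C, H, W) class scores; seg_labels: (N, H, W) integer labels.
    # boundary_logits / boundary_map: (N, H, W) predicted and reference boundary maps.
    region_term = F.cross_entropy(seg_logits, seg_labels)
    boundary_term = F.binary_cross_entropy_with_logits(boundary_logits, boundary_map)
    return region_term + beta * boundary_term  # beta is a hypothetical balance weight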
In this paper, we show that many risk measures arising in actuarial science, finance, medicine, welfare analysis, etc. are gathered in classes of Weighted Mean Loss or Gain (WMLG) statistics. Some of them are Upper Threshold Based (UTH) or Lower Threshold Based (LTH). These statistics may be time-dependent when the scene is monitored over time, and they depend on specific functions w and d. This paper provides time-dependent and uniformly functional weak asymptotic laws that allow temporal and spatial studies of the risk as well as comparisons among statistics in terms of dependence and mutual influence. The results are particularized for usual statistics like the Kakwani and Shorrocks ones that are mainly used in welfare analysis. Data-driven applications based on pseudo-panel data are provided.
Prakash and Singh presented shrinkage testimators under the invariant version of the LINEX loss function for the scale parameter of an exponential distribution in the presence of Type-II censored data. In this paper, we extend this approach to the gamma distribution, so that Prakash and Singh's setting is a special case of the present work. In fact, some shrinkage testimators for the scale parameter of a gamma distribution, when Type-II censored data are available, are suggested under the LINEX loss function, assuming the shape parameter to be known. The proposed testimators are compared with an improved estimator. All these estimators are compared empirically using Monte Carlo simulation.
Reliability analysis is the key to evaluating software quality. Since the early 1970s, the Power Law Process, among others, has been used to assess the rate of change of software reliability as a time-varying function through its intensity function. The applicability of Bayesian analysis to the Power Law Process is justified using real software failure times. The choice of a loss function is an important element of the Bayesian setting. Analytical likelihood-based Bayesian reliability estimates of the Power Law Process under the squared error and Higgins-Tsokos loss functions were obtained for different prior knowledge of its key parameter. As a result of a simulation analysis and using real data, the Bayesian reliability estimate under the Higgins-Tsokos loss function is not only as robust as the Bayesian reliability estimate under the squared error loss function but also performs better, and both are superior to the maximum likelihood reliability estimate. A sensitivity analysis showed that the Bayesian estimate of the reliability function is sensitive to the prior, whether parametric or non-parametric, and to the loss function. An interactive user interface application was additionally developed using the Wolfram Language to compute and visualize the Bayesian and maximum likelihood estimates of the intensity and reliability functions of the Power Law Process for given data.
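For reference, the Power Law Process is the nonhomogeneous Poisson process whose intensity and mission reliability are usually written as (standard forms, not reproduced from the paper):

\lambda(t) = \frac{\beta}{\theta}\left(\frac{t}{\theta}\right)^{\beta - 1}, \qquad
R(s \mid t) = \exp\!\left[-\left(\left(\frac{t + s}{\theta}\right)^{\beta} - \left(\frac{t}{\theta}\right)^{\beta}\right)\right],

where \theta is the scale parameter, \beta the shape (growth) parameter, and R(s \mid t) the probability of no failure in the mission interval (t, t + s]. Reliability growth corresponds to \beta < 1, i.e. a decreasing intensity over time.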
Deep learning techniques have significantly improved image restoration tasks in recent years. As a crucial component of deep learning, the loss function plays a key role in network optimization and performance enhancement. However, the currently prevalent loss functions assign equal weight to each pixel during loss calculation, which hampers the ability to reflect the roles of different pixels and fails to fully exploit the image's characteristics. To address this issue, this study proposes an asymmetric loss function based on the image and data characteristics of the image recovery task. This novel loss function adjusts the weight of the reconstruction loss according to the grey value of each pixel, thereby effectively optimizing network training by differentially utilizing the grey information from the original image. Specifically, we calculate a weight factor for each pixel based on its grey value and combine it with the reconstruction loss to create a new loss function. This ensures that pixels with smaller grey values receive greater attention, improving network recovery. To verify the effectiveness of the proposed asymmetric loss function, we conducted experimental tests on the image super-resolution task. The experimental results show that the model with asymmetric loss weights improves all metrics of the processing results without increasing the training time. In the typical super-resolution network SRCNN, introducing asymmetric weights improves the peak signal-to-noise ratio (PSNR) by up to about 0.5%, improves the structural similarity index (SSIM) by up to about 0.3%, and reduces the root-mean-square error (RMSE) by up to about 1.7%, with essentially no increase in training time. In addition, we further tested the performance of the proposed method on the denoising task to verify its potential applicability to image restoration tasks.
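A minimal sketch of a grey-value-dependent weighting of the reconstruction loss, in the spirit of the asymmetric loss described above. The specific weighting formula and the strength parameter alpha are assumptions, since the abstract does not give the paper's exact expression.

import torch

def asymmetric_reconstruction_loss(restored, target, alpha=0.5):
    # target is assumed to be normalized to [0, 1]; pixels with smaller grey
    # values receive larger weights, so dark regions contribute more to the loss.
    weights = 1.0 + alpha * (1.0 - target)
    return (weights * (restored - target).abs()).mean()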
A probabilistic seismic loss assessment of RC high-rise (RCHR) buildings designed according to Eurocode 8 and located in the Southern Euro-Mediterranean zone is presented herein. The loss assessment methodology is based on a comprehensive simulation approach which takes into account ground motion (GM) uncertainty and the random effects in seismic demand, as well as in predicting the damage states (DSs). The methodology is implemented on three RCHR buildings of 20, 30 and 40 stories with a core wall structural system. The loss functions, described by a cumulative lognormal probability distribution, are obtained for two intensity levels from a large set of simulations (NLTHAs) based on 60 GM records with a wide range of magnitude (M), distance to source (R) and different site soil conditions (SS). The losses, expressed as a percentage of the building replacement cost, are obtained for the RCHR buildings. In the estimation of losses, both structural (S) and nonstructural (NS) damage for four DSs are considered. The effects of different GM characteristics (M, R and SS) on the obtained losses are investigated. Finally, the estimated performance of the RCHR buildings is checked to ensure that they fulfill the limit state requirements of Eurocode 8.
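The cumulative lognormal form used for the loss functions can be written generically as (illustrative notation, assumed here rather than taken from the paper):

F_L(l) = \Phi\!\left(\frac{\ln(l/\theta)}{\beta}\right),

where \Phi is the standard normal CDF, l the loss expressed in percent of replacement cost, \theta the median loss and \beta the lognormal standard deviation, fitted from the simulation results at each intensity level.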