Funding: This research was supported by grants from the Ph.D. Programs Foundation of Henan Polytechnic University (B2016-38).
Abstract: Hearing loss (HL) is a common illness that can significantly reduce quality of life; for example, it often results in mishearing, misunderstanding, and communication problems. Early diagnosis and timely treatment of HL are therefore essential. This study investigated the advantages and disadvantages of three classical machine learning methods, the multilayer perceptron (MLP), the support vector machine (SVM), and the least-square support vector machine (LS-SVM), and further optimized the LS-SVM model via wavelet entropy. The investigation showed that the multilayer perceptron is a shallow neural network, while the least-square support vector machine replaces the hinge loss of the standard SVM with a squared error loss and a least-squares optimization method. In addition, a wavelet selection method was proposed, and the db4 wavelet was found to achieve the best results. Experiments showed that the LS-SVM method can identify hearing loss with an overall three-class accuracy of 84.89±1.77%, which is superior to the SVM and MLP. The results show that the least-square support vector machine is effective in hearing loss identification.
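The distinction between a standard SVM and an LS-SVM can be made concrete: the LS-SVM replaces the SVM's inequality constraints with equality constraints, so training reduces to solving a single linear system rather than a quadratic program. A minimal NumPy sketch of the LS-SVM dual formulation (the RBF kernel, hyperparameters, and toy two-cluster data are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Pairwise squared Euclidean distances -> Gaussian (RBF) kernel matrix
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_train(X, y, C=10.0, gamma=1.0):
    # LS-SVM dual: solve [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, alpha, b, X_new, gamma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha + b)

# Two well-separated clusters with labels -1 / +1
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_train(X, y)
print((lssvm_predict(X, alpha, b, X) == y).mean())  # training accuracy
```

Because every training point becomes a support vector with a nonzero dual weight, LS-SVM trades the SVM's sparsity for a closed-form linear solve, which is part of the speed/accuracy trade-off the study compares.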
Abstract: By exponentiating each component of a finite mixture of two exponential components by a positive parameter, several shapes of hazard rate functions are obtained. Maximum likelihood and Bayes methods, the latter based on the squared error loss function and an objective prior, are used to obtain estimators under the balanced squared error loss function for the parameters, survival function, and hazard rate function of a mixture of two exponentiated exponential components. Approximate interval estimators of the model parameters are also obtained.
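The balanced squared error loss mentioned above, L(θ, δ) = ω(δ − δ₀)² + (1 − ω)(δ − θ)², yields a Bayes estimator that is a convex combination of a target estimator δ₀ (often the MLE) and the posterior mean. A hedged sketch for the simplest building block, a single exponential rate with a conjugate gamma prior (the prior values, weight ω, and simulated data are illustrative, not the paper's mixture model):

```python
import numpy as np

def balanced_bayes_estimate(x, a, b, delta0, w):
    # Gamma(a, b) prior on the exponential rate theta: the posterior is
    # Gamma(a + n, b + sum(x)), with posterior mean (a + n) / (b + sum(x)).
    post_mean = (a + len(x)) / (b + np.sum(x))
    # Bayes rule under balanced squared error loss:
    #   delta = w * delta0 + (1 - w) * posterior mean
    return w * delta0 + (1 - w) * post_mean

rng = np.random.default_rng(1)
x = rng.exponential(scale=1 / 2.0, size=50)   # true rate theta = 2
mle = 1 / x.mean()                            # target estimator delta0
est = balanced_bayes_estimate(x, a=2.0, b=1.0, delta0=mle, w=0.3)
print(round(est, 3))
```

Setting ω = 0 recovers the usual posterior-mean estimator, while ω = 1 returns the target estimator unchanged, which is why the balanced loss is described as interpolating between frequentist and Bayes answers.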
Abstract: Deep learning models trained for network intrusion detection tend to overfit, reducing test-set accuracy, because of the convergence problems of the traditional loss function. First, we use a network architecture that combines the GELU activation function with a deep neural network. Second, the cross-entropy loss function is improved to a weighted cross-entropy loss function, which is then applied to intrusion detection to improve its accuracy. To compare experimental results, the KDD Cup 99 dataset, which is commonly used in intrusion detection, is selected as the experimental data, with accuracy, precision, recall, and F1-score as evaluation metrics. The experimental results show that, under the deep neural network architecture, the model using the weighted cross-entropy loss function combined with the GELU activation function improves the evaluation metrics by about 2% compared with the ordinary cross-entropy loss model. The experiments demonstrate that the weighted cross-entropy loss function can enhance the model's ability to discriminate between samples.
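The weighted cross-entropy idea can be sketched independently of the paper's architecture: each sample's negative log-likelihood is scaled by a per-class weight, typically larger for rare classes, so minority attack classes are not drowned out by normal traffic. A minimal NumPy sketch (the logits and weight values here are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def weighted_cross_entropy(logits, labels, class_weights):
    # Mean over the batch of class_weights[y_i] * (-log p_i[y_i])
    p = softmax(logits)
    n = len(labels)
    nll = -np.log(p[np.arange(n), labels] + 1e-12)
    return (class_weights[labels] * nll).mean()

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3],
                   [0.1, 0.2, 2.5]])
labels = np.array([0, 1, 2])
uniform = np.ones(3)                        # reduces to ordinary cross-entropy
upweight_rare = np.array([1.0, 1.0, 5.0])   # e.g., rare attack class 2
print(weighted_cross_entropy(logits, labels, uniform))
print(weighted_cross_entropy(logits, labels, upweight_rare))
```

With uniform weights the function reduces to the ordinary cross-entropy; upweighting a class increases its contribution to the gradient, which is the mechanism behind the reported improvement in discriminating samples.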
Funding: The authors are grateful to the Deanship of Scientific Research at Prince Sattam bin Abdulaziz University for Supporting Project Number (2020/01/16725), Prince Sattam bin Abdulaziz University, Saudi Arabia.
Abstract: The Weibull distribution is regarded as among the finest in the family of failure distributions. One of the most commonly used techniques for estimating the parameters of the Weibull distribution (WD) is ordinary least squares (OLS), which is useful in reliability and lifetime modeling. In this study, we propose an approach based on ordinary least squares and a multilayer perceptron (MLP) neural network, called OLSMLP, which builds on the resilience of the OLS method. The MLP addresses the heteroscedasticity that distorts the estimation of the WD parameters in the presence of outliers, and eases the difficulty of determining weights required by weighted least squares (WLS). A second method is proposed that incorporates a weight into the general entropy (GE) loss function for estimating the WD parameters, yielding a modified loss function (WGE). Furthermore, a Monte Carlo simulation is performed to examine the performance of the proposed OLSMLP method in comparison with approximate Bayesian estimation (BLWGE) using the weighted GE loss function. The simulation results show that the two proposed methods produce good estimates even for small sample sizes. In addition, the proposed techniques are typically preferable to other available methods for parameter estimation, in terms of both mean squared error and computation time.
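The OLS route to Weibull parameters linearizes the CDF: with F(x) = 1 − exp(−(x/η)^β), taking logs twice gives ln(−ln(1 − F)) = β ln x − β ln η, a straight line in ln x with slope β. A sketch using median ranks as plotting positions (Bernard's approximation is a common convention; the simulated sample and true parameters are illustrative, not the paper's experiments):

```python
import numpy as np

def weibull_ols(x):
    # Sort the data and assign median-rank plotting positions
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's approximation
    # Linearized CDF: ln(-ln(1-F)) = beta * ln(x) - beta * ln(eta)
    X = np.log(x)
    Y = np.log(-np.log(1.0 - F))
    beta, intercept = np.polyfit(X, Y, 1)          # OLS line fit
    eta = np.exp(-intercept / beta)
    return beta, eta                               # shape, scale

rng = np.random.default_rng(2)
sample = 3.0 * rng.weibull(1.5, size=500)   # true shape 1.5, true scale 3
beta_hat, eta_hat = weibull_ols(sample)
print(round(beta_hat, 2), round(eta_hat, 2))
```

The heteroscedasticity the abstract refers to arises here because the variance of Y is not constant across the ordered sample, which is what WLS weights, or the proposed MLP correction, are meant to handle.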
Funding: A.R.A. Alanzi would like to thank the Deanship of Scientific Research at Majmaah University for financial support and encouragement.
Abstract: This paper deals with Bayesian estimation of the Shannon entropy of the generalized inverse exponential distribution, assuming that the observed samples are taken under the upper record ranked set sampling (URRSS) and upper record values (URV) schemes. Formulas for the Bayesian estimators are derived using a gamma prior distribution under the squared error, linear exponential, and precautionary loss functions; Bayesian credible intervals are also obtained. The random-walk Metropolis-Hastings algorithm is used to generate Markov chain Monte Carlo samples from the posterior distribution. The behavior of the estimates is then examined at various record values. The study shows that the entropy Bayesian estimates under URRSS are more convenient than those under URV in the majority of situations, and that the entropy Bayesian estimates improve as the number of records increases. The obtained results validate the usefulness and efficiency of the URRSS method. Real data are analyzed for further clarification, validating the theoretical results.
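The random-walk Metropolis-Hastings step used above can be sketched generically: propose θ′ = θ + ε with Gaussian noise and accept with probability min(1, π(θ′)/π(θ)). A toy sketch that targets a conjugate posterior so the answer can be checked in closed form (the exponential data and gamma prior are illustrative assumptions, not the paper's record-value model):

```python
import numpy as np

def rw_metropolis(log_post, theta0, step, n_iter, rng):
    # Random-walk Metropolis-Hastings on a scalar parameter; returns the chain
    chain = np.empty(n_iter)
    theta, lp = theta0, log_post(theta0)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

rng = np.random.default_rng(3)
x = rng.exponential(scale=1 / 2.0, size=100)       # true rate 2
a, b = 2.0, 1.0                                    # Gamma(a, b) prior

def log_post(theta):
    # Log posterior of an exponential rate with a gamma prior (up to a constant)
    if theta <= 0:
        return -np.inf
    return (a + len(x) - 1) * np.log(theta) - (b + x.sum()) * theta

chain = rw_metropolis(log_post, theta0=1.0, step=0.3, n_iter=20000, rng=rng)
burned = chain[5000:]                              # discard burn-in
exact = (a + len(x)) / (b + x.sum())               # conjugate posterior mean
print(round(burned.mean(), 3), round(exact, 3))
```

In the paper's setting the same loop would run with the record-value likelihood in `log_post`; the posterior draws are then plugged into the entropy formula to obtain the Bayesian entropy estimates.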
Abstract: This paper considers Bayes and hierarchical Bayes approaches for analyzing clinical data on response times when values of one or more concomitant variables are available. Response times are assumed to follow simple exponential distributions, with a different parameter for each patient. The analyses are carried out under progressive censoring, assuming a squared error loss function and gamma distributions as priors and hyperpriors. The possibility of using the methodology in more general situations, such as dose-response modeling, is also explored. The Bayesian estimators derived in this paper are applied to a lung cancer data set with concomitant variables.
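For exponential response times with a Gamma(a, b) prior and squared error loss, the Bayes estimator of the rate is the posterior mean: with d observed failures and total time on test T (censored patients contribute their censoring times to T), the posterior is Gamma(a + d, b + T). A hedged sketch (simple administrative right censoring stands in here for the paper's more general progressive censoring, and the prior values are illustrative):

```python
import numpy as np

def bayes_rate_censored(times, observed, a, b):
    # times: follow-up times; observed[i] True if failure, False if censored
    d = int(np.sum(observed))        # number of observed failures
    T = float(np.sum(times))         # total time on test
    # Posterior is Gamma(a + d, b + T); under squared error loss the
    # Bayes estimator is the posterior mean:
    return (a + d) / (b + T)

rng = np.random.default_rng(4)
true_rate = 0.5
t = rng.exponential(scale=1 / true_rate, size=200)
c = np.full_like(t, 3.0)             # administrative censoring at time 3
times = np.minimum(t, c)
observed = t <= c
est = bayes_rate_censored(times, observed, a=1.0, b=1.0)
print(round(est, 3))
```

The hierarchical version replaces the fixed (a, b) with hyperpriors, allowing the per-patient rates to borrow strength from one another, and the concomitant variables enter through a regression on the individual rates.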
Abstract: Industrial data often contain anomalies caused by technical failures and human factors. Existing constraint-based methods produce repair errors when constraint thresholds are set too loosely or too strictly, and statistics-based methods, owing to their smoothing repair mechanism, repair outliers at distant time steps with low accuracy. To address these problems, a time-series data repair method is proposed that combines reward-based minimal-iteration repair with an improved WGAN hybrid model. First, in the preprocessing stage, anomalous data are retained and annotated so that the feature constraints between outliers and true values can be fully exploited. Second, a nearest-neighbor parameter clipping rule is proposed in the noise module to correct the noise vector generated by the minimal-iteration repair formula, and the corrected vector is passed to the generator of the distribution-simulation module. A dynamic temporal attention network layer is designed to extract time-series feature weights and is combined in series with gated recurrent units to capture feature dependencies across different step lengths, and a recursive multi-step prediction principle is introduced to further improve the model's expressive power. In the discriminator, an Abnormal and Truth reward mechanism and a Weighted Mean Square Error loss function are designed to jointly optimize, through backpropagation, the detail and quality of the data repaired by the generator. Finally, experimental results on public and real-world datasets show that the repair accuracy and model stability of this method are significantly better than those of existing methods.
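One concrete piece of the discriminator design above, the Weighted Mean Square Error loss, can be sketched on its own: repaired points flagged as anomalous receive a larger weight than points that were already trusted, so the generator is pushed hardest where the repair matters most. A minimal sketch (the specific weighting scheme is an illustrative assumption, not the paper's exact formula):

```python
import numpy as np

def weighted_mse(repaired, truth, anomaly_mask, w_anomaly=4.0, w_normal=1.0):
    # Per-point weights: emphasize positions that were flagged as anomalous
    w = np.where(anomaly_mask, w_anomaly, w_normal)
    return float(np.average((repaired - truth) ** 2, weights=w))

truth    = np.array([1.0, 1.1, 0.9, 1.0, 1.2])
repaired = np.array([1.0, 1.1, 0.9, 1.6, 1.2])   # residual error at index 3
mask     = np.array([False, False, False, True, False])
print(weighted_mse(repaired, truth, mask))                # anomaly-weighted
print(weighted_mse(repaired, truth, np.zeros(5, bool)))   # plain MSE
```

Because the only residual error sits at the flagged position, the weighted loss exceeds the plain MSE, amplifying the gradient signal at exactly the points the repair mechanism targets.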