Abstract: In reliability analysis, the stress-strength model is often used to describe the life of a component that has a random strength (X) and is subjected to a random stress (Y). In this paper, we consider the problem of estimating the reliability R = P[Y < X] when the distributions of both stress and strength are independent and follow the exponentiated Pareto distribution. The maximum likelihood estimator of the stress-strength reliability is calculated under the simple random sample, ranked set sampling and median ranked set sampling methods. Four different reliability estimators under median ranked set sampling are derived. Two estimators are obtained when both strength and stress have an odd or an even set size. The other two estimators are obtained when the strength has an odd set size and the stress has an even set size, and vice versa. The performances of the suggested estimators are compared with their competitors under simple random sampling via a simulation study. The simulation study revealed that the stress-strength reliability estimates based on ranked set sampling and median ranked set sampling are more efficient than their competitors based on simple random sampling. In general, the stress-strength reliability estimates based on median ranked set sampling are smaller than the corresponding estimates under the ranked set sampling and simple random sample methods. Keywords: stress-strength model, ranked set sampling, median ranked set sampling, maximum likelihood estimation, mean square error.
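For orientation only, the following is a minimal Python sketch of the simple-random-sample case, under the common parameterization F(x) = [1 - (1 + x)^(-λ)]^α of the exponentiated Pareto distribution and the simplifying assumption that both samples share a known λ, in which case R = α1/(α1 + α2) and the shape MLEs have closed forms. The RSS and MRSS likelihoods derived in the paper are more involved and are not reproduced here; all parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def rexp_pareto(n, alpha, lam, rng):
    """Exponentiated Pareto draws via the inverse CDF of F(x) = [1-(1+x)^(-lam)]^alpha."""
    u = rng.uniform(size=n)
    return (1.0 - u ** (1.0 / alpha)) ** (-1.0 / lam) - 1.0

def mle_shape(x, lam):
    """MLE of the shape alpha when the common parameter lam is treated as known."""
    t = -np.sum(np.log(1.0 - (1.0 + x) ** (-lam)))
    return x.size / t

alpha1, alpha2, lam = 2.0, 1.0, 1.5      # strength X and stress Y shapes (illustrative)
true_R = alpha1 / (alpha1 + alpha2)      # closed form when lam is shared

x = rexp_pareto(200, alpha1, lam, rng)   # strength sample (SRS)
y = rexp_pareto(200, alpha2, lam, rng)   # stress sample (SRS)
a1_hat, a2_hat = mle_shape(x, lam), mle_shape(y, lam)
R_hat = a1_hat / (a1_hat + a2_hat)       # MLE of R by invariance
print(f"true R = {true_R:.3f}, SRS MLE of R = {R_hat:.3f}")
```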
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Research Groups Program under Grant No. R.G.P.2/68/41. I.M.A. and A.I.A. received the grant.
Abstract: This article proposes two new Ranked Set Sampling (RSS) designs for estimating the population parameters: Simple Z Ranked Set Sampling (SZRSS) and Generalized Z Ranked Set Sampling (GZRSS). These designs provide unbiased estimators for the mean of symmetric distributions. It is shown that, for non-uniform symmetric distributions, the estimators of the mean under the suggested designs are more efficient than those obtained by RSS, Simple Random Sampling (SRS), extreme RSS and truncation-based RSS designs. Also, the proposed RSS schemes outperform other RSS schemes and provide more efficient estimates than their competitors under imperfect rankings. The suggested mean estimators under perfect and imperfect rankings are more efficient than the linear regression estimator under SRS. Our proposed RSS designs are also extended to cover the estimation of the population median. Real data are used to examine the usefulness and efficiency of our estimators.
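For readers unfamiliar with the baseline design that SZRSS and GZRSS build on, here is a minimal sketch of standard ranked set sampling with perfect ranking and its mean estimator, compared with SRS by Monte Carlo. The set size, the normal example and the helper names are illustrative assumptions, not the paper's proposed designs.

```python
import numpy as np

rng = np.random.default_rng(7)

def rss_sample(m, cycles, draw, rng):
    """Standard RSS with perfect ranking: in each cycle, draw m sets of m units,
    rank each set, and measure only the i-th order statistic from the i-th set."""
    measured = []
    for _ in range(cycles):
        for i in range(m):
            s = np.sort(draw(m, rng))
            measured.append(s[i])
    return np.array(measured)

draw = lambda k, rng: rng.normal(loc=10.0, scale=2.0, size=k)  # symmetric toy population
m, cycles, reps = 4, 5, 5000
n = m * cycles                                                 # measured sample size

rss_means = [rss_sample(m, cycles, draw, rng).mean() for _ in range(reps)]
srs_means = [draw(n, rng).mean() for _ in range(reps)]
print("var(SRS mean) / var(RSS mean) =",
      np.var(srs_means) / np.var(rss_means))                   # > 1: RSS is more efficient
```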
Funding: This work was funded through the Research Groups Program under Grant Number R.G.P.2/68/41 (I.A.).
Abstract: In this paper, we consider the length-biased weighted Lomax distribution and construct new acceptance sampling plans (ASPs) in which the life test is assumed to be truncated at a pre-assigned time. For the newly suggested ASPs, tables of the minimum sample sizes needed to assert a specific mean life of the test units are obtained. In addition, the values of the corresponding operating characteristic function and the associated producer's risks are calculated. Analyses of two real data sets are presented to investigate the applicability of the proposed acceptance sampling plans; one data set contains the first failure times of 20 small electric carts, and the other contains the failure times of the air conditioning system of an airplane. Comparisons are made between the proposed acceptance sampling plans and some existing acceptance sampling plans considered in this study, based on the minimum sample sizes. It is observed that the sample sizes based on the proposed acceptance sampling plans are smaller than those of their competitors considered in this study. The suggested acceptance sampling plans are recommended for practitioners in the field.
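The usual device behind such time-truncated plans is a binomial acceptance criterion: find the smallest n such that the probability of observing at most c failures by the truncation time t, evaluated at the specified mean life, does not exceed the consumer's risk. A hedged sketch follows; it uses the plain Lomax CDF only as a stand-in, whereas the paper's tables are built from the length-biased weighted Lomax distribution, and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import binom

def lomax_cdf(t, alpha, lam):
    """Plain Lomax CDF, used here only as a stand-in for the paper's
    length-biased weighted Lomax distribution."""
    return 1.0 - (1.0 + t / lam) ** (-alpha)

def min_sample_size(cdf, t, c, beta):
    """Smallest n with P(at most c failures by time t) <= beta, the usual
    binomial criterion for a time-truncated single acceptance sampling plan."""
    p = cdf(t)                      # failure probability by time t at the specified mean life
    n = c + 1
    while binom.cdf(c, n, p) > beta:
        n += 1
    return n

# Illustrative numbers only.
cdf = lambda t: lomax_cdf(t, alpha=3.0, lam=2.0)
print(min_sample_size(cdf, t=1.0, c=2, beta=0.10))
```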
Funding: This work was funded by the Deanship of Scientific Research at King Khalid University through the Large Groups Project under grant number RGP.2/132/43.
Abstract: Functional statistics is a new technique for dealing with data that can be viewed as curves or images. Parallel to this approach, Near-Infrared Reflectance (NIR) spectroscopy has been used in modern chemistry as a rapid, low-cost and accurate means of assessing an object's chemical properties. In this research, we investigate the quality of corn and cookie dough by analyzing spectroscopic data with certain cutting-edge statistical models. By applying functional models to the spectral data, we can predict the chemical components of corn and cookie dough. Kernel Functional Classical Estimation (KFCE), Kernel Functional Quantile Estimation (KFQE), Kernel Functional Expectile Estimation (KFEE), Semi-Partial Linear Functional Classical Estimation (SPLFCE), Semi-Partial Linear Functional Quantile Estimation (SPLFQE) and Semi-Partial Linear Functional Expectile Estimation (SPLFEE) are used to estimate accurately the different quantities present in corn and cookie dough. The selection of these functional models is based on their ability to construct a forecast region with a high level of confidence. We demonstrate that the considered models outperform traditional models such as partial least squares regression and principal component regression in terms of prediction accuracy. Furthermore, we show that models such as SPLFQE and SPLFEE are more robust than their competitors, in the sense that data heterogeneity has no effect on their efficiency.
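As a rough illustration of the kernel branch of these estimators, the sketch below implements a Nadaraya-Watson-type functional kernel regression (the idea behind KFCE) on toy curves. The distance, the Gaussian kernel, the bandwidth h and the synthetic spectra are assumptions made for illustration; the semi-partial linear, quantile and expectile variants studied in the paper are not reproduced.

```python
import numpy as np

def curve_distance(curve, curves):
    """Root-mean-square distance between one spectrum and each training spectrum,
    i.e. an approximate L2 distance on a common grid (up to a constant)."""
    return np.sqrt(np.mean((curves - curve) ** 2, axis=1))

def kernel_functional_estimate(x_new, X_train, y_train, h):
    """Nadaraya-Watson-type functional kernel regression: a locally weighted
    mean of the responses, with weights decaying in the curve distance."""
    d = curve_distance(x_new, X_train)
    w = np.exp(-0.5 * (d / h) ** 2)          # Gaussian kernel on curve distances
    return np.sum(w * y_train) / np.sum(w)

# Toy spectra: 60 curves on 100 wavelengths, response = mean level of the curve.
rng = np.random.default_rng(3)
grid = np.linspace(0.0, 1.0, 100)
X = np.array([np.sin(2 * np.pi * f * grid) + rng.normal(0, 0.05, grid.size)
              for f in rng.uniform(1, 3, 60)])
y = X.mean(axis=1)
print(kernel_functional_estimate(X[0], X[1:], y[1:], h=0.5), "vs", y[0])
```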
Funding: The authors thank the Deanship of Scientific Research at King Khalid University for funding this work through the Research Groups Program under Grant No. R.G.P.2/82/42.
Abstract: Many researchers measure the uncertainty of a random variable using quantile-based entropy techniques. These techniques are useful in engineering applications and have some exceptional characteristics compared with their distribution-function counterparts. Considering order statistics, the key focus of this article is to propose a new quantile-based Mathai-Haubold entropy and investigate its characteristics. The divergence measure of the Mathai-Haubold entropy is also considered and some of its properties are established. Further, based on order statistics, we propose the residual entropy of the quantile-based Mathai-Haubold entropy and prove some of its properties. The performance of the proposed quantile-based Mathai-Haubold entropy is investigated by simulation studies. Finally, a real data application is used to compare our proposed quantile-based entropy to the existing quantile entropies. The results reveal that our proposed entropy outperforms the other entropies.
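To make the quantile-based idea concrete, the sketch below evaluates a quantile-form entropy numerically for the exponential distribution and checks it against a closed form. It assumes the Mathai-Haubold entropy in the form M_α = (∫ f^{2-α}(x) dx - 1)/(α - 1) with α < 2, α ≠ 1, whose quantile version replaces the density by the quantile density q(u) = Q'(u); the order-statistics and residual versions developed in the paper are not reproduced here.

```python
import numpy as np

# Quantile density of the exponential(rate) law: Q(u) = -log(1-u)/rate, q(u) = 1/(rate*(1-u)).
rate, alpha = 2.0, 0.5                 # alpha < 2, alpha != 1 as required (illustrative values)

M = 200_000
u = (np.arange(M) + 0.5) / M           # midpoint grid on (0, 1)
q = 1.0 / (rate * (1.0 - u))

# Quantile form of the (assumed) Mathai-Haubold entropy:
# M_alpha = ( integral_0^1 q(u)^(alpha-1) du - 1 ) / (alpha - 1)
M_quantile = (np.mean(q ** (alpha - 1.0)) - 1.0) / (alpha - 1.0)

# Closed form for the exponential case, as a sanity check:
M_closed = (rate ** (1.0 - alpha) / (2.0 - alpha) - 1.0) / (alpha - 1.0)
print(M_quantile, M_closed)
```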
Funding: This work was funded through the Research Groups Program under Grant Number R.G.P.1/189/41. I.M.A. and M.K.A. received the grant.
Abstract: The problem of predicting continuous scalar outcomes from functional predictors has received a high level of interest in recent years in many fields, especially in the food industry. The k-nearest neighbor (k-NN) method of Near-Infrared Reflectance (NIR) analysis is practical, relatively easy to implement, and is becoming one of the most popular methods for assessing food quality based on NIR data. The method is often called the k-nearest neighbor classifier when it is used for classifying categorical variables, and k-nearest neighbor regression when it is applied to predicting non-categorical variables. The objective of this paper is to use the functional NIR spectroscopy approach to predict some chemical components with modern statistical models based on the kernel and k-nearest neighbour procedures. Three NIR spectroscopy data sets are used as examples, namely the cookie dough, sugar and Tecator data. Specifically, we propose three models for this kind of data, namely Functional Nonparametric Regression, Functional Robust Regression and Functional Relative Error Regression, each with both kernel and k-NN approaches, and compare them. The experimental results, obtained on several real data sets, show the higher efficiency of the k-NN predictor over the kernel predictor.
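A minimal sketch of the k-NN branch on toy curves is given below: the predictor averages the responses of the k training spectra closest to the new spectrum in an approximate L2 sense. The distance, the toy response and the value of k are illustrative assumptions, not the paper's robust or relative-error variants.

```python
import numpy as np

def knn_functional_predict(x_new, X_train, y_train, k):
    """Functional k-NN regression: average the responses of the k training
    curves closest to x_new in root-mean-square (approximate L2) distance."""
    d = np.sqrt(np.mean((X_train - x_new) ** 2, axis=1))
    nearest = np.argsort(d)[:k]
    return y_train[nearest].mean()

# Toy spectra: 80 noisy straight-line "curves" on 120 wavelengths.
rng = np.random.default_rng(11)
grid = np.linspace(0.0, 1.0, 120)
X = np.array([a * grid + rng.normal(0, 0.05, grid.size) for a in rng.uniform(0, 2, 80)])
y = X.mean(axis=1)                       # toy "chemical component" response
print(knn_functional_predict(X[0], X[1:], y[1:], k=5), "vs", y[0])
```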
Funding: Funds are available under Grant No. RGP.2/132/43 at King Khalid University, Kingdom of Saudi Arabia.
Abstract: Nonparametric (distribution-free) control charts have been introduced in recent years for situations in which quality characteristics do not follow a specific distribution. When sample selection is prohibitively expensive, ranked set sampling is preferred over simple random sampling because ranked-set-sampling-based control charts outperform simple-random-sampling-based control charts. In this study, we propose a nonparametric homogeneously weighted moving average control chart based on the Wilcoxon signed-rank test with ranked set sampling (NPHWMARSS) for detecting shifts in the process location of a continuous and symmetric distribution. Monte Carlo simulations are used to obtain the run length characteristics that evaluate the performance of the proposed NPHWMARSS control chart. The proposed NPHWMARSS control chart's performance is compared to that of parametric and nonparametric control charts: the exponentially weighted moving average (EWMA) control chart, the nonparametric EWMA control chart based on the Wilcoxon signed-rank test with simple random sampling, the nonparametric EWMA sign control chart, the nonparametric EWMA control chart based on the Wilcoxon signed-rank test with ranked set sampling, and the homogeneously weighted moving average control chart. The findings show that the proposed NPHWMARSS control chart performs better than its competitors, particularly for small shifts. Finally, an example is presented to demonstrate how the proposed scheme can be implemented in practice.
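For orientation, the sketch below shows only the HWMA smoothing step applied to a generic sequence of subgroup statistics, assuming the usual recursion in which each new statistic is combined with the mean of all earlier statistics (the target value is used at t = 1). Computing the Wilcoxon signed-rank statistic from ranked-set subgroups and calibrating the control limits by Monte Carlo, as the paper does, are left out.

```python
import numpy as np

def hwma(stats, lam, target):
    """Homogeneously weighted moving average: H_t = lam * S_t + (1 - lam) * mean of
    all previous statistics, with the in-control target used before any history exists."""
    out, past = [], []
    for s in stats:
        hist_mean = target if not past else float(np.mean(past))
        out.append(lam * s + (1.0 - lam) * hist_mean)
        past.append(s)
    return np.array(out)

# Toy in-control subgroup statistics, standing in for standardized Wilcoxon
# signed-rank values computed from ranked-set-sampled subgroups.
rng = np.random.default_rng(5)
stats = rng.normal(0.0, 1.0, size=30)
H = hwma(stats, lam=0.1, target=0.0)
# In practice the NPHWMARSS control limits are set by Monte Carlo so that the
# in-control average run length matches a nominal value (e.g. 370).
print(np.round(H[:5], 3))
```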
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Research Groups Program under grant number R.G.P.2/82/42. I.M.A. received the grant (www.kku.edu.sa).
Abstract: In estimation theory, researchers have put effort into developing estimators of the population mean that may give more precise results when adopting the ordinary least squares (OLS) method or robust regression techniques for estimating regression coefficients. But when the correlation is negative and outliers are present, the results can be distorted and OLS-type estimators may give misleading or highly biased estimates. Hence, this paper focuses on such issues through the use of non-conventional measures of dispersion and a robust estimation method. Precisely, we propose generalized estimators that use the auxiliary information of non-conventional measures of dispersion (Gini's mean difference, Downton's method and the probability-weighted moment) under ordinary least squares, and then adopt the Huber M-estimation technique on the suggested estimators. The proposed estimators are investigated in the presence of outliers under both negative and positive correlation between the study and auxiliary variables. Theoretical comparisons and a real data application are provided to show the strength of the proposed generalized estimators. It is found that the proposed generalized Huber-M-type estimators are more efficient than the suggested generalized estimators under the OLS estimation method considered in this study. The new proposed estimators will be useful in the future for data analysis and decision making.
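To illustrate the robust step alone, the sketch below fits a Huber M-type slope by iteratively reweighted least squares and plugs it into a plain regression-type estimator of the mean with a known auxiliary mean. The tuning constant k = 1.345, the simulated data and the assumed auxiliary mean are illustrative, and the paper's generalized estimators built on Gini's mean difference, Downton's method and probability-weighted moments are not reproduced.

```python
import numpy as np

def huber_slope(x, y, k=1.345, iters=50):
    """Huber M-estimate of intercept and slope via iteratively reweighted
    least squares; k = 1.345 is the usual tuning constant."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # OLS start
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust MAD scale
        u = r / max(s, 1e-12)
        w = np.where(np.abs(u) <= k, 1.0, k / np.abs(u))   # Huber weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta                                            # [intercept, slope]

# Regression-type estimator of the mean of y using a known mean of the auxiliary x.
rng = np.random.default_rng(2)
x = rng.gamma(3.0, 2.0, 80)
y = 5.0 - 0.8 * x + rng.normal(0, 1.0, 80)                 # negative correlation
y[:3] += 25.0                                              # a few outliers
mu_x = 6.0                                                 # assumed known auxiliary mean
b_ols = np.polyfit(x, y, 1)[0]
b_hub = huber_slope(x, y)[1]
print("OLS-type:", y.mean() + b_ols * (mu_x - x.mean()),
      "Huber-M-type:", y.mean() + b_hub * (mu_x - x.mean()))
```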
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under grant number RGP.2/132/43.
Abstract: At present, Bayesian networks (BN) are widely used for representing uncertain knowledge in many disciplines, including biology, computer science, risk analysis, service quality analysis and business. But they suffer from the problem that as the number of nodes and edges increases, structure learning becomes more difficult and the algorithms become inefficient. To solve this problem, heuristic optimization algorithms are used, which tend to find a near-optimal answer rather than an exact one, with particle swarm optimization (PSO) being one of them. PSO is a swarm intelligence-based algorithm inspired by flocks of birds and how they search for food. PSO is employed widely because it is easy to code, converges quickly, and can be parallelized easily. We use a recently proposed version of PSO called generalized particle swarm optimization (GEPSO) to learn the Bayesian network structure. We construct an initial directed acyclic graph (DAG) by using the max-min parents and children (MMPC) algorithm and cross relative average entropy. This DAG is used to create a population for the GEPSO optimization procedure. Moreover, we propose a velocity update procedure to increase the efficiency of the algorithmic search process. The experimental results show that as the complexity of the data set increases, our algorithm, Bayesian network generalized particle swarm optimization (BN-GEPSO), outperforms the PSO algorithm in terms of the Bayesian information criterion (BIC) score.
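Since the comparison criterion is the BIC score of a candidate DAG on discrete data, a hedged sketch of that scoring function is given below. The toy network, the column names and the parameter bookkeeping are assumptions made for illustration; the MMPC initialization and the GEPSO search themselves are not implemented here.

```python
import numpy as np
import pandas as pd

def bic_score(data, dag):
    """BIC of a discrete Bayesian network: maximized log-likelihood minus
    (log N / 2) * number of free parameters. `dag` maps each column name
    to the list of its parent column names."""
    n = len(data)
    score = 0.0
    for node, parents in dag.items():
        r = data[node].nunique()                          # number of states of the node
        if parents:
            joint = data.groupby(parents + [node]).size()
            parent_tot = joint.groupby(level=list(range(len(parents)))).transform("sum")
            ll = float((joint * np.log(joint / parent_tot)).sum())
            q = len(set(joint.index.droplevel(-1)))       # observed parent configurations
        else:
            counts = data[node].value_counts()
            ll = float(np.sum(counts * np.log(counts / n)))
            q = 1
        score += ll - 0.5 * np.log(n) * q * (r - 1)       # penalty for free parameters
    return score

# Toy three-node network X -> Z <- Y on synthetic binary data.
rng = np.random.default_rng(0)
N = 2000
X = rng.integers(0, 2, N)
Y = rng.integers(0, 2, N)
Z = ((X ^ Y) ^ (rng.random(N) < 0.1)).astype(int)         # noisy XOR of the parents
df = pd.DataFrame({"X": X, "Y": Y, "Z": Z})
true_dag  = {"X": [], "Y": [], "Z": ["X", "Y"]}
empty_dag = {"X": [], "Y": [], "Z": []}
print(bic_score(df, true_dag), ">", bic_score(df, empty_dag))
```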