In recent years, the Kriging model has gained wide popularity in fields such as spatial geology, econometrics, and computer experiments, and research on this model has proliferated accordingly. In this paper, the authors propose a model averaging estimation based on the best linear unbiased prediction of the Kriging model and the leave-one-out cross-validation method, taking model uncertainty into account. The authors present a weight selection criterion for the model averaging estimation and provide two theoretical justifications for the proposed method. First, the weight estimated under the proposed criterion is asymptotically optimal in achieving the lowest possible prediction risk. Second, when the candidate model set includes correctly specified models, the proposed method asymptotically assigns all weight to them. The effectiveness of the proposed method is verified through numerical analyses.
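The weight-selection idea behind such criteria can be sketched numerically. The toy example below is not the authors' Kriging-BLUP estimator: it substitutes simple polynomial candidates fit to synthetic data, stacks their leave-one-out predictions, and picks simplex weights minimizing the leave-one-out squared-error risk.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 80
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

# Candidate models: polynomial trends of degree 1..4 (stand-ins for Kriging fits).
degrees = [1, 2, 3, 4]

def loo_predictions(deg):
    """Leave-one-out predictions of a polynomial fit of the given degree."""
    preds = np.empty(n)
    for i in range(n):
        mask = np.ones(n, dtype=bool)
        mask[i] = False
        coef = np.polyfit(x[mask], y[mask], deg)
        preds[i] = np.polyval(coef, x[i])
    return preds

P = np.column_stack([loo_predictions(d) for d in degrees])

def cv_risk(w):
    """Leave-one-out squared-error risk of the weighted model average."""
    return float(((P @ w - y) ** 2).mean())

# Choose weights on the simplex (non-negative, summing to one) that
# minimise the leave-one-out risk.
res = minimize(cv_risk, np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
w = res.x

cv_single = ((P - y[:, None]) ** 2).mean(axis=0)   # LOO risk of each candidate
cv_avg = cv_risk(w)
print(w.round(3), round(cv_avg, 4), round(float(cv_single.min()), 4))
```

By construction the averaged risk cannot exceed that of the best single candidate, since placing all weight on one model is a feasible point of the simplex.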
Aviation accidents are currently one of the leading causes of significant injuries and deaths worldwide. This motivates researchers to investigate aircraft safety using data analysis approaches based on advanced machine learning algorithms. To assess aviation safety and identify the causes of incidents, a classification model with a light gradient boosting machine (LGBM) based on the Aviation Safety Reporting System (ASRS) has been developed. It is improved by k-fold cross-validation with a hybrid sampling model (HSCV), which can boost classification performance while maintaining data balance. The results show that the LGBM-HSCV model significantly improves accuracy while alleviating data imbalance. The comparative analysis comprises a vertical comparison with other cross-validation (CV) methods and a lateral comparison across different numbers of folds. In addition, two further CV approaches based on the improved method are discussed: one with a different sampling and folding order, and one with additional CV. According to the assessment indices across methods, the LGBM-HSCV model proposed here is effective at detecting incident causes. The improved model for imbalanced data categorization may serve as a point of reference for similar data processing tasks, and the model's accurate identification of civil aviation incident causes can help improve civil aviation safety.
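A minimal sketch of cross-validation with hybrid sampling: within each training fold, the majority class is undersampled and the minority class oversampled before fitting, so that the held-out fold keeps its original imbalance. The classifier, data, and geometric-mean target size below are stand-ins (scikit-learn's GradientBoostingClassifier instead of LGBM, plain random resampling instead of the paper's HSCV scheme).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(1)
X_maj = rng.normal(0.0, 1.0, (300, 4))          # majority class
X_min = rng.normal(1.5, 1.0, (60, 4))           # minority class
X = np.vstack([X_maj, X_min])
y = np.array([0] * 300 + [1] * 60)

def hybrid_resample(Xt, yt, rng):
    """Hybrid sampling sketch: undersample the majority class and oversample
    the minority class so both meet at the geometric mean of the class sizes."""
    idx0, idx1 = np.flatnonzero(yt == 0), np.flatnonzero(yt == 1)
    target = int(np.sqrt(len(idx0) * len(idx1)))
    keep0 = rng.choice(idx0, target, replace=False)   # undersampling
    keep1 = rng.choice(idx1, target, replace=True)    # oversampling
    sel = np.concatenate([keep0, keep1])
    return Xt[sel], yt[sel]

scores = []
for tr, te in StratifiedKFold(5, shuffle=True, random_state=1).split(X, y):
    Xr, yr = hybrid_resample(X[tr], y[tr], rng)       # resample training folds only
    clf = GradientBoostingClassifier(random_state=1).fit(Xr, yr)
    scores.append(balanced_accuracy_score(y[te], clf.predict(X[te])))
print(round(float(np.mean(scores)), 3))
```

Resampling inside the loop, rather than before splitting, avoids leaking duplicated minority samples into the validation folds.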
In deriving a regression model, analysts often have to use variable selection, despite the problems introduced by data-dependent model building. Resampling approaches have been proposed to handle some of the critical issues. In order to assess and compare several strategies, we conduct a simulation study with 15 predictors and a complex correlation structure in the linear regression model. Using sample sizes of 100 and 400 and estimates of the residual variance corresponding to R² of 0.50 and 0.71, we consider four scenarios with varying amounts of information. We also consider two examples with 24 and 13 predictors, respectively. We discuss the value of cross-validation, shrinkage, and backward elimination (BE) with varying significance levels. We assess whether two-step approaches using global or parameterwise shrinkage (PWSF) can improve selected models, and we compare the results to models derived with the LASSO procedure. Besides MSE, we use model sparsity and further criteria for model assessment. The amount of information in the data influences the selected models and the comparison of the procedures. None of the approaches was best in all scenarios. The performance of backward elimination with a suitably chosen significance level was not worse than that of the LASSO, and the BE models selected were much sparser, an important advantage for interpretation and transportability. Compared to global shrinkage, PWSF had better performance. Provided that the amount of information is not too small, we conclude that BE followed by PWSF is a suitable approach when variable selection is a key part of data analysis.
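Backward elimination with a chosen significance level can be sketched in a few lines: refit OLS, drop the predictor with the largest p-value above alpha, repeat. The data, effect sizes, and alpha value below are illustrative choices, not the simulation design of the study.

```python
import numpy as np
from scipy import stats

def ols_pvalues(X, y):
    """OLS with an intercept; returns two-sided t-test p-values for the slopes."""
    n, p = X.shape
    Z = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    sigma2 = resid @ resid / (n - p - 1)
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Z.T @ Z)))
    t = beta / se
    return 2 * stats.t.sf(np.abs(t), n - p - 1)[1:]   # drop the intercept

def backward_eliminate(X, y, alpha=0.157):
    """Drop the least significant predictor until every p-value is below alpha.
    alpha = 0.157 roughly mimics AIC-based selection in large samples."""
    keep = list(range(X.shape[1]))
    while keep:
        pvals = ols_pvalues(X[:, keep], y)
        worst = int(np.argmax(pvals))
        if pvals[worst] <= alpha:
            break
        keep.pop(worst)
    return keep

rng = np.random.default_rng(2)
n = 400
X = rng.normal(size=(n, 15))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 1.0 * X[:, 7] + rng.normal(0, 1, n)
print(backward_eliminate(X, y))   # the true predictors 0, 3, 7 should survive
```

A larger alpha yields less sparse models; the abstract's point is precisely that this choice trades sparsity against predictive performance.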
For the nonparametric regression model Y_ni = g(x_ni) + ε_ni, i = 1, ..., n, with regularly spaced nonrandom design, the authors study the behavior of the nonlinear wavelet estimator of g(x). When the threshold and truncation parameters are chosen by cross-validation on the average squared error, strong consistency for the case of dyadic sample size and moment consistency for arbitrary sample size are established under some regularity conditions.
Background: Cardiovascular diseases are closely linked to atherosclerotic plaque development and rupture. Plaque progression prediction is of fundamental significance to cardiovascular research and to disease diagnosis, prevention, and treatment. Generalized linear mixed models (GLMM) are an extension of linear models for categorical responses that accounts for the correlation among observations. Methods: Magnetic resonance image (MRI) data of carotid atherosclerotic plaques were acquired from 20 patients with consent obtained, and 3D thin-layer models were constructed to calculate plaque stress and strain for plaque progression prediction. Ten morphological and biomechanical risk factors were extracted from all slices for analysis: wall thickness (WT), lipid percent (LP), minimum cap thickness (MinCT), plaque area (PA), plaque burden (PB), lumen area (LA), maximum plaque wall stress (MPWS), maximum plaque wall strain (MPWSn), average plaque wall stress (APWS), and average plaque wall strain (APWSn). Wall thickness increase (WTI), plaque burden increase (PBI), and plaque area increase (PAI) were chosen as three measures of plaque progression. Generalized linear mixed models with a 5-fold cross-validation strategy were used to calculate prediction accuracy for each predictor and to identify the optimal predictor, with prediction accuracy defined as the sum of sensitivity and specificity. All 201 MRI slices were randomly divided into 4 training subgroups and 1 verification subgroup. The training subgroups were used for model fitting, and the verification subgroup was used to evaluate the model. All combinations (1023 in total) of the 10 risk factors were fed to the GLMM, and the prediction accuracy of each predictor was taken from the point on the ROC (receiver operating characteristic) curve with the highest sum of specificity and sensitivity. Results: LA was the best single predictor for PBI, with the highest prediction accuracy (1.3601) and an area under the ROC curve (AUC) of 0.6540, followed by APWSn (1.3363) with AUC = 0.6342. The optimal predictor among all possible combinations for PBI was the combination of LA, PA, LP, WT, MPWS, and MPWSn, with prediction accuracy = 1.4146 (AUC = 0.7158). LA was once again the best single predictor for PAI, with the highest prediction accuracy (1.1846) and AUC = 0.6064, followed by MPWSn (1.1832) with AUC = 0.6084. The combination of PA, PB, WT, MPWS, MPWSn, and APWSn gave the best prediction accuracy (1.3025) for PAI, with an AUC value of 0.6657. PA was the best single predictor for WTI, with the highest prediction accuracy (1.2887) and AUC = 0.6415, followed by WT (1.2540) with AUC = 0.6097. The combination of PA, PB, WT, LP, MinCT, MPWS, and MPWSn was the best predictor for WTI, with prediction accuracy 1.3140 and AUC = 0.6552. This indicated that PBI was a more predictable measure than WTI and PAI. The combinational predictors improved prediction accuracy by 9.95%, 4.01%, and 1.96% over the best single predictors for PAI, PBI, and WTI (AUC values improved by 9.78%, 9.45%, and 2.14%), respectively. Conclusions: The use of GLMM with a 5-fold cross-validation strategy combining both morphological and biomechanical risk factors could potentially improve the accuracy of carotid plaque progression prediction. This study suggests that a linear combination of multiple predictors can provide a potential improvement to existing plaque assessment schemes.
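Defining prediction accuracy as the maximum of sensitivity plus specificity over the ROC curve can be sketched as below. A logistic model on synthetic data stands in for the GLMM, and out-of-fold probabilities from 5-fold cross-validation are scored.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=(n, 2))                          # two toy risk factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)

# Out-of-fold predicted probabilities from 5-fold cross-validation.
scores = np.empty(n)
for tr, te in StratifiedKFold(5, shuffle=True, random_state=3).split(X, y):
    model = LogisticRegression().fit(X[tr], y[tr])   # stand-in for the GLMM
    scores[te] = model.predict_proba(X[te])[:, 1]

fpr, tpr, thr = roc_curve(y, scores)
acc = tpr + (1 - fpr)              # sensitivity + specificity at each cut-off
best = int(np.argmax(acc))
auc = roc_auc_score(y, scores)
print(round(float(acc[best]), 3), round(float(auc), 3))
```

The chosen operating point is the one with the maximum Youden index; a value above 1 indicates better-than-chance discrimination, mirroring how the abstract's accuracies between 1.18 and 1.41 should be read.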
Bulked-segregant analysis by deep sequencing (BSA-seq) is a widely used method for mapping QTL (quantitative trait loci) due to its simplicity, speed, cost-effectiveness, and efficiency. However, the ability of BSA-seq to detect QTL is often limited by inappropriate experimental designs, as evidenced by numerous practical studies. Most BSA-seq studies have used small to medium-sized populations, with F2 populations being the most common choice. Nevertheless, theoretical studies have shown that using a large population with an appropriate pool size can significantly enhance the power and resolution of QTL detection in BSA-seq, with F3 populations offering notable advantages over F2 populations. To provide an experimental demonstration, we tested the power of BSA-seq to identify QTL controlling days from sowing to heading (DTH) in a 7200-plant rice F3 population in two environments, with a pool size of approximately 500. Each experiment identified 34 QTL, an order of magnitude more than reported in most BSA-seq experiments; 23 were detected in both experiments, and 17 of these were located near 41 previously reported QTL and eight cloned genes known to control DTH in rice. These results indicate that QTL mapping by BSA-seq in large F3 populations and multi-environment experiments can achieve high power, resolution, and reliability.
Rockburst is a common geological disaster in underground engineering that seriously threatens the safety of personnel, equipment, and property. Using machine learning models to evaluate rockburst risk is gradually becoming a trend. In this study, integrated algorithms under the Gradient Boosting Decision Tree (GBDT) framework were used to evaluate and classify rockburst intensity. First, a total of 301 rockburst data samples were obtained from a case database, and the data were preprocessed using the synthetic minority over-sampling technique (SMOTE). Then, rockburst evaluation models including GBDT, eXtreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and Categorical Features Gradient Boosting (CatBoost) were established, and the optimal hyperparameters of the models were obtained through randomized grid search and five-fold cross-validation. Afterwards, the optimal hyperparameter configurations were used to fit the evaluation models, which were then analyzed on the test set. To evaluate performance, metrics including accuracy, precision, recall, and F1-score were selected for analysis and comparison with other machine learning models. Finally, the trained models were used to conduct rockburst risk assessment on rock samples from a mine in Shanxi Province, China, providing theoretical guidance for the mine's safe production. The models under the GBDT framework perform well in the evaluation of rockburst levels, and the proposed methods can provide a reliable reference for rockburst risk level analysis and safety management.
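SMOTE's core step, interpolating between a minority sample and one of its nearest minority neighbours, can be sketched with plain NumPy. This is a simplified illustration on random data, not the library implementation normally used in practice.

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: each synthetic point is a random interpolation
    between a minority sample and one of its k nearest minority neighbours."""
    if rng is None:
        rng = np.random.default_rng()
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                       # exclude self-distances
    nn = np.argsort(d, axis=1)[:, :k]                 # k nearest neighbours
    base = rng.integers(0, len(X_min), n_new)         # random base samples
    nbr = nn[base, rng.integers(0, k, n_new)]         # random neighbour of each
    gap = rng.uniform(0, 1, (n_new, 1))               # interpolation fraction
    return X_min[base] + gap * (X_min[nbr] - X_min[base])

rng = np.random.default_rng(4)
X_min = rng.normal(5, 1, (30, 3))                     # minority class samples
X_new = smote(X_min, n_new=70, rng=rng)
print(X_new.shape)                                    # (70, 3)
```

Because each synthetic point lies on a segment between two real minority samples, the generated data stay inside the minority class's occupied region rather than being exact duplicates.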
In forest science and practice, total tree height is one of the basic morphometric attributes at the tree level, and it has been closely linked with important stand attributes. In the current research, sixteen nonlinear functions for height prediction were tested in terms of their fitting ability against samples of Abies borisii-regis and Pinus sylvestris trees from mountainous forests in central Greece. The fitting procedure was based on generalized nonlinear weighted regression. At the final stage, a five-quantile nonlinear height-diameter model was developed for both species through a quantile regression approach, to estimate the entire conditional distribution of tree height, enabling the evaluation of the diameter impact at various quantiles and providing a comprehensive understanding of the proposed relationship across the distribution. The results clearly showed that, employing diameter as the sole independent variable, the 3-parameter Hossfeld function and the 2-parameter Näslund function explained approximately 84.0% and 81.7% of the total height variance for King Boris fir and Scots pine, respectively. Furthermore, the models exhibited low levels of error in both cases (2.310 m for the fir and 3.004 m for the pine), yielding unbiased predictions for both fir (-0.002 m) and pine (-0.004 m). Notably, all the required assumptions of homogeneity and normality of the associated residuals were met through the weighting procedure, while the quantile regression approach provided additional insights into the height-diameter allometry of these species. The proposed models can become valuable tools for operational forest management planning, particularly for wood production and the conservation of mountainous forest ecosystems.
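A quantile height-diameter fit of the kind described can be sketched by minimizing the pinball loss for the 2-parameter Näslund curve at several quantiles. The diameters, heights, and parameter values below are simulated assumptions, not the Greek field data.

```python
import numpy as np
from scipy.optimize import minimize

def naslund(d, a, b):
    """Näslund height-diameter curve: h = 1.3 + d^2 / (a + b*d)^2."""
    return 1.3 + d**2 / (a + b * d)**2

def pinball(params, d, h, tau):
    """Quantile (pinball) loss for the Näslund curve at quantile tau."""
    r = h - naslund(d, *params)
    return np.mean(np.maximum(tau * r, (tau - 1) * r))

rng = np.random.default_rng(5)
d = rng.uniform(8, 50, 300)                             # diameters (cm)
h = naslund(d, 1.2, 0.18) + rng.normal(0, 1.5, 300)     # heights (m) + noise

fits = {}
for tau in (0.1, 0.5, 0.9):
    res = minimize(pinball, x0=[1.0, 0.2], args=(d, h, tau),
                   method="Nelder-Mead")
    fits[tau] = res.x

d0 = 30.0                                               # predict at 30 cm
preds = {tau: float(naslund(d0, *p)) for tau, p in fits.items()}
print({t: round(v, 2) for t, v in preds.items()})
```

Fitting several quantiles of the same curve family yields the band of conditional height distributions that the abstract's five-quantile model provides, rather than a single mean response.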
Parkinson's disease (PD) is a chronic neurological condition that progresses over time. People start to have trouble speaking, writing, walking, or performing other basic skills as dopamine-generating neurons in some brain regions are injured or die. Patients' symptoms become more severe as their signs worsen over time. In this study, we applied state-of-the-art machine learning algorithms to diagnose Parkinson's disease and identify related risk factors. The research used a publicly available dataset on PD consisting of a set of significant characteristics of the disease. We aim to apply soft computing techniques and provide an effective solution for medical professionals to diagnose PD accurately. The methodology involves developing models using machine learning algorithms. Eight different machine learning techniques were adopted for model selection: Random Forest (RF), Decision Tree (DT), Support Vector Machine (SVM), Naïve Bayes (NB), Light Gradient Boosting Machine (LightGBM), K-Nearest Neighbours (KNN), Extreme Gradient Boosting (XGBoost), and Logistic Regression (LR). Subsequently, the constructed models were validated through 10-fold cross-validation and the Receiver Operating Characteristic (ROC) Area Under the Curve (AUC). In addition, GridSearchCV was used to find each algorithm's best parameters, and the models were then trained through this hyperparameter tuning approach. With 98% accuracy, LightGBM had the highest accuracy in this study. RF, KNN, and SVM came second with 96% accuracy. The performance scores of NB and LR were 76% and 83%, respectively. Notably, after applying 10-fold cross-validation, the average performance score of LightGBM was 93%, while its ROC-AUC was 0.92, indicating that the LightGBM model reached a satisfactory level. Finally, we extracted meaningful insights and identified potential gaps in PD research, contributing to the significance and impact of work in this area. The application of advanced machine learning algorithms holds promise in accurately diagnosing PD and shedding light on crucial aspects of the disease. This research has the potential to enhance the understanding and management of PD, ultimately improving the lives of individuals affected by this condition.
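The tuning-then-validation pipeline (GridSearchCV for hyperparameters, then 10-fold cross-validation of the tuned model) can be sketched with scikit-learn. A random forest on synthetic data stands in for the eight classifiers compared in the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                           random_state=6)

# Step 1: hyperparameter tuning with grid search and inner 5-fold CV.
grid = GridSearchCV(
    RandomForestClassifier(random_state=6),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=5,
    scoring="accuracy",
).fit(X, y)

# Step 2: 10-fold cross-validation of the tuned model, scored by ROC-AUC.
cv10 = cross_val_score(grid.best_estimator_, X, y,
                       cv=StratifiedKFold(10, shuffle=True, random_state=6),
                       scoring="roc_auc")
print(grid.best_params_, round(float(cv10.mean()), 3))
```

Reporting the cross-validated AUC alongside the tuned accuracy, as the abstract does (93% vs. 0.92), guards against optimism from evaluating on the same split used for tuning.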
Adaptive fractional polynomial modeling of general correlated outcomes is formulated to address nonlinearity in means, variances/dispersions, and correlations. Means and variances/dispersions are modeled using general...Adaptive fractional polynomial modeling of general correlated outcomes is formulated to address nonlinearity in means, variances/dispersions, and correlations. Means and variances/dispersions are modeled using generalized linear models in fixed effects/coefficients. Correlations are modeled using random effects/coefficients. Nonlinearity is addressed using power transforms of primary (untransformed) predictors. Parameter estimation is based on extended linear mixed modeling generalizing both generalized estimating equations and linear mixed modeling. Models are evaluated using likelihood cross-validation (LCV) scores and are generated adaptively using a heuristic search controlled by LCV scores. Cases covered include linear, Poisson, logistic, exponential, and discrete regression of correlated continuous, count/rate, dichotomous, positive continuous, and discrete numeric outcomes treated as normally, Poisson, Bernoulli, exponentially, and discrete numerically distributed, respectively. Example analyses are also generated for these five cases to compare adaptive random effects/coefficients modeling of correlated outcomes to previously developed adaptive modeling based on directly specified covariance structures. Adaptive random effects/coefficients modeling substantially outperforms direct covariance modeling in the linear, exponential, and discrete regression example analyses. It generates equivalent results in the logistic regression example analyses and it is substantially outperformed in the Poisson regression case. Random effects/coefficients modeling of correlated outcomes can provide substantial improvements in model selection compared to directly specified covariance modeling. 
However, directly specified covariance modeling can generate competitive or substantially better results in some cases while usually requiring less computation time.
In real-world applications, datasets frequently contain outliers, which can hinder the generalization ability of machine learning models. Bayesian classifiers, a popular supervised learning method, rely on accurate probability density estimation for classifying continuous datasets. However, achieving precise density estimation with datasets containing outliers poses a significant challenge. This paper introduces a Bayesian classifier that utilizes optimized robust kernel density estimation to address this issue. The proposed method enhances the accuracy of probability density estimation by mitigating the impact of outliers on the estimated distribution of the training sample. Unlike the conventional kernel density estimator, the robust estimator can be seen as a weighted kernel mapping summary for each sample. This kernel mapping performs the inner product in a Hilbert space, allowing the kernel density estimate to be viewed as the average of the samples' mappings in the Hilbert space under a reproducing kernel. M-estimation techniques are used to obtain accurate mean values and solve for the weights. Meanwhile, complete cross-validation is used as the objective function in the search for the optimal bandwidth, which determines the estimator. Harris Hawks Optimization is applied to this objective function to improve estimation accuracy. The experimental results show that it outperforms other optimization algorithms in convergence speed and objective function value during the bandwidth search. The optimal robust kernel density estimator achieves better fitting performance than the traditional kernel density estimator when the training data contain outliers. A naïve Bayesian classifier with optimal robust kernel density estimation improves generalization when classifying data with outliers.
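One way the robustness idea can be sketched: iteratively down-weight samples far from a robustly re-estimated centre using Huber M-estimation weights, then form a weighted kernel density estimate. This input-space simplification is an assumption for illustration; it is not the paper's kernel-space estimator, and it omits the bandwidth search by Harris Hawks Optimization.

```python
import numpy as np

def huber_weights(r, c):
    """Huber M-estimation weights: 1 inside the threshold c, c/|r| outside."""
    w = np.ones_like(r)
    far = r > c
    w[far] = c / r[far]
    return w

def robust_kde(x, grid, h, n_iter=10):
    """Weighted Gaussian KDE in which samples far from a robustly
    re-estimated centre are down-weighted by Huber weights."""
    w = np.ones_like(x)
    for _ in range(n_iter):
        mu = np.sum(w * x) / np.sum(w)                 # weighted centre
        r = np.abs(x - mu)
        w = huber_weights(r, c=1.345 * np.median(r) / 0.6745)
    w = w / w.sum()                                    # normalise the weights
    K = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (K * w[None, :]).sum(axis=1) / (h * np.sqrt(2 * np.pi)), w

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(8, 0.5, 5)])  # 5 outliers
grid = np.linspace(-4, 10, 200)
dens, w = robust_kde(x, grid, h=0.4)
print(round(float(w[:95].mean() / w[95:].mean()), 1))  # inliers weighted higher
```

Relative to an equal-weight KDE, the spurious density bump near the outlier cluster is strongly suppressed while the bulk of the distribution is almost unchanged.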
Background: A random multiple-regression model that simultaneously fits all allele substitution effects for additive markers or haplotypes as uncorrelated random effects was proposed for best linear unbiased prediction using whole-genome data. Leave-one-out cross-validation can be used to quantify the predictive ability of a statistical model. Methods: Naive application of leave-one-out cross-validation is computationally intensive because the training and validation analyses need to be repeated n times, once for each observation. Efficient leave-one-out cross-validation strategies are presented here, requiring little more effort than a single analysis. Results: The efficient leave-one-out cross-validation strategy is 786 times faster than the naive application for a simulated dataset with 1,000 observations and 10,000 markers, and 99 times faster with 1,000 observations and 100 markers. These efficiencies relative to the naive approach using the same model increase with the number of observations. Conclusions: Efficient leave-one-out cross-validation strategies are presented here, requiring little more effort than a single analysis.
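For linear smoothers such as ridge regression (used here as a stand-in for the random-regression BLUP model), the efficiency gain has a closed form: the leave-one-out residual equals the full-fit residual divided by one minus the corresponding hat-matrix diagonal, so a single fit suffices.

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 200, 50
X = rng.normal(size=(n, p))            # stand-in for marker covariates
y = X @ rng.normal(0, 0.3, p) + rng.normal(0, 1, n)
lam = 1.0                              # ridge penalty (plays the BLUP shrinkage role)

# Single fit: hat matrix H = X (X'X + lam*I)^{-1} X'.
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
e = y - H @ y                          # full-fit residuals
loo_fast = e / (1 - np.diag(H))        # all n LOO residuals from ONE fit

# Naive LOO for comparison: n separate fits, one per held-out observation.
loo_naive = np.empty(n)
for i in range(n):
    m = np.ones(n, dtype=bool)
    m[i] = False
    beta = np.linalg.solve(X[m].T @ X[m] + lam * np.eye(p), X[m].T @ y[m])
    loo_naive[i] = y[i] - X[i] @ beta
print(np.allclose(loo_fast, loo_naive))   # True
```

The identity is exact for ridge (via the Sherman-Morrison formula), which is why the speed-ups in the abstract grow with the number of observations: the naive approach repeats the expensive solve n times, the shortcut does it once.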
The use of machine learning to predict student employability is important for analysing a student's capability to get a job. Based on the results of this type of analysis, university managers can improve the employability of their students, which can help attract students in the future. In addition, learners can focus during their studies on the essential skills identified through this analysis, to increase their employability. An effective method called OPT-BAG (OPTimisation of BAGging classifiers) was therefore developed to model the problem of predicting student employability. The model can predict the employability of students based on their competencies and reveal weaknesses that need to be improved. First, we analyse the relationships between several variables and the outcome variable using a correlation heatmap for a student employability dataset. Next, a standard scaler function is applied in the preprocessing module to normalise the variables in the dataset. The training set is then input to our model to identify the optimal parameters for the bagging classifier using a grid search cross-validation technique. Finally, the OPT-BAG model, based on a bagging classifier with the optimal parameters found in the previous step, is trained on the training dataset to predict student employability. The empirical outcomes in terms of accuracy, precision, recall, and F1 indicate that the OPT-BAG approach outperforms other cutting-edge machine learning models in predicting student employability. In this study, we also analyse the factors affecting the recruitment process of employers, and find that general appearance, mental alertness, and communication skills are the most important. This indicates that educational institutions should focus on these factors during the learning process to improve student employability.
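The OPT-BAG pipeline as described (standard scaling, then a grid search with cross-validation over bagging-classifier parameters) can be sketched with scikit-learn; the dataset and parameter grid below are illustrative assumptions, not the study's employability data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=400, n_features=8, random_state=9)
X = StandardScaler().fit_transform(X)            # the standard-scaler step

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, stratify=y,
                                      random_state=9)

# Grid search with cross-validation over bagging-classifier parameters.
grid = GridSearchCV(
    BaggingClassifier(random_state=9),
    param_grid={"n_estimators": [10, 25, 50], "max_samples": [0.5, 1.0]},
    cv=5,
    scoring="f1",
).fit(Xtr, ytr)

# Refit with the optimal parameters is done by GridSearchCV; evaluate held out.
f1 = f1_score(yte, grid.best_estimator_.predict(Xte))
print(grid.best_params_, round(float(f1), 3))
```

Holding out a test split that the grid search never sees keeps the reported F1 honest, matching the train/evaluate separation described in the abstract.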
Maintenance operations have a critical influence on power generation by wind turbines (WT). Advanced algorithms must analyze large volumes of data from condition monitoring systems (CMS) to determine actual working conditions and avoid false alarms. This paper proposes different support vector machine (SVM) algorithms for the prediction and detection of false alarms. K-fold cross-validation (CV) is applied to evaluate the classification reliability of these algorithms. Supervisory Control and Data Acquisition (SCADA) data from an operating WT are used to test the proposed approach. The quadratic SVM achieved an accuracy rate of 98.6%. Misclassifications from the confusion matrix, alarm log, and maintenance records are analyzed to obtain quantitative information and determine whether an alarm is false. The classifier reduces the number of false alarms, i.e., misclassifications, by 25%. These results demonstrate that the proposed approach offers high reliability and accuracy in false alarm identification.
BACKGROUND: Our study expands on a large body of evidence in the field of neuropsychiatric imaging with cognitive, affective, and behavioral tasks adapted for the functional magnetic resonance imaging (fMRI) experimental environment. There is sufficient evidence that common networks underpin activations in task-based fMRI across different mental disorders. AIM: To investigate whether specific neural circuits underpin differential item responses to depressive, paranoid, and neutral items (DN) in patients with schizophrenia (SCZ) and major depressive disorder (MDD), respectively. METHODS: Sixty patients with SCZ or MDD were recruited. All patients were scanned on a 3T magnetic resonance tomography platform with an fMRI paradigm comprising a block design, including blocks with diagnostic paranoid (DP) items, depression-specific (DS) items, and DN items from a general interest scale. We performed a two-sample t-test between the two groups, SCZ patients and depressive patients. Our purpose was to observe the different brain networks activated during each specific condition of the task (DS, DP, and DN). RESULTS: Several significant results emerged from the comparison between the SCZ and depressive groups performing this task. We identified one component that is task-related and independent of condition (shared between all three conditions), composed of regions within the temporal (right superior and middle temporal gyri), frontal (left middle and inferior frontal gyri), and limbic/salience systems (right anterior insula). Another component is related to both diagnosis-specific conditions (DS and DP), i.e., it is shared between the depressive and SCZ groups, and includes frontal motor/language and parietal areas. One specific component is modulated preferentially by the DP condition and is related mainly to prefrontal regions, whereas two other components are significantly modulated by the DS condition and include clusters within the default mode network, such as the posterior cingulate and precuneus, several occipital areas, including the lingual and fusiform gyri, as well as the parahippocampal gyrus. Finally, component 12 appeared to be unique to the neutral condition. In addition, circuits across components were determined to be either common or distinct in the preferential processing of the sub-scales of the task. CONCLUSION: This study delivers further evidence in support of the model of trans-disciplinary cross-validation in psychiatry.
Regression models for survival time data involve estimating the hazard rate as a function of predictor variables and associated slope parameters. An adaptive approach is formulated for such hazard regression modeling. The hazard rate is modeled using fractional polynomials, that is, linear combinations of products of power transforms of time together with other available predictors. These fractional polynomial models are restricted to generating positive-valued hazard rates and decreasing survival functions. Exponentially distributed survival times are a special case. Parameters are estimated using maximum likelihood estimation, allowing for right-censored survival times. Models are evaluated and compared using likelihood cross-validation (LCV) scores. LCV scores and tolerance parameters are used to control an adaptive search through alternative fractional polynomial hazard rate models, in order to identify effective models for the underlying survival time data. These methods are demonstrated using two different survival time data sets, one of survival times for lung cancer patients and one for multiple myeloma patients. For the lung cancer data, the hazard rate depends distinctly on time. However, controlling for cell type provides a distinct improvement, after which the hazard rate depends only on cell type and no longer on time. Furthermore, Cox regression is unable to identify a cell type effect. For the multiple myeloma data, the hazard rate also depends distinctly on time. Moreover, consideration of hemoglobin at diagnosis provides a distinct improvement, the hazard rate still depends distinctly on time, and hemoglobin distinctly moderates the effect of time on the hazard rate. These results indicate that adaptive hazard rate modeling can provide unique insights into survival time data.
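The exponential special case mentioned above admits a compact maximum likelihood sketch with right censoring: the log-likelihood is d*log(lam) - lam*sum(t), maximized in closed form by lam = d/sum(t), where d is the number of observed events. The simulated rates below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(10)
lam_true = 0.25
t_event = rng.exponential(1 / lam_true, 300)     # latent survival times
t_cens = rng.exponential(1 / 0.10, 300)          # right-censoring times
t = np.minimum(t_event, t_cens)                  # observed follow-up time
delta = (t_event <= t_cens).astype(float)        # 1 = event observed

def neg_loglik(lam):
    """Exponential hazard with right censoring:
    log L = d * log(lam) - lam * sum(t), with d = number of events."""
    return -(delta.sum() * np.log(lam) - lam * t.sum())

lam_mle = minimize_scalar(neg_loglik, bounds=(1e-6, 5.0), method="bounded").x
lam_closed = delta.sum() / t.sum()               # closed-form MLE
print(round(float(lam_mle), 3), round(float(lam_closed), 3))
```

Fractional polynomial hazard models generalize this constant-hazard likelihood by letting log(lam) vary with power transforms of time and other predictors, at which point the numerical maximization (not the closed form) is what carries over.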
Funding: supported by the National Natural Science Foundation of China under Grant Nos. 71973116 and 12201018, the Postdoctoral Project in China under Grant No. 2022M720336, the National Natural Science Foundation of China under Grant Nos. 12071457 and 11971045, the Beijing Natural Science Foundation under Grant No. 1222002, and the NQI Project under Grant No. 2022YFF0609903.
Funding: supported by the National Natural Science Foundation of China Civil Aviation Joint Fund (U1833110) and the project Research on the Dual Prevention Mechanism and Intelligent Management Technology for Civil Aviation Safety Risks (YK23-03-05).
Abstract: Aviation accidents are currently one of the leading causes of significant injuries and deaths worldwide. This entices researchers to investigate aircraft safety using data analysis approaches based on advanced machine learning algorithms. To assess aviation safety and identify the causes of incidents, a classification model with a light gradient boosting machine (LGBM) based on the aviation safety reporting system (ASRS) has been developed. It is improved by k-fold cross-validation with a hybrid sampling model (HSCV), which may boost classification performance and maintain data balance. The results show that employing the LGBM-HSCV model can significantly improve accuracy while alleviating data imbalance. Vertical comparison with other cross-validation (CV) methods and lateral comparison with different fold counts comprise the comparative approach. Aside from this comparison, two further CV approaches based on the improved method in this study are discussed: one with a different sampling and folding order, and the other with more CV rounds. According to the assessment indices obtained with the different methods, the LGBM-HSCV model proposed here is effective at detecting incident causes. The improved model proposed for imbalanced data categorization may serve as a point of reference for similar data processing, and the model's accurate identification of civil aviation incident causes can help to improve civil aviation safety.
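One way to read "hybrid sampling" is oversampling the minority class and undersampling the majority class toward a common size, applied only to the training folds so the held-out fold stays untouched. A hedged sketch under that assumption (the paper's exact HSCV recipe may differ; names are illustrative):

```python
import random
from collections import Counter

def hybrid_sample(samples, labels, seed=0):
    """Balance classes toward the mean class size: undersample the
    majority, oversample the minority with replacement."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = sum(len(v) for v in by_class.values()) // len(by_class)
    xs, ys = [], []
    for y, members in by_class.items():
        if len(members) >= target:          # undersample majority class
            chosen = rng.sample(members, target)
        else:                               # oversample minority class
            chosen = members + [rng.choice(members)
                                for _ in range(target - len(members))]
        xs.extend(chosen)
        ys.extend([y] * target)
    return xs, ys
```

Inside a CV loop this would be called on each training fold before fitting the LGBM, never on the evaluation fold, so reported metrics reflect the true class distribution.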
Abstract: In deriving a regression model, analysts often have to use variable selection, despite the problems introduced by data-dependent model building. Resampling approaches have been proposed to handle some of the critical issues. In order to assess and compare several strategies, we conduct a simulation study with 15 predictors and a complex correlation structure in the linear regression model. Using sample sizes of 100 and 400 and estimates of the residual variance corresponding to R² of 0.50 and 0.71, we consider 4 scenarios with varying amounts of information. We also consider two examples with 24 and 13 predictors, respectively. We discuss the value of cross-validation, shrinkage, and backward elimination (BE) with varying significance levels. We assess whether 2-step approaches using global or parameterwise shrinkage (PWSF) can improve selected models and compare the results to models derived with the LASSO procedure. Besides the MSE, we use model sparsity and further criteria for model assessment. The amount of information in the data has an influence on the selected models and the comparison of the procedures. None of the approaches was best in all scenarios. The performance of backward elimination with a suitably chosen significance level was not worse than that of the LASSO, and the BE models selected were much sparser, an important advantage for interpretation and transportability. Compared to global shrinkage, PWSF had better performance. Provided that the amount of information is not too small, we conclude that BE followed by PWSF is a suitable approach when variable selection is a key part of data analysis.
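Backward elimination in the abstract drops predictors by p-value at a chosen significance level; a generic greedy sketch driven by an arbitrary score function (e.g., a cross-validated score) conveys the same mechanics without a full regression stack. All names here are illustrative assumptions, not the study's code:

```python
def backward_eliminate(features, score_fn):
    """Greedy backward elimination: repeatedly drop a feature whose
    removal improves the score, stopping when no single drop helps."""
    current = list(features)
    best = score_fn(current)
    improved = True
    while improved and current:
        improved = False
        for f in list(current):
            trial = [g for g in current if g != f]
            s = score_fn(trial)
            if s > best:                      # keep the improved subset
                best, current, improved = s, trial, True
                break
    return current, best
```

A score that rewards including a truly useful feature and penalises model size (mimicking a sparsity criterion) drives the search to the sparse correct subset.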
Abstract: For the nonparametric regression model Y_ni = g(x_ni) + ε_ni, i = 1, ..., n, with a regularly spaced nonrandom design, the authors study the behavior of the nonlinear wavelet estimator of g(x). When the threshold and truncation parameters are chosen by cross-validation on the average squared error, strong consistency for the case of dyadic sample size and moment consistency for arbitrary sample size are established under some regularity conditions.
Funding: supported in part by National Natural Science Foundation of China grant (11672001), Jiangsu Province Science and Technology Agency grant (BE2016785), and in part by Postgraduate Research & Practice Innovation Program of Jiangsu Province grant (KYCX18_0156).
Abstract: Background Cardiovascular diseases are closely linked to atherosclerotic plaque development and rupture. Plaque progression prediction is of fundamental significance to cardiovascular research and disease diagnosis, prevention, and treatment. Generalized linear mixed models (GLMM) are an extension of linear models for categorical responses that accounts for the correlation among observations. Methods Magnetic resonance image (MRI) data of carotid atherosclerotic plaques were acquired from 20 patients with consent obtained, and 3D thin-layer models were constructed to calculate plaque stress and strain for plaque progression prediction. Data for ten morphological and biomechanical risk factors, including wall thickness (WT), lipid percent (LP), minimum cap thickness (MinCT), plaque area (PA), plaque burden (PB), lumen area (LA), maximum plaque wall stress (MPWS), maximum plaque wall strain (MPWSn), average plaque wall stress (APWS), and average plaque wall strain (APWSn), were extracted from all slices for analysis. Wall thickness increase (WTI), plaque burden increase (PBI) and plaque area increase (PAI) were chosen as three measures of plaque progression. GLMM with a 5-fold cross-validation strategy were used to calculate prediction accuracy for each predictor and identify the optimal predictor with the highest prediction accuracy, defined as the sum of sensitivity and specificity. All 201 MRI slices were randomly divided into 4 training subgroups and 1 verification subgroup. The training subgroups were used for model fitting, and the verification subgroup was used to evaluate the model. All combinations (1023 in total) of the 10 risk factors were fed to the GLMM, and the prediction accuracy of each predictor was taken from the point on the ROC (receiver operating characteristic) curve with the highest sum of specificity and sensitivity. Results LA was the best single predictor for PBI with the highest prediction accuracy (1.3601) and an area under the ROC curve (AUC) of 0.6540, followed by APWSn (1.3363) with AUC = 0.6342. The optimal predictor among all possible combinations for PBI was the combination of LA, PA, LP, WT, MPWS and MPWSn, with prediction accuracy = 1.4146 (AUC = 0.7158). LA was once again the best single predictor for PAI with the highest prediction accuracy (1.1846) with AUC = 0.6064, followed by MPWSn (1.1832) with AUC = 0.6084. The combination of PA, PB, WT, MPWS, MPWSn and APWSn gave the best prediction accuracy (1.3025) for PAI, with an AUC value of 0.6657. PA was the best single predictor for WTI with the highest prediction accuracy (1.2887) with AUC = 0.6415, followed by WT (1.2540) with AUC = 0.6097. The combination of PA, PB, WT, LP, MinCT, MPWS and MPWSn was the best predictor for WTI, with a prediction accuracy of 1.3140 and AUC = 0.6552. This indicated that PBI was a more predictable measure than WTI and PAI. The combinational predictors improved prediction accuracy by 9.95%, 4.01% and 1.96% over the best single predictors for PAI, PBI and WTI (AUC values improved by 9.78%, 9.45%, and 2.14%), respectively. Conclusions The use of GLMM with a 5-fold cross-validation strategy combining both morphological and biomechanical risk factors could potentially improve the accuracy of carotid plaque progression prediction. This study suggests that a linear combination of multiple predictors can provide potential improvement to existing plaque assessment schemes.
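The "prediction accuracy" used above, the highest sum of sensitivity and specificity along the ROC curve, is straightforward to compute by scanning thresholds. A minimal sketch (illustrative names, not the study's code):

```python
def best_sens_plus_spec(scores, labels):
    """Scan score thresholds and return the maximum sensitivity +
    specificity, i.e. the ROC point used as 'prediction accuracy'."""
    best = 0.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        best = max(best, sens + spec)
    return best
```

Perfect separation yields 2.0; a useless score yields about 1.0, which makes the reported values between 1.18 and 1.41 easy to interpret.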
Funding: supported by the Natural Science Foundation of Fujian Province (CN) (2020I0009, 2022J01596), the Cooperation Project on University Industry-Education-Research of Fujian Provincial Science and Technology Plan (CN) (2022N5011), the Lancang-Mekong Cooperation Special Fund (2017-2020), and the International Sci-Tech Cooperation and Communication Program of Fujian Agriculture and Forestry University (KXGH17014).
Abstract: Bulked-segregant analysis by deep sequencing (BSA-seq) is a widely used method for mapping QTL (quantitative trait loci) due to its simplicity, speed, cost-effectiveness, and efficiency. However, the ability of BSA-seq to detect QTL is often limited by inappropriate experimental designs, as evidenced by numerous practical studies. Most BSA-seq studies have utilized small to medium-sized populations, with F2 populations being the most common choice. Nevertheless, theoretical studies have shown that using a large population with an appropriate pool size can significantly enhance the power and resolution of QTL detection in BSA-seq, with F3 populations offering notable advantages over F2 populations. To provide an experimental demonstration, we tested the power of BSA-seq to identify QTL controlling days from sowing to heading (DTH) in a 7200-plant rice F3 population in two environments, with a pool size of approximately 500. Each experiment identified 34 QTL, an order of magnitude more than reported in most BSA-seq experiments, of which 23 were detected in both experiments; 17 of these were located near 41 previously reported QTL and eight cloned genes known to control DTH in rice. These results indicate that QTL mapping by BSA-seq in large F3 populations and multi-environment experiments can achieve high power, resolution, and reliability.
Funding: Project (52161135301) supported by the International Cooperation and Exchange program of the National Natural Science Foundation of China; Project (202306370296) supported by the China Scholarship Council.
Abstract: Rockburst is a common geological disaster in underground engineering, which seriously threatens the safety of personnel, equipment and property. Utilizing machine learning models to evaluate rockburst risk is gradually becoming a trend. In this study, integrated algorithms under the Gradient Boosting Decision Tree (GBDT) framework were used to evaluate and classify rockburst intensity. First, a total of 301 rockburst data samples were obtained from a case database, and the data were preprocessed using the synthetic minority over-sampling technique (SMOTE). Then, rockburst evaluation models including GBDT, eXtreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and Categorical Features Gradient Boosting (CatBoost) were established, and the optimal hyperparameters of the models were obtained through random grid search and five-fold cross-validation. Afterwards, the optimal hyperparameter configurations were used to fit the evaluation models, and the models were analyzed on a test set. To evaluate performance, metrics including accuracy, precision, recall, and F1-score were selected for analysis and comparison with other machine learning models. Finally, the trained models were used to conduct rockburst risk assessment on rock samples from a mine in Shanxi Province, China, providing theoretical guidance for the mine's safe production work. The models under the GBDT framework perform well in the evaluation of rockburst levels, and the proposed methods can provide a reliable reference for rockburst risk level analysis and safety management.
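The tuning loop described, random sampling of hyperparameter configurations scored by k-fold cross-validation, can be sketched generically. The `score_fn` below is a stand-in for fitting a GBDT on each training fold; all names are illustrative assumptions:

```python
import random

def k_fold_indices(n, k=5, seed=0):
    """Shuffle sample indices and split them into k disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def random_search(param_space, score_fn, n_iter=20, seed=0):
    """Sample hyperparameter dicts at random; keep the best CV score."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_iter):
        params = {name: rng.choice(values)
                  for name, values in param_space.items()}
        score = score_fn(params)             # e.g. mean k-fold accuracy
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

In practice `score_fn` would train the model on the k - 1 training folds from `k_fold_indices` and average the validation accuracy over folds.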
Abstract: In forest science and practice, the total tree height is one of the basic morphometric attributes at the tree level, and it has been closely linked with important stand attributes. In the current research, sixteen nonlinear functions for height prediction were tested in terms of their fitting ability against samples of Abies borisii-regis and Pinus sylvestris trees from mountainous forests in central Greece. The fitting procedure was based on generalized nonlinear weighted regression. At the final stage, a five-quantile nonlinear height-diameter model was developed for both species through a quantile regression approach, to estimate the entire conditional distribution of tree height, enabling the evaluation of the diameter impact at various quantiles and providing a comprehensive understanding of the proposed relationship across the distribution. The results clearly showed that, employing the diameter as the sole independent variable, the 3-parameter Hossfeld function and the 2-parameter Näslund function managed to explain approximately 84.0% and 81.7% of the total height variance in the case of King Boris fir and Scots pine, respectively. Furthermore, the models exhibited low levels of error in both cases (2.310 m for the fir and 3.004 m for the pine), yielding unbiased predictions for both fir (-0.002 m) and pine (-0.004 m). Notably, all the required assumptions of homogeneity and normality of the associated residuals were satisfied through the weighting procedure, while the quantile regression approach provided additional insights into the height-diameter allometry of the specific species. The proposed models can become valuable tools for operational forest management planning, particularly for wood production and conservation of mountainous forest ecosystems.
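Quantile regression, as used for the five-quantile height-diameter model, replaces squared error with the asymmetric pinball loss, whose minimiser over constant predictions is the empirical tau-quantile. A minimal sketch of the loss itself (illustrative, detached from any fitting library):

```python
def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss: under-predictions are weighted by tau,
    over-predictions by (1 - tau)."""
    total = 0.0
    for y, q in zip(y_true, y_pred):
        diff = y - q
        total += tau * diff if diff >= 0 else (tau - 1) * diff
    return total / len(y_true)
```

Fitting the same height-diameter function five times, each minimising `pinball_loss` at a different tau, yields the five conditional quantile curves described in the abstract.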
Funding: The funding for this work was provided by the Research Groups Funding Program, Grant Code (NU/GP/SERC/13/30).
Abstract: Parkinson's disease (PD) is a chronic neurological condition that progresses over time. People start to have trouble speaking, writing, walking, or performing other basic skills as dopamine-generating neurons in some brain regions are injured or die. Patients' symptoms become more severe as their signs worsen over time. In this study, we applied state-of-the-art machine learning algorithms to diagnose Parkinson's disease and identify related risk factors. The research worked on a publicly available dataset on PD consisting of a set of significant PD characteristics. We aim to apply soft computing techniques and provide an effective solution for medical professionals to diagnose PD accurately. The research methodology involves developing a model using a machine learning algorithm. In the model selection, eight different machine learning techniques were adopted: namely, Random Forest (RF), Decision Tree (DT), Support Vector Machine (SVM), Naïve Bayes (NB), Light Gradient Boosting Machine (LightGBM), K-Nearest Neighbours (KNN), Extreme Gradient Boosting (XGBoost), and Logistic Regression (LR). Subsequently, the resulting models were validated through 10-fold cross-validation and the Receiver Operating Characteristic (ROC) Area Under the Curve (AUC). In addition, GridSearchCV was utilised to find each algorithm's best parameters; the models were then trained through this hyperparameter tuning approach. With 98% accuracy, LightGBM had the highest accuracy in this study. RF, KNN, and SVM came in second with 96% accuracy. Furthermore, the performance scores of NB and LR were recorded as 76% and 83%, respectively. It is to be mentioned that, after applying 10-fold cross-validation, the average performance score of LightGBM was 93%. At the same time, the ROC-AUC was 0.92, which indicates that the LightGBM model reached a satisfactory level. Finally, we extracted meaningful insights and identified potential gaps in PD research. By doing so, our study contributes to the significance and impact of PD research. The application of advanced machine learning algorithms holds promise in accurately diagnosing PD and shedding light on crucial aspects of the disease. This research has the potential to enhance the understanding and management of PD, ultimately improving the lives of individuals affected by this condition.
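The ROC-AUC reported here (0.92) has a useful probabilistic reading: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. That identity gives a tiny rank-based implementation (a generic sketch, not the study's pipeline):

```python
def roc_auc(scores, labels):
    """ROC-AUC via the rank-sum (Mann-Whitney) identity: the fraction of
    positive/negative pairs the score orders correctly, ties counted 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The pairwise form is O(n²) but transparent; production code would use the equivalent rank-based formula or a library routine.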
文摘Adaptive fractional polynomial modeling of general correlated outcomes is formulated to address nonlinearity in means, variances/dispersions, and correlations. Means and variances/dispersions are modeled using generalized linear models in fixed effects/coefficients. Correlations are modeled using random effects/coefficients. Nonlinearity is addressed using power transforms of primary (untransformed) predictors. Parameter estimation is based on extended linear mixed modeling generalizing both generalized estimating equations and linear mixed modeling. Models are evaluated using likelihood cross-validation (LCV) scores and are generated adaptively using a heuristic search controlled by LCV scores. Cases covered include linear, Poisson, logistic, exponential, and discrete regression of correlated continuous, count/rate, dichotomous, positive continuous, and discrete numeric outcomes treated as normally, Poisson, Bernoulli, exponentially, and discrete numerically distributed, respectively. Example analyses are also generated for these five cases to compare adaptive random effects/coefficients modeling of correlated outcomes to previously developed adaptive modeling based on directly specified covariance structures. Adaptive random effects/coefficients modeling substantially outperforms direct covariance modeling in the linear, exponential, and discrete regression example analyses. It generates equivalent results in the logistic regression example analyses and it is substantially outperformed in the Poisson regression case. Random effects/coefficients modeling of correlated outcomes can provide substantial improvements in model selection compared to directly specified covariance modeling. However, directly specified covariance modeling can generate competitive or substantially better results in some cases while usually requiring less computation time.
Abstract: In real-world applications, datasets frequently contain outliers, which can hinder the generalization ability of machine learning models. Bayesian classifiers, a popular supervised learning method, rely on accurate probability density estimation for classifying continuous datasets. However, achieving precise density estimation on datasets containing outliers poses a significant challenge. This paper introduces a Bayesian classifier that utilizes optimized robust kernel density estimation to address this issue. Our proposed method enhances the accuracy of probability density estimation by mitigating the impact of outliers on the training sample's estimated distribution. Unlike the conventional kernel density estimator, our robust estimator can be seen as a weighted kernel mapping summary for each sample. This kernel mapping performs the inner product in the Hilbert space, allowing the kernel density estimate to be considered the average of the samples' mappings in the Hilbert space using a reproducing kernel. M-estimation techniques are used to obtain accurate mean values and solve for the weights. Meanwhile, complete cross-validation is used as the objective function in the search for the optimal bandwidth, which affects the estimator. Harris Hawks Optimization optimizes the objective function to improve the estimation accuracy. The experimental results show that it outperforms other optimization algorithms regarding convergence speed and objective function value during the bandwidth search. The optimal robust kernel density estimator achieves better fitting performance than the traditional kernel density estimator when the training data contain outliers. The naïve Bayes classifier with optimal robust kernel density estimation improves generalization in classification with outliers.
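The core idea, a weighted kernel density estimate whose weights are iteratively reweighted to downweight outliers, can be illustrated with a deliberately simplified scheme: points with low density under the current weighted estimate lose weight. This is my own toy reweighting, not the paper's M-estimator, and the Harris Hawks bandwidth search is omitted entirely:

```python
import math

def kde(x, data, h, weights=None):
    """Weighted Gaussian kernel density estimate at point x."""
    if weights is None:
        weights = [1.0 / len(data)] * len(data)
    return sum(w * math.exp(-0.5 * ((x - d) / h) ** 2)
               / (h * math.sqrt(2 * math.pi))
               for w, d in zip(weights, data))

def robust_weights(data, h, n_iter=5):
    """Toy iterative reweighting: each point's weight is proportional to
    its density under the current weighted KDE, so isolated outliers are
    progressively downweighted."""
    n = len(data)
    w = [1.0 / n] * n
    for _ in range(n_iter):
        dens = [kde(d, data, h, w) for d in data]
        total = sum(dens)
        w = [d / total for d in dens]        # renormalise to a simplex
    return w
```

On a cluster of inliers plus one distant outlier, the outlier ends up with by far the smallest weight, which is the qualitative behaviour the robust estimator targets.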
Funding: supported by the US Department of Agriculture, Agriculture and Food Research Initiative, National Institute of Food and Agriculture Competitive grant no. 2015-67015-22947.
Abstract: Background: A random multiple-regression model that simultaneously fits all allele substitution effects for additive markers or haplotypes as uncorrelated random effects was proposed for Best Linear Unbiased Prediction using whole-genome data. Leave-one-out cross-validation can be used to quantify the predictive ability of a statistical model. Methods: Naive application of leave-one-out cross-validation is computationally intensive because the training and validation analyses need to be repeated n times, once for each observation. Efficient leave-one-out cross-validation strategies are presented here, requiring little more effort than a single analysis. Results: The efficient leave-one-out cross-validation strategy is 786 times faster than the naive application for a simulated dataset with 1,000 observations and 10,000 markers, and 99 times faster with 1,000 observations and 100 markers. These efficiencies relative to the naive approach using the same model will increase with the number of observations. Conclusions: Efficient leave-one-out cross-validation strategies are presented here, requiring little more effort than a single analysis.
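The flavour of such shortcuts is visible already in ordinary least squares, where the classic hat-matrix identity e_i / (1 - h_ii) yields all leave-one-out residuals from a single fit; the paper develops analogous single-analysis strategies for the whole-genome random-effects model. A sketch verifying the OLS identity against the naive n-refit loop (an OLS stand-in, not the genomic model itself):

```python
import numpy as np

def loo_residuals_fast(X, y):
    """All LOO residuals from one fit: e_i / (1 - h_ii), where h_ii are
    the leverages of the hat matrix H = X (X'X)^{-1} X'."""
    XtX_inv = np.linalg.inv(X.T @ X)
    H = X @ XtX_inv @ X.T
    e = y - H @ y                    # ordinary residuals
    return e / (1.0 - np.diag(H))

def loo_residuals_naive(X, y):
    """Reference: refit n times, each time predicting the held-out row."""
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        out[i] = y[i] - X[i] @ beta
    return out
```

The fast version costs one decomposition instead of n, which is the same order of saving (hundreds-fold for n = 1,000) the abstract reports for its random-effects setting.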
Abstract: The use of machine learning to predict student employability is important in order to analyse a student's capability to get a job. Based on the results of this type of analysis, university managers can improve the employability of their students, which can help in attracting students in the future. In addition, learners can focus during their studies on the essential skills identified through this analysis, to increase their employability. An effective method called OPT-BAG (OPTimisation of BAGging classifiers) was therefore developed to model the problem of predicting the employability of students. This model can help predict the employability of students based on their competencies and can reveal weaknesses that need to be improved. First, we analyse the relationships between several variables and the outcome variable using a correlation heatmap for a student employability dataset. Next, a standard scaler function is applied in the preprocessing module to normalise the variables in the student employability dataset. The training set is then input to our model to identify the optimal parameters for the bagging classifier using a grid search cross-validation technique. Finally, the OPT-BAG model, based on a bagging classifier with the optimal parameters found in the previous step, is trained on the training dataset to predict student employability. The empirical outcomes in terms of accuracy, precision, recall, and F1 indicate that the OPT-BAG approach outperforms other cutting-edge machine learning models in predicting student employability. In this study, we also analyse the factors affecting the recruitment process of employers, and find that general appearance, mental alertness, and communication skills are the most important. This indicates that educational institutions should focus on these factors during the learning process to improve student employability.
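Bagging itself, the component being tuned above, is just bootstrap resampling plus a majority vote over base learners. A self-contained sketch (the base learner and all names are illustrative stand-ins; the grid search over ensemble parameters described in the abstract is omitted):

```python
import random
from collections import Counter

def fit_bagging(data, labels, fit_one, n_estimators=25, seed=0):
    """Fit each base learner on a bootstrap resample of the training set."""
    rng = random.Random(seed)
    n = len(data)
    models = []
    for _ in range(n_estimators):
        idx = [rng.randrange(n) for _ in range(n)]   # sample with replacement
        models.append(fit_one([data[i] for i in idx],
                              [labels[i] for i in idx]))
    return models

def bag_predict(models, x):
    """Majority vote over the ensemble's predictions."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]
```

A grid search, as in OPT-BAG, would then cross-validate over choices such as `n_estimators` and the base learner's own parameters.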
Funding: supported financially by the Ministerio de Ciencia e Innovación (Spain) and the European Regional Development Fund under the Research Grant WindSound Project (Ref.: PID2021-125278OB-I00).
Abstract: Maintenance operations have a critical influence on power generation by wind turbines (WT). Advanced algorithms must analyze large volumes of data from condition monitoring systems (CMS) to determine the actual working conditions and avoid false alarms. This paper proposes different support vector machine (SVM) algorithms for the prediction and detection of false alarms. K-fold cross-validation (CV) is applied to evaluate the classification reliability of these algorithms. Supervisory Control and Data Acquisition (SCADA) data from an operating WT are used to test the proposed approach. The results from the quadratic SVM showed an accuracy rate of 98.6%. Misclassifications from the confusion matrix, alarm log and maintenance records are analyzed to obtain quantitative information and determine whether an alarm is false. The classifier reduces the number of false alarms, i.e., misclassifications, by 25%. These results demonstrate that the proposed approach offers high reliability and accuracy in false alarm identification.
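The misclassification analysis against the alarm log reduces to confusion-matrix bookkeeping: counting true/false alarms and the rate at which normal operation is wrongly flagged. A generic sketch (illustrative names, not the paper's code):

```python
def confusion_counts(pred, true, alarm=1):
    """2x2 confusion counts, treating `alarm` as the positive class."""
    tp = sum(p == alarm and t == alarm for p, t in zip(pred, true))
    fp = sum(p == alarm and t != alarm for p, t in zip(pred, true))
    tn = sum(p != alarm and t != alarm for p, t in zip(pred, true))
    fn = sum(p != alarm and t == alarm for p, t in zip(pred, true))
    return tp, fp, tn, fn

def false_alarm_rate(pred, true):
    """Share of truly normal states flagged as alarms: FP / (FP + TN)."""
    _, fp, tn, _ = confusion_counts(pred, true)
    return fp / (fp + tn) if fp + tn else 0.0
```

Cross-referencing the FP cases against maintenance records, as the paper does, turns this rate into actionable information about which flagged alarms are genuinely false.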
Abstract: BACKGROUND Our study expands upon a large body of evidence in the field of neuropsychiatric imaging with cognitive, affective and behavioral tasks adapted for the functional magnetic resonance imaging (fMRI) experimental environment. There is sufficient evidence that common networks underpin activations in task-based fMRI across different mental disorders. AIM To investigate whether specific neural circuits underpin differential item responses to depressive, paranoid and neutral (DN) items in patients with schizophrenia (SCZ) and major depressive disorder (MDD), respectively. METHODS 60 patients with SCZ or MDD were recruited. All patients were scanned on a 3T magnetic resonance tomography platform with an fMRI paradigm comprising a block design, including blocks with items from a diagnostic paranoid (DP) scale, a depression-specific (DS) scale, and DN items from a general interest scale. We performed a two-sample t-test between the two groups, SCZ patients and depressive patients. Our purpose was to observe the different brain networks activated during each specific condition of the task, respectively DS, DP, and DN. RESULTS Several significant results emerged from the comparison between the SCZ and depressive groups while performing this task. We identified one component that is task-related and independent of condition (shared between all three conditions), composed of regions within the temporal (right superior and middle temporal gyri), frontal (left middle and inferior frontal gyri) and limbic/salience systems (right anterior insula). Another component is related to both diagnosis-specific conditions (DS and DP), i.e., it is shared between MDD and SCZ, and includes frontal motor/language and parietal areas. One specific component is modulated preferentially by the DP condition and is related mainly to prefrontal regions, whereas two other components are significantly modulated by the DS condition and include clusters within the default mode network, such as the posterior cingulate and precuneus, several occipital areas, including the lingual and fusiform gyri, as well as the parahippocampal gyrus. Finally, component 12 appeared to be unique to the neutral condition. In addition, circuits across components were determined to be either common or distinct in the preferential processing of the sub-scales of the task. CONCLUSION This study delivers further evidence in support of the model of trans-disciplinary cross-validation in psychiatry.