Traditional methods for selecting models in experimental data analysis are susceptible to researcher bias, hindering exploration of alternative explanations and potentially leading to overfitting. The Finite Information Quantity (FIQ) approach offers a novel solution by acknowledging the inherent limitations in the information-processing capacity of physical systems. This framework facilitates the development of objective criteria for model selection (comparative uncertainty) and paves the way for a more comprehensive understanding of phenomena through exploring diverse explanations. This work presents a detailed comparison of the FIQ approach with ten established model selection methods, highlighting the advantages and limitations of each. We demonstrate the potential of FIQ to enhance the objectivity and robustness of scientific inquiry through three practical examples: selecting appropriate models for measuring fundamental constants, sound velocity, and underwater electrical discharges. Further research is warranted to explore the full applicability of FIQ across various scientific disciplines.
This paper outlines the configuration and performance of large gas turbines, and of the combined cycle power plants built around them, designed and produced by four renowned gas turbine manufacturers worldwide, providing a reference for the relevant sectors and enterprises when importing advanced gas turbines and technologies.
It is quite common in statistical modeling to select a model and then make inference as if the model had been known in advance, i.e., ignoring model selection uncertainty. The resulting estimator is called the post-model-selection estimator (PMSE), and its properties are hard to derive. Conditioning on the data at hand (as is usually the case), Bayesian model selection is free of this phenomenon. This paper is concerned with the properties of the Bayesian estimator obtained after model selection when the frequentist (long-run) performance of that estimator is of interest. The proposed method, based on Bayesian decision theory, uses the machinery of the well-known Bayesian model averaging (BMA) and outperforms both PMSE and BMA. It is shown that if the unconditional model selection probability equals the model prior, then the proposed approach reduces to BMA. The method is illustrated using Bernoulli trials.
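For reference, the standard BMA machinery on which this construction builds combines posterior-weighted estimates from the candidate models; in conventional textbook notation (not necessarily the paper's exact notation),

    \hat{\theta}_{\mathrm{BMA}} = \sum_{k=1}^{K} P(M_k \mid y)\, \mathbb{E}[\theta \mid M_k, y],
    \qquad
    P(M_k \mid y) = \frac{p(y \mid M_k)\, P(M_k)}{\sum_{j=1}^{K} p(y \mid M_j)\, P(M_j)},

where M_1, ..., M_K are the candidate models and y the observed data.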
Traditional model selection criteria try to balance fitted error against model complexity, and they require assumptions on the distribution of the response or the noise, which may be misspecified, before they can be used. In this article, we give a new model selection criterion that requires fewer assumptions: under the assumption that the noise term in the model is independent of the explanatory variables, it minimizes the strength of association between the regression residuals and the response. The Maximal Information Coefficient (MIC), a recently proposed dependence measure, captures a wide range of associations and gives almost the same score to different types of relationships with equal noise, so MIC is used to measure the association strength. Furthermore, the partial maximal information coefficient (PMIC) is introduced to capture the association between two variables after removing a third, controlling random variable. In addition, the definition of a general partial relationship is given.
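A minimal sketch of a residual-dependence criterion in this spirit is shown below. It is illustrative only: it uses scikit-learn's mutual_info_regression as a stand-in dependence measure, since computing MIC itself would require a dedicated package (e.g. minepy), and the toy data are synthetic.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.feature_selection import mutual_info_regression

    def dependence_score(X, y, model):
        # Fit the candidate model and measure how strongly its residuals
        # remain associated with the response (smaller is better here).
        residuals = y - model.fit(X, y).predict(X)
        # mutual information used as a stand-in for MIC
        return mutual_info_regression(residuals.reshape(-1, 1), y, random_state=0)[0]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200)

    # candidate models: all three predictors versus only the first one
    print(dependence_score(X, y, LinearRegression()))
    print(dependence_score(X[:, :1], y, LinearRegression()))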
Covariance functions have been proposed as an alternative for modeling longitudinal data in animal breeding because of their various merits in comparison with the classical analytical methods. In practical estimation, the models and polynomial orders fitted can influence the estimates of covariance functions and thus of genetic parameters. The objective of this study was to select a model for estimating covariance functions for body weights of Angora goats at 7 time points. Covariance functions were estimated by fitting 6 random regression models with birth year, birth month, sex, age of dam, birth type, and relative birth date as fixed effects. The random effects involved were direct and maternal additive genetic effects, and animal and maternal permanent environmental effects, with different orders of fit. Selection of the model and orders of fit was carried out by likelihood ratio tests and 4 types of information criteria. The results showed that the model with polynomials of order 6 for the direct additive genetic and animal permanent environmental effects, and of orders 4 and 5 for the maternal genetic and maternal permanent environmental effects, respectively, was preferable for estimating covariance functions. Models with and without maternal effects influenced the estimates of covariance functions greatly. The maternal permanent environmental effect does not explain all of the permanent environmental variation, suggesting that other sources of permanent environmental effects also have a large influence on covariance function estimates.
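The likelihood ratio test used to compare nested orders of fit can be illustrated as follows; the log-likelihood values below are hypothetical, not results from the study.

    from scipy.stats import chi2

    def likelihood_ratio_test(llf_reduced, llf_full, extra_params):
        # Compare two nested models fitted by maximum likelihood.
        stat = 2.0 * (llf_full - llf_reduced)      # LRT statistic
        p_value = chi2.sf(stat, df=extra_params)   # chi-square reference distribution
        return stat, p_value

    # hypothetical log-likelihoods for a lower- and a higher-order polynomial fit
    stat, p = likelihood_ratio_test(llf_reduced=-1520.4, llf_full=-1510.9, extra_params=3)
    print(stat, p)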
This paper proposes a new search strategy using a mutative scale chaos optimization algorithm (MSCO) for model selection of support vector machines (SVM). It searches the parameter space of the SVM with very high efficiency and finds the optimum parameter setting for a practical classification problem at very low time cost. To demonstrate the performance of the proposed method, it is applied to SVM model selection in ultrasonic flaw classification and compared with grid search. Experimental results show that MSCO is a very powerful tool for SVM model selection and outperforms grid search in both search speed and precision in ultrasonic flaw classification.
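For context, the grid-search baseline that MSCO is compared against can be sketched with scikit-learn as below; the data and parameter grid are illustrative stand-ins, not the paper's ultrasonic flaw dataset or settings.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    # toy stand-in for the ultrasonic flaw classification data
    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    # exhaustive grid search over the two RBF-SVM hyperparameters
    param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)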
To solve the medium- and long-term power load forecasting problem, the combination forecasting method is further extended and a weighted combination forecasting model for power load is put forward. The model is divided into two stages: forecasting model selection and weighted combination forecasting. Based on Markov chain conversion and the cloud model, forecasting model selection is implemented and several outstanding models are selected for the combination forecast. For the weighted combination forecasting, a fuzzy scale joint evaluation method is proposed to determine the weights of the selected forecasting models. The percentage error and mean absolute percentage error of the weighted combination forecast of power consumption in a certain area of China are 0.7439% and 0.3198%, respectively, while the maximum values of these two indices for the single forecasting models are 5.2278% and 1.9497%. This shows that the forecasting indices of the proposed model improve significantly on those of the single forecasting models.
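The two error indices reported above are conventionally computed as in the sketch below; the series shown is hypothetical, and the paper's exact definition of the percentage error may differ in detail.

    import numpy as np

    def percentage_error(actual, forecast):
        # signed percentage error of the aggregate forecast
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * (forecast.sum() - actual.sum()) / actual.sum()

    def mape(actual, forecast):
        # mean absolute percentage error
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    # hypothetical annual power-consumption series and a combined forecast
    actual   = [102.3, 108.9, 115.6, 123.1]
    forecast = [101.8, 109.4, 115.1, 123.9]
    print(percentage_error(actual, forecast), mape(actual, forecast))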
Regional climate change impact assessments are becoming increasingly important for developing adaptation strategies in an uncertain future with respect to hydro-climatic extremes. There are a number of Global Climate Models (GCMs) and emission scenarios providing predictions of future changes in climate. As a result, there is a level of uncertainty associated with the decision of which climate models to use for the assessment of climate change impacts. The IPCC has recommended using as many global climate model scenarios as possible; however, this approach may be impractical for regional assessments that are computationally demanding. Methods have been developed to select climate model scenarios, generally consisting of selecting a model with the highest skill (validation), creating an ensemble, or selecting one or more extremes. Validation methods limit analyses to models with higher skill in simulating historical climate, ensemble methods typically take multi-model means, medians, or percentiles, and extremes methods tend to use scenarios which bound the projected changes in precipitation and temperature. In this paper, a quantile-regression-based validation method is developed and applied to generate a reduced set of GCM scenarios for analyzing daily maximum streamflow uncertainty in the Upper Thames River Basin, Canada, while extremes and percentile ensemble approaches are also used for comparison. Results indicate that the validation method was able to effectively rank and reduce the set of scenarios, while the extremes and percentile ensemble methods were found not to necessarily correlate well with the range of extreme flows for all calendar months and return periods.
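As a generic illustration of quantile-regression-based validation — a sketch on synthetic data, not the paper's procedure — one can regress observed values on GCM-simulated values at several quantiles and treat slopes near one as an indication of higher skill:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    observed = rng.gamma(shape=2.0, scale=10.0, size=500)     # stand-in for an observed series
    simulated = 0.9 * observed + rng.normal(0.0, 2.0, 500)    # stand-in for a GCM-simulated series

    X = sm.add_constant(simulated)
    for q in (0.1, 0.5, 0.9):
        res = sm.QuantReg(observed, X).fit(q=q)
        print(q, res.params[1])   # slope at each quantile; closer to 1 suggests higher skill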
We focus on the development of model selection criteria in linear mixed models. In particular, we propose model selection criteria following Mallows' Conceptual Predictive Statistic (Cp) [1] [2] in linear mixed models. When correlation exists between the observations in the data, the normal Gauss discrepancy of the univariate case is not appropriate for measuring the distance between the true model and a candidate model. Instead, we define a marginal Gauss discrepancy which takes the correlation into account in the mixed models. The model selection criterion, marginal Cp, called MCp, serves as an asymptotically unbiased estimator of the expected marginal Gauss discrepancy. An improvement of MCp, called IMCp, is then derived and proved to be a more accurate estimator of the expected marginal Gauss discrepancy than MCp. The performance of the proposed criteria is investigated in a simulation study. The simulation results show that in small samples, the proposed criteria outperform the Akaike Information Criterion (AIC) [3] [4] and the Bayesian Information Criterion (BIC) [5] in selecting the correct model; in large samples, their performance is competitive. Further, the proposed criteria perform significantly better for highly correlated response data than for weakly correlated data.
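For orientation, the classical Mallows' Cp for ordinary linear regression with uncorrelated errors — the quantity whose underlying Gauss discrepancy the marginal criteria generalize — is usually written as

    C_p = \frac{\mathrm{SSE}_p}{\hat{\sigma}^2} - n + 2p,

where SSE_p is the residual sum of squares of a candidate model with p parameters, \hat{\sigma}^2 is the error variance estimated from the full model, and n is the sample size.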
Ongoing research on model choice and selection has generated a plethora of approaches. With such a wealth of methods, it can be difficult for a researcher to know which model selection approach is the proper way to select an appropriate model for prediction. The authors present an evaluation of various model selection criteria from a decision-theoretic perspective using experimental data, in order to define and recommend a criterion for selecting the best model. In this analysis, six of the most common selection criteria, nineteen friction factor correlations, and eight sets of experimental data are employed. The results show that while the use of the traditional correlation coefficient R2 is inappropriate, the root mean square error (RMSE) can be used to rank models but does not give much insight into their accuracy. Other criteria such as the correlation ratio, mean absolute error, and standard deviation are also evaluated. The AIC (Akaike Information Criterion) has shown its superiority over the other selection criteria, and the authors propose the AIC as the alternative to use when fitting experimental data or evaluating existing correlations; indeed, the AIC is information-theory based, theoretically sound, and stable. The paper presents a detailed discussion of the model selection criteria, their pros and cons, and how they can be utilized to allow proper comparison of different models so that the best model can be inferred on sound mathematical grounds. In conclusion, model selection is an interesting problem, and an innovative strategy is introduced to help alleviate similar challenges faced by professionals in the oil and gas industry.
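When models are fitted by least squares, the AIC recommended here is commonly computed from the residual sum of squares; a minimal sketch (generic formulae, not the authors' code) is shown below. Candidate correlations are then ranked by AIC, with lower values preferred.

    import numpy as np

    def aic_from_residuals(residuals, n_params):
        # AIC for a least-squares fit, up to a model-independent constant:
        # AIC = n*ln(RSS/n) + 2k, with k the number of fitted parameters
        residuals = np.asarray(residuals, float)
        n = residuals.size
        rss = np.sum(residuals ** 2)
        return n * np.log(rss / n) + 2 * n_params

    def aicc_from_residuals(residuals, n_params):
        # small-sample corrected AIC
        n = len(residuals)
        correction = 2 * n_params * (n_params + 1) / (n - n_params - 1)
        return aic_from_residuals(residuals, n_params) + correction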
Parkinson's disease (PD) is a neurodegenerative disorder characterized by motor and non-motor symptoms that significantly impact an individual's quality of life. Voice changes have shown promise as early indicators of PD, making voice analysis a valuable tool for early detection and intervention. This study aims to assess and detect the severity of PD through voice analysis using the mobile device voice recordings dataset. The dataset consists of recordings from PD patients at different stages of the disease and from healthy control subjects. A novel approach was employed, incorporating a voice activity detection algorithm for speech segmentation and the wavelet scattering transform for feature extraction. A Bayesian optimization technique is used to fine-tune the hyperparameters of seven commonly used classifiers and optimize the performance of the machine learning classifiers for PD severity detection. Among the classifiers, AdaBoost and K-nearest neighbors consistently demonstrated superior performance across various evaluation metrics. Furthermore, a weighted majority voting (WMV) technique is implemented, leveraging the predictions of multiple models to achieve a near-perfect accuracy of 98.62%, improving classification accuracy. The results highlight the promising potential of voice analysis in PD diagnosis and monitoring. Integrating advanced signal processing techniques and machine learning models provides reliable and accessible tools for PD assessment, facilitating early intervention and improving patient outcomes. This study contributes to the field by demonstrating the effectiveness of the proposed methodology and the significant role of WMV in enhancing classification accuracy for PD severity detection.
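The weighted majority voting step can be illustrated with the following sketch; the per-model weights (e.g. validation accuracies) and predictions are hypothetical, and the study's exact weighting scheme may differ.

    import numpy as np

    def weighted_majority_vote(predictions, weights):
        # predictions: (n_models, n_samples) array of integer class labels
        # weights:     per-model weights, e.g. validation accuracies
        predictions = np.asarray(predictions)
        weights = np.asarray(weights, float)
        n_samples = predictions.shape[1]
        n_classes = predictions.max() + 1
        # accumulate each model's weight on the class it predicts, per sample
        scores = np.zeros((n_samples, n_classes))
        for model_pred, w in zip(predictions, weights):
            scores[np.arange(n_samples), model_pred] += w
        return scores.argmax(axis=1)

    preds = [[0, 1, 1, 2], [0, 1, 2, 2], [1, 1, 2, 0]]   # three classifiers, four samples
    print(weighted_majority_vote(preds, weights=[0.95, 0.90, 0.80]))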
Spatial and spatiotemporal autoregressive conditional heteroscedasticity (STARCH) models are receiving increasing attention. In this paper, we introduce a spatiotemporal autoregressive (STAR) model with STARCH errors, which can capture the spatiotemporal dependence in the mean and the variance simultaneously. Bayesian estimation and model selection are considered for our model. Monte Carlo simulations show that the Bayesian estimator performs better than the corresponding maximum-likelihood estimator, and that the Bayesian model selection selects the true model most of the time. Finally, two empirical examples are given to illustrate the superiority of our models in fitting such data.
The optimal selection of a radar clutter model is a prerequisite for target detection, tracking, recognition, and cognitive waveform design against a clutter background. Clutter characterization models are usually derived by mathematical simplification or by fitting empirical data. However, the lack of standard model labels is a challenge in the optimal selection process. To solve this problem, a general three-level evaluation system for model selection performance is proposed, comprising a model selection accuracy index based on simulation data, goodness-of-fit indices based on the optimally selected model, and an evaluation index based on the supporting performance provided to third-party tasks. The three-level evaluation system can describe the selection performance of radar clutter models more comprehensively and accurately in different ways, and can be extended to the evaluation of other, similar characterization model selection problems.
In a competitive digital age where data volumes increase with time, the ability to extract meaningful knowledge from high-dimensional data using machine learning (ML) and data mining (DM) techniques, and to make decisions based on the extracted knowledge, is becoming increasingly important in all business domains. Nevertheless, high-dimensional data remains a major challenge for classification algorithms due to its high computational cost and storage requirements. The 2016 Demographic and Health Survey of Ethiopia (EDHS 2016), the publicly available data source for this study, contains several features that may not be relevant to the prediction task. In this paper, we developed a hybrid multidimensional metrics framework for predictive modeling, covering both model performance evaluation and feature selection, to overcome the feature selection challenges and select the best model among the models available in DM and ML. The proposed hybrid metrics were used to measure the efficiency of the predictive models. Experimental results show that the decision tree algorithm is the most efficient model. The higher score of HMM (m, r) = 0.47 indicates an overall significant model that encompasses almost all of the user's requirements, unlike the classical metrics that use a single criterion to select the most appropriate model. On the other hand, the ANNs were found to be the most computationally intensive for our prediction task. Moreover, the type of data and the class size of the dataset (unbalanced data) have a significant impact on the efficiency of the model, especially on the computational cost, and can hamper the interpretability of the model parameters. The efficiency of the predictive model could be further improved with other feature selection algorithms (especially hybrid metrics) that involve experts of the knowledge domain, as understanding of the business domain has a significant impact.
Soybean frogeye leaf spot (FLS) is a global disease affecting soybean yield, especially in the soybean growing area of Heilongjiang Province. In order to realize genomic selection breeding for FLS resistance in soybean, least absolute shrinkage and selection operator (LASSO) regression and stepwise regression were combined, and a genomic selection model was established linking 40 002 SNP markers covering the soybean genome to the relative lesion area of soybean FLS. As a result, 68 molecular markers controlling soybean FLS were detected accurately, and the phenotypic contribution rate of these markers reached 82.45%. The model established in this study can be used directly to evaluate soybean FLS resistance and to select superior offspring. This approach may also provide ideas and methods for disease-resistance breeding in other plants.
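The LASSO stage of such a genomic selection pipeline might be sketched with scikit-learn as below; the genotypes, marker counts, and settings are synthetic illustrations, not the study's data or configuration.

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    n_lines, n_snps = 200, 1000                  # synthetic stand-in for the SNP panel
    X = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)   # genotypes coded 0/1/2
    true_effects = np.zeros(n_snps)
    true_effects[:20] = rng.normal(0, 0.5, 20)   # a few causal markers
    y = X @ true_effects + rng.normal(0, 1.0, n_lines)   # simulated relative lesion area

    lasso = LassoCV(cv=5).fit(X, y)
    selected = np.flatnonzero(lasso.coef_ != 0)
    # markers retained by LASSO could then be refined by stepwise regression
    print(selected.size, "markers retained")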
KaKs_Calculator is a software package that calculates nonsynonymous (Ka) and synonymous (Ks) substitution rates through model selection and model averaging. Since existing methods for this estimation adopt their own specific mutation (substitution) models that consider different evolutionary features, leading to diverse estimates, KaKs_Calculator implements a set of candidate models in a maximum likelihood framework and adopts the Akaike information criterion to measure the fit between models and data, aiming to include as many features as needed to accurately capture the evolutionary information in protein-coding sequences. In addition, several existing methods for calculating Ka and Ks are incorporated into the software. KaKs_Calculator, including source code, compiled executables, and documentation, is freely available for academic use at http://evolution.genomics.org.cn/software.htm.
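Model averaging based on the AIC is typically done through Akaike weights; the minimal sketch below shows the usual computation (it is a generic illustration, not KaKs_Calculator's code).

    import numpy as np

    def akaike_weights(aic_values):
        # w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2), delta_i = AIC_i - min(AIC)
        aic = np.asarray(aic_values, float)
        delta = aic - aic.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # hypothetical AIC scores for a set of candidate substitution models
    print(akaike_weights([1002.1, 1000.3, 1005.7, 1000.9]))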
The performance of six statistical approaches that can be used to select the best model for describing the growth of individual fish was analyzed using simulated and real length-at-age data. The six approaches are the coefficient of determination (R2), the adjusted coefficient of determination (adj.-R2), the root mean squared error (RMSE), Akaike's information criterion (AIC), the bias-corrected AIC (AICc), and the Bayesian information criterion (BIC). The simulation data were generated by five growth models with different numbers of parameters. Four sets of real data were taken from the literature. The parameters in each of the five growth models were estimated using the maximum likelihood method under the assumption of an additive error structure for the data. The model best supported by the data was identified using each of the six approaches. The results show that R2 and RMSE have the same properties and perform worst. The sample size affects the performance of adj.-R2, AIC, AICc, and BIC. Adj.-R2 does better in small samples than in large samples. AIC is not suitable for small samples and tends to select a more complex model as the sample size becomes large. AICc and BIC perform best in the small- and large-sample cases, respectively. Use of AICc or BIC is recommended for selecting a fish growth model, according to the size of the length-at-age data.
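For reference, the three information criteria compared above are conventionally defined as follows, with L the maximized likelihood, k the number of estimated parameters, and n the sample size:

    \mathrm{AIC} = -2\ln L + 2k, \qquad
    \mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1}, \qquad
    \mathrm{BIC} = -2\ln L + k\ln n.

The small-sample correction in AICc grows as k approaches n, which is consistent with its better behavior for short length-at-age series, while the ln n penalty in BIC increasingly favors simpler models as n grows.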
A powerful investigative tool in biology is to consider not a single mathematical model but a collection of models designed to explore different working hypotheses, and to select the best model in that collection. In these lecture notes, the usual workflow for using mathematical models to investigate a biological problem is described and the use of a collection of models is motivated. Models depend on parameters that must be estimated from observations, and when a collection of models is considered, the best model then has to be identified based on the available observations. Hence, model calibration and selection, which are intrinsically linked, are essential steps of the workflow. Here, some procedures for model calibration and a model selection criterion, the Akaike Information Criterion, based on experimental data are described. A rough derivation, a practical computation technique, and the use of this criterion are detailed.
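The workflow described — fit every model in the collection to the data, then compare them by AIC — can be sketched as follows; the toy data and candidate growth functions are illustrative, not taken from the notes.

    import numpy as np
    from scipy.optimize import curve_fit

    def aic_least_squares(y, y_hat, k):
        # AIC for a least-squares fit, up to an additive constant
        n = len(y)
        rss = np.sum((y - y_hat) ** 2)
        return n * np.log(rss / n) + 2 * k

    # two working hypotheses: exponential growth versus logistic growth
    candidates = {
        "exponential": (lambda t, a, r: a * np.exp(r * t), [1.0, 0.1]),
        "logistic": (lambda t, K, r, t0: K / (1 + np.exp(-r * (t - t0))), [10.0, 0.5, 5.0]),
    }

    t = np.linspace(0, 10, 40)
    y = 8 / (1 + np.exp(-0.9 * (t - 4))) + np.random.default_rng(1).normal(0, 0.3, t.size)

    for name, (f, p0) in candidates.items():
        params, _ = curve_fit(f, t, y, p0=p0, maxfev=10000)
        print(name, aic_least_squares(y, f(t, *params), k=len(params)))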
In this paper, we propose an information-theoretic-criterion-based model selection procedure for log-linear models of contingency tables under multinomial sampling, and establish the strong consistency of the method under some mild conditions. An exponential bound on the miss detection probability is also obtained. The selection procedure is modified so that it can be used in practice, and simulations show that the modified method is valid. To avoid having to select the penalty coefficient in the information criteria, an alternative selection procedure is given.
We study the law of the iterated logarithm (LIL) for the maximum likelihood estimation of the parameters (as a convex optimization problem) in generalized linear models with independent or weakly dependent (ρ-mixing) responses under mild conditions. The LIL is useful to derive the asymptotic bounds for the discrepancy between the empirical process of the log-likelihood function and the true log-likelihood. The strong consistency of some penalized likelihood-based model selection criteria can be shown as an application of the LIL. Under some regularity conditions, the model selection criterion will be helpful to select the simplest correct model almost surely when the penalty term increases with the model dimension, and the penalty term has an order higher than O(log log n) but lower than O(n). Simulation studies are implemented to verify the selection consistency of the Bayesian information criterion.
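In the notation suggested by the abstract, the penalized likelihood criteria in question take the generic form below (a schematic statement; the paper's exact formulation may differ):

    \mathrm{IC}(m) = -2\,\ell_n(\hat{\theta}_m) + \lambda_n\,|m|,
    \qquad \log\log n = o(\lambda_n), \quad \lambda_n = o(n),

where \ell_n is the log-likelihood, |m| the dimension of model m, and the two rate conditions on the penalty \lambda_n are those under which the criterion selects the simplest correct model almost surely; BIC corresponds to \lambda_n = \log n.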