In this paper, the high-level knowledge of financial data modeled by ordinary differential equations (ODEs) is discovered in dynamic data by using an asynchronous parallel evolutionary modeling algorithm (APHEMA). A numerical example of Nasdaq index analysis is used to demonstrate the potential of APHEMA. The results show that the dynamic models automatically discovered in dynamic data by computer can be used to predict financial trends.
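The abstract above gives no algorithmic detail. As a rough illustration of the underlying idea, evolving the parameters of an ODE model until its simulated trajectory fits an observed series, here is a minimal serial sketch. The model form dx/dt = a·x + b, the synthetic data, and the (1+1) evolution strategy are all illustrative assumptions, not the authors' asynchronous parallel method:

```python
import random

def simulate(a, b, x0, n, dt=1.0):
    """Integrate dx/dt = a*x + b with the forward Euler method."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(xs[-1] + dt * (a * xs[-1] + b))
    return xs

def fitness(params, data):
    """Sum of squared errors between the simulated and observed series."""
    sim = simulate(params[0], params[1], data[0], len(data))
    return sum((s - d) ** 2 for s, d in zip(sim, data))

def evolve(data, generations=5000, seed=0):
    """(1+1) evolution strategy over the ODE parameters (a, b)."""
    rng = random.Random(seed)
    best = [rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)]
    best_f = fitness(best, data)
    sigma = 0.1
    for _ in range(generations):
        cand = [p + rng.gauss(0, sigma) for p in best]
        f = fitness(cand, data)
        if f < best_f:
            best, best_f = cand, f
        sigma *= 0.999  # slowly narrow the search
    return best, best_f

# Synthetic "index" series generated by dx/dt = 0.05*x + 0.2 (illustrative only).
target = simulate(0.05, 0.2, 100.0, 30)
(a, b), err = evolve(target)
```

A parallel asynchronous version, as in APHEMA, would run many such searches concurrently and exchange the best candidates between them.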
Workers’ exposure to excessive noise is a major universal work-related challenge. One of the main consequences of exposure to noise is permanent or transient hearing loss. The current study sought to utilize audiometric data to weigh and prioritize the factors affecting workers’ hearing loss using the Support Vector Machine (SVM) algorithm. This cross-sectional descriptive study was conducted in 2017 in a mining company in southeast Iran. The participating workers (n = 150) were divided into three groups of 50 based on the sound pressure level (SPL) to which they were exposed (two experimental groups and one control group). Audiometric tests were carried out for all members of each group. The study entailed the following steps: (1) selecting predictor variables to weigh and prioritize factors affecting hearing loss; (2) conducting audiometric tests, assessing permanent hearing loss in each ear, and then evaluating total hearing loss; (3) categorizing different types of hearing loss; (4) weighing and prioritizing factors that affect hearing loss using the SVM algorithm; and (5) assessing the error rate and accuracy of the models. The collected data were fed into SPSS 18, followed by linear regression and paired-samples t-tests. In the first model (SPL < 70 dBA), the frequency of 8 kHz had the greatest impact (with a weight of 33%), while noise had the smallest influence (with a weight of 5%); the accuracy of this model was 100%. In the second model (70 < SPL < 80 dBA), the frequency of 4 kHz had the most profound effect (with a weight of 21%), whereas the frequency of 250 Hz had the lowest impact (with a weight of 6%); the accuracy of this model was also 100%. In the third model (SPL > 85 dBA), the frequency of 4 kHz had the highest impact (with a weight of 22%), while the frequency of 250 Hz had the smallest influence (with a weight of 3%); the accuracy of this model was also 100%. In the fourth model, the frequency of 4 kHz had the greatest effect (with a weight of 24%), while the frequency of 500 Hz had the smallest effect (with a weight of 4%); the accuracy of this model was 94%. According to the modeling conducted with the SVM algorithm, the frequency of 4 kHz has the most profound effect on predicting changes in hearing loss. Given the high accuracy of the obtained models, this algorithm is an appropriate and powerful tool for predicting and modeling hearing loss.
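The abstract does not say how the SVM yields per-feature weights. One common approach, shown here as a hedged sketch rather than the study's actual method, is to train a linear SVM and rank features by the magnitude of the learned coefficients, normalized to percentages. The Pegasos-style trainer and the toy two-feature data below are illustrative assumptions:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=1):
    """Pegasos-style stochastic subgradient descent for a linear SVM; labels in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            if margin < 1:  # hinge loss active: step towards the example
                w = [(1 - eta * lam) * wj + eta * y[i] * xj
                     for wj, xj in zip(w, X[i])]
            else:           # only the regularizer contributes
                w = [(1 - eta * lam) * wj for wj in w]
    return w

def feature_weights_percent(w):
    """Express |w| per feature as percentages, mirroring the study's weight-style ranking."""
    total = sum(abs(v) for v in w) or 1.0
    return [round(100 * abs(v) / total) for v in w]

# Toy data: feature 0 carries the class signal, feature 1 is pure noise.
rng = random.Random(0)
X, y = [], []
for _ in range(200):
    label = rng.choice([-1, 1])
    X.append([label + rng.gauss(0, 0.3), rng.gauss(0, 1.0)])
    y.append(label)
w = train_linear_svm(X, y)
pct = feature_weights_percent(w)
```

On such data the informative feature receives by far the larger percentage, which is the kind of ranking the study reports for the audiometric frequencies.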
The Apriori algorithm is a classical method of association rule mining. Based on an analysis of this theory, the paper provides an improved Apriori algorithm that combines a hash table technique with reduction of candidate item sets to enhance the usage efficiency of resources as well as the individualized service of the data library.
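The abstract does not specify which hash table technique is combined with Apriori. A well-known variant of this idea is DHP-style hash-bucket pruning: while counting 1-itemsets, every pair in each transaction is hashed into a bucket, and a candidate pair is kept only if its bucket count reaches the support threshold. The sketch below illustrates that general technique, not necessarily the paper's exact algorithm:

```python
from itertools import combinations

def apriori_pairs_with_hash(transactions, min_support, n_buckets=7):
    """Frequent 1- and 2-itemsets with DHP-style hash-bucket pruning of candidate pairs."""
    # Pass 1: count single items and hash every pair of each transaction into a bucket.
    item_counts = {}
    buckets = [0] * n_buckets
    for t in transactions:
        for item in t:
            item_counts[item] = item_counts.get(item, 0) + 1
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % n_buckets] += 1
    frequent_items = {i for i, c in item_counts.items() if c >= min_support}
    # Candidate pairs: both items frequent AND the pair's bucket reaches min_support.
    candidates = [
        pair for pair in combinations(sorted(frequent_items), 2)
        if buckets[hash(pair) % n_buckets] >= min_support
    ]
    # Pass 2: exact counts for the surviving candidates only.
    pair_counts = {p: 0 for p in candidates}
    for t in transactions:
        s = set(t)
        for p in candidates:
            if p[0] in s and p[1] in s:
                pair_counts[p] += 1
    return frequent_items, {p for p, c in pair_counts.items() if c >= min_support}

tx = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
items, pairs = apriori_pairs_with_hash(tx, min_support=3)
```

Because a bucket's count is an upper bound on the support of every pair hashed into it, the pruning never discards a truly frequent pair; it only reduces the candidate set counted in the second pass.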
Detecting naturally arising structures in data is central to knowledge extraction from data. In most applications, the main challenge is in the choice of the appropriate model for exploring the data features. The choice is generally poorly understood and any tentative choice may be too restrictive. Growing volumes of data, disparate data sources and modelling techniques entail the need for model optimization via adaptability rather than comparability. We propose a novel two-stage algorithm for modelling continuous data, consisting of an unsupervised stage whereby the algorithm searches through the data for optimal parameter values and a supervised stage that adapts the parameters for predictive modelling. The method is implemented on the sunspots data, with inherently Gaussian distributional properties and assumed bi-modality. Optimal values separating high from low cycles are obtained via multiple simulations. Early patterns for each recorded cycle reveal that the first 3 years provide a sufficient basis for predicting the peak. Multiple Support Vector Machine runs using repeatedly improved data parameters show that the approach yields greater accuracy and reliability than conventional approaches and provides a good basis for model selection. Model reliability is established via multiple simulations of this type.
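A minimal sketch of such a two-stage pipeline is shown below, under stated assumptions: the unsupervised stage is a simple threshold search that minimises within-group scatter over bimodal "peak" values, and the supervised stage is a nearest-centroid rule on a hypothetical early-cycle feature. Both the synthetic data and the specific estimators are illustrative stand-ins, not the paper's algorithm:

```python
import random

def best_threshold(values):
    """Unsupervised stage: choose the cut minimising total within-group scatter."""
    def scatter(g):
        if len(g) < 2:
            return 0.0
        m = sum(g) / len(g)
        return sum((v - m) ** 2 for v in g)
    vs = sorted(values)
    best_t, best_score = vs[0], float("inf")
    for i in range(1, len(vs)):
        t = (vs[i - 1] + vs[i]) / 2
        score = scatter(vs[:i]) + scatter(vs[i:])
        if score < best_score:
            best_t, best_score = t, score
    return best_t

# Synthetic bimodal "peak" values standing in for high/low sunspot cycles.
rng = random.Random(2)
peaks = [rng.gauss(50, 5) for _ in range(40)] + [rng.gauss(150, 10) for _ in range(40)]
t = best_threshold(peaks)

# Supervised stage: label cycles by the learned threshold, then fit a
# nearest-centroid rule on a hypothetical "early-cycle" feature.
early = [p / 3 + rng.gauss(0, 3) for p in peaks]
labels = [p > t for p in peaks]
c_lo = sum(e for e, l in zip(early, labels) if not l) / labels.count(False)
c_hi = sum(e for e, l in zip(early, labels) if l) / labels.count(True)

def predict_high(e):
    """Predict a high cycle if the early feature is closer to the high centroid."""
    return abs(e - c_hi) < abs(e - c_lo)
```

The paper's version replaces the second stage with repeated SVM runs whose parameters are refined from the unsupervised output; the division of labour between the stages is the same.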
Direct soil temperature (ST) measurement is time-consuming and costly; thus, the use of simple and cost-effective machine learning (ML) tools is helpful. In this study, ML approaches, including KStar, instance-based K-nearest learning (IBK), and locally weighted learning (LWL), coupled with the resampling algorithms bagging (BA) and dagging (DA) (BA-IBK, BA-KStar, BA-LWL, DA-IBK, DA-KStar, and DA-LWL), were developed and tested for multi-step-ahead (3, 6, and 9 d ahead) ST forecasting. In addition, a linear regression (LR) model was used as a benchmark to evaluate the results. A dataset was established, with daily ST time series at 5 and 50 cm soil depths in a farmland as the models' output and meteorological data as the models' input, including mean (T_(mean)), minimum (T_(min)), and maximum (T_(max)) air temperatures, evaporation (Eva), sunshine hours (SSH), and solar radiation (SR), collected at Isfahan Synoptic Station (Iran) for 13 years (1992–2005). Six different input combination scenarios were selected based on Pearson's correlation coefficients between inputs and outputs and fed into the models. We used 70% of the data to train the models, with the remaining 30% used for model evaluation via multiple visual and quantitative metrics. Our findings showed that T_(mean) was the most effective input variable for ST forecasting in most of the developed models, while in some cases the combinations T_(mean) and T_(max), and T_(mean), T_(max), T_(min), Eva, and SSH proved to be the best input combinations. Among the evaluated models, BA-KStar showed greater compatibility, while in most cases BA-IBK and BA-LWL provided more accurate results, depending on soil depth. For the 5 cm soil depth, BA-KStar had superior performance (i.e., Nash–Sutcliffe efficiency (NSE) = 0.90, 0.87, and 0.85 for 3, 6, and 9 d ahead forecasting, respectively); for the 50 cm soil depth, DA-KStar outperformed the other models (i.e., NSE = 0.88, 0.89, and 0.89 for 3, 6, and 9 d ahead forecasting, respectively). The results confirmed that all hybrid models had higher prediction capabilities than the LR model.
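The combination of an instance-based learner with bagging (the BA-IBK pattern above) can be sketched compactly: build lagged input/output pairs from the series, train a k-nearest-neighbour regressor on each bootstrap resample, and average the predictions for a chosen horizon. The synthetic seasonal series and all parameter choices below are illustrative assumptions, not the station data or the Weka implementations used in the study:

```python
import math
import random

def knn_predict(train_X, train_y, x, k=3):
    """Distance-based k-NN regression (an IBK-style instance-based learner)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(xi, x)), yi)
        for xi, yi in zip(train_X, train_y)
    )
    return sum(y for _, y in dists[:k]) / k

def bagged_knn_forecast(series, horizon, lags=3, n_bags=10, seed=3):
    """Bagging (BA) around k-NN: bootstrap the lagged pairs, average the predictions."""
    rng = random.Random(seed)
    n = len(series) - lags - horizon + 1
    X = [series[i:i + lags] for i in range(n)]
    y = [series[i + lags + horizon - 1] for i in range(n)]
    x_query = series[-lags:]
    preds = []
    for _ in range(n_bags):
        idx = [rng.randrange(n) for _ in range(n)]
        bx, by = [X[i] for i in idx], [y[i] for i in idx]
        preds.append(knn_predict(bx, by, x_query))
    return sum(preds) / len(preds)

# Synthetic seasonal "soil temperature" series (a stand-in for the station data).
series = [15 + 10 * math.sin(2 * math.pi * d / 365) for d in range(730)]
pred3 = bagged_knn_forecast(series, horizon=3)
```

Dagging (DA) differs only in the resampling step: disjoint folds of the training pairs replace the bootstrap samples.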
In the field of data mining and machine learning, clustering is a typical issue which has been widely studied by many researchers, and many effective algorithms have been proposed, including K-means, fuzzy c-means (FCM) and DBSCAN. However, traditional clustering methods are easily trapped in local optima. Thus, many evolutionary clustering methods have been investigated. Considering the effectiveness of brain storm optimization (BSO) in maintaining population diversity during optimization, in this paper we propose a new clustering model based on BSO that exploits its global search ability. In our experiments, we apply the novel binary model to solve the clustering problem, with BSO mainly used for the iterative search and the K-means parameters tuned to match it. Four datasets were used in our experiments. With the algorithm run repeatedly on each dataset, the experimental results show good convergence and diversity. In addition, comparison with other clustering models shows that the BSO clustering model also achieves high accuracy. Therefore, from many aspects, the simulation results show that the proposed model performs well.
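As a rough illustration of BSO applied to clustering, the sketch below evolves a population of candidate center sets ("ideas"): each new idea perturbs an existing one, drawn preferentially from the better half, or combines two ideas, which is the core BSO idea-generation step, and replaces its target only if it lowers the within-cluster sum of squares. This is a simplified stand-in under stated assumptions, not the paper's binary model:

```python
import random

def sse(centers, points):
    """Within-cluster sum of squared distances (the clustering objective)."""
    return sum(
        min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers)
        for p in points
    )

def bso_cluster(points, k=2, pop=20, iters=150, seed=4):
    """Simplified brain storm optimization over candidate center sets."""
    rng = random.Random(seed)
    lo = [min(p[d] for p in points) for d in (0, 1)]
    hi = [max(p[d] for p in points) for d in (0, 1)]
    def rand_solution():
        return [[rng.uniform(lo[d], hi[d]) for d in (0, 1)] for _ in range(k)]
    ideas = [rand_solution() for _ in range(pop)]
    for it in range(iters):
        ideas.sort(key=lambda s: sse(s, points))
        for i in range(pop):
            # Generate a new idea: perturb one idea (from the better half with
            # high probability) or blend two ideas -- the BSO generation step.
            if rng.random() < 0.7:
                base = ideas[rng.randrange(pop // 2)]
            else:
                a, b = rng.sample(range(pop), 2)
                base = [[(x + y) / 2 for x, y in zip(ca, cb)]
                        for ca, cb in zip(ideas[a], ideas[b])]
            step = 0.5 * (1 - it / iters)  # shrinking Gaussian noise
            cand = [[c[0] + rng.gauss(0, step), c[1] + rng.gauss(0, step)]
                    for c in base]
            if sse(cand, points) < sse(ideas[i], points):
                ideas[i] = cand
    return min(ideas, key=lambda s: sse(s, points))

# Two well-separated blobs.
rng = random.Random(0)
pts = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(30)] + \
      [(rng.gauss(5, 0.3), rng.gauss(5, 0.3)) for _ in range(30)]
centers = bso_cluster(pts)
```

Maintaining a whole population of center sets is what lets the method escape the local optima that trap a single K-means run.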
Funding (hearing-loss study): This study stemmed from a research project (code number: 96000838) sponsored by the Institute for Futures Studies in Health at Kerman University of Medical Sciences.
Funding (BSO clustering study): Supported by the Natural Science Foundation of Jiangsu Province (Grant No. BK20141005) and the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 14KJB520025).