Objective: According to the RFM model theory of customer relationship management, data mining technology was used to group chronic infectious disease patients and explore the effect of customer segmentation on the management of patients with different characteristics. Methods: 170,246 outpatient records were extracted from the hospital management information system (HIS) for January 2016 to July 2016; 43,448 records remained after data cleaning. The K-Means clustering algorithm was used to classify patients with chronic infectious diseases, and the C5.0 decision tree algorithm was then used to predict their treatment situation. Results: Male patients accounted for 58.7%, and patients living in Shanghai accounted for 85.6%. The average age of patients was 45.88 years; the high-incidence age range was 25 to 65 years. Patients were grouped into three categories: 1) Cluster 1—Important patients (4786 people, 11.72%; R = 2.89, F = 11.72, M = 84,302.95); 2) Cluster 2—Major patients (23,103 people, 53.2%; R = 5.22, F = 3.45, M = 9146.39); 3) Cluster 3—Potential patients (15,559 people, 35.8%; R = 19.77, F = 1.55, M = 1739.09). The C5.0 decision tree algorithm was used to predict the treatment situation of patients with chronic infectious diseases; the final treatment time (weeks) was an important predictor, and the accuracy rate was 99.94% as verified by the confusion matrix. Conclusion: Medical institutions should strengthen adherence education for patients with chronic infectious diseases, establish a chronic infectious disease and customer relationship management database, and take the initiative to help patients improve treatment adherence.
Chinese governments at all levels should speed up hospital informatization, establish a chronic infectious disease database, and strengthen the blocking of mother-to-child transmission, in order to effectively curb chronic infectious diseases and reduce the disease burden and mortality.
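As an illustration of the segmentation step in the abstract above, the (R, F, M) triples can be clustered with a plain K-Means (Lloyd's) pass. This is a minimal sketch, not the authors' HIS pipeline; the toy data, seed, and k = 3 are assumptions loosely shaped like the three reported segments.

```python
import random

def kmeans(points, k, iters=25, seed=0):
    # Lloyd's algorithm on tuples (e.g., one (R, F, M) triple per patient)
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # move each center to the mean of its cluster (keep it if the cluster is empty)
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# toy (R, F, M) rows loosely shaped like the three reported segments
rfm = [(3, 12, 84000), (2, 11, 85000),
       (5, 3, 9100), (6, 4, 9200),
       (20, 1, 1700), (19, 2, 1800)]
centers, clusters = kmeans(rfm, k=3)
```

In practice the three features would be standardized first, since the monetary axis otherwise dominates the squared distances.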
The term “customer churn” is used in the information and communication technology (ICT) industry to denote customers who are about to leave for a competitor or end their subscription. Predicting this behavior is very important in a real, competitive market, and it is essential to manage it. In this paper, three hybrid models are investigated to develop an accurate and efficient churn prediction model. The three models are based on two phases: the clustering phase and the prediction phase. In the first phase, customer data are filtered; the second phase predicts customer behavior. The first model uses the k-means algorithm for data filtering and a Multilayer Perceptron Artificial Neural Network (MLP-ANN) for prediction. The second model uses hierarchical clustering with MLP-ANN. The third uses self-organizing maps (SOM) with MLP-ANN. The three models were developed on real data; accuracy and churn rate values were then calculated and compared. The comparison shows that the three hybrid models outperformed single common models.
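The two-phase idea in the abstract above (cluster first, then predict per segment) can be sketched as follows. This is a minimal illustration under assumed data: a single usage feature stands in for the customer record, and a per-cluster majority-vote rule stands in for the MLP-ANN predictor.

```python
import random

def two_phase_churn(data, k=2, iters=25, seed=1):
    # phase 1: cluster customers (here on a single usage feature, 1-D k-means);
    # phase 2: label each cluster by its majority churn outcome (1 = churn).
    rng = random.Random(seed)
    xs = [x for x, _ in data]
    centers = rng.sample(xs, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x, label in data:
            i = min(range(k), key=lambda c: abs(x - centers[c]))
            groups[i].append((x, label))
        centers = [sum(x for x, _ in g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    majority = [1 if g and 2 * sum(lbl for _, lbl in g) >= len(g) else 0
                for g in groups]
    def predict(x):
        return majority[min(range(k), key=lambda c: abs(x - centers[c]))]
    return predict

# assumed toy data: low-usage customers churned (1), heavy users stayed (0)
predict = two_phase_churn([(1, 1), (2, 1), (3, 1), (10, 0), (11, 0), (12, 0)])
```

The filtering benefit the paper reports comes from training the second-phase predictor only on customers similar to each other, which the per-cluster rule above mimics in miniature.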
The study aims to recognize how efficiently Educational Data Mining (EDM) integrates with Artificial Intelligence (AI) to develop skills for predicting students’ performance. The study used a survey questionnaire and collected data from 300 undergraduate students of Al Neelain University. In the first step, initial population placements were created using Particle Swarm Optimization (PSO). Then, using an adaptive feature-space search, Educational Grey Wolf Optimization (EGWO) was employed to choose the optimal attribute combination. The second stage uses an SVM classifier to forecast classification accuracy. Different classifiers were utilized to evaluate the performance of students. According to the results, AI could forecast the final grades of students with an accuracy rate of 97% on the test dataset. Furthermore, the study showed that successful students could be selected by the Decision Tree model with an efficiency rate of 87.50% and could be categorized as having equal information gain ratio after the semester, while the random forest provided an accuracy of 28%. These findings indicate significantly higher accuracy when these models were applied to the dataset, compared with a linear regression model (12% accuracy). The study concluded that this methodology can help students and teachers upgrade academic performance, reduce the chances of failure, and take appropriate steps at the right time to raise the standard of education. The study also motivates academics to assess and explore EDM at other universities.
At present, there are resistible illegal operations aimed at creating false public opinion about emergent events on the internet, which seriously disrupt the normal order of the Internet. However, traditional research on internet public opinion pre-warning relies mainly on manual analysis, which is too inefficient for analyzing massive amounts of internet public opinion information. Accordingly, this paper puts forward an internet public opinion pre-warning mechanism for emergent events based on a multi-relational data clustering algorithm, discusses the specifics of pre-warning in terms of the state and dissemination of internet public opinion and historical data, and automatically classifies internet public opinion through the multi-relational data clustering algorithm. The results show that this method can effectively support internet public opinion pre-warning for emergent events, with an accuracy rate as high as 95%.
Trauma is the most common cause of death among young people, and many of these deaths are preventable [1]. Predicting trauma patient outcomes has long been a difficult problem to investigate. In this study, prediction models are built and their ability to accurately predict mortality is assessed. The analysis includes a comparison of data mining techniques using classification, clustering, and association algorithms. Data were collected by the Hellenic Trauma and Emergency Surgery Society from 30 Greek hospitals. The dataset contains records of 8544 patients suffering from severe injuries, collected from 2005 to 2006. Factors include patients' demographic elements and several other variables registered from the time and place of the accident through hospital treatment and final outcome. The obtained results are compared in terms of sensitivity, specificity, positive predictive value, and negative predictive value, and a ROC curve depicts the methods' performance.
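The four comparison measures named in the abstract above follow directly from the binary confusion matrix. A minimal sketch, with the counts (tp, fp, fn, tn) assumed purely for illustration:

```python
def binary_metrics(tp, fp, fn, tn):
    # standard confusion-matrix measures used to compare mortality predictors
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# assumed counts: 90 deaths correctly flagged, 10 missed,
# 80 survivors correctly cleared, 20 falsely flagged
m = binary_metrics(tp=90, fp=20, fn=10, tn=80)
```

Sweeping the classifier's decision threshold and plotting sensitivity against (1 − specificity) yields the ROC curve the abstract mentions.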
In the previous publication, in Volume 15 No 9 (September 30, 2022) of IJCN, we analyzed “Data Mining as a Technique for Healthcare Approach”. In this edition, emphasis is placed on the “Development of a Data Mining Model to Detect Cardiovascular Diseases (CVD)”. Software was developed using the internationally accepted software engineering methodology SSADM, with coding in OOP and packaging by prototyping methodologies. Among other topics, this paper discusses: cardiovascular diseases; the data mining algorithm; analysis and information flow of the present system; data flow and high-level flow of the proposed system; modulating; system design and development; hardware and software specifications; and system testing, evaluation, and documentation.
Traditional distribution network planning relies on the professional knowledge of planners, especially when analyzing the correlations between problems existing in the network and the crucial influencing factors. The inherent laws reflected by the historical data of the distribution network are ignored, which affects the objectivity of the planning scheme. In this study, to improve the efficiency and accuracy of distribution network planning, the characteristics of distribution network data were extracted using a data-mining technique, and correlation knowledge of existing problems in the network was obtained. A data-mining model based on correlation rules was established. The inputs of the model were the electrical characteristic indices screened using the grey correlation method. The Apriori algorithm was used to extract correlation knowledge from the operational data of the distribution network and obtain strong correlation rules. Lift (degree of promotion) and chi-square tests were used to verify the rationality of the strong correlation rules output by the model. In this study, the correlation between heavy-load or overload problems of distribution network feeders in different regions and the related characteristic indices was determined, and the confidence of the correlation rules was obtained. These results can provide an effective basis for formulating a distribution network planning scheme.
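The Apriori stage described above can be sketched as a level-wise frequent-itemset search. The transactions below are assumed toy data, not distribution-network indices; support is re-checked for every candidate, so the loose union-based candidate generation used here remains correct.

```python
def apriori(transactions, min_support):
    # level-wise search: a (k+1)-itemset can only be frequent if its
    # k-subsets are; candidates are built as unions of frequent k-itemsets
    # and every candidate's support is verified against the transactions.
    n = len(transactions)
    support = lambda s: sum(1 for t in transactions if s <= t) / n
    frequent = {}
    current = [frozenset([i]) for i in sorted({i for t in transactions for i in t})]
    k = 1
    while current:
        current = [c for c in current if support(c) >= min_support]
        frequent.update((c, support(c)) for c in current)
        current = list({a | b for a in current for b in current if len(a | b) == k + 1})
        k += 1
    return frequent

tx = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b", "c"}]
freq = apriori(tx, min_support=0.5)
```

From the frequent itemsets, a rule X → Y gets confidence support(X ∪ Y)/support(X) and lift confidence/support(Y), the two checks the abstract applies to its strong rules.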
With the progress of computer technology, data mining has become a hot research area in the computer science community. In this paper, we undertake theoretical research on a novel data mining algorithm based on fuzzy clustering theory and deep neural networks. A focus of data mining is finding visualization methods for the mining process, so that the knowledge discovery process can be understood by users and human-computer interaction during knowledge discovery is facilitated. Inspired by the layered structure of the brain, neural network researchers have long pursued multilayer neural networks. The experimental results show that our algorithm is effective and robust.
Edit distance measures the similarity between two strings (as the minimum number of change, insert or delete operations that transform one string to the other). An edit sequence s is a sequence of such operations and can be used to represent the string resulting from applying s to a reference string. We present a modification to Ukkonen’s edit distance calculating algorithm based upon representing strings by edit sequences. We conclude with a demonstration of how using this representation can improve mitochondrial DNA query throughput performance in a distributed computing environment.
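For reference, the definition of edit distance used in the abstract above is computed by the classic O(mn) dynamic program below; Ukkonen's algorithm and the paper's edit-sequence representation improve on this baseline and are not reproduced here.

```python
def edit_distance(a, b):
    # dp[i][j] = minimum number of change/insert/delete operations
    # turning a[:i] into b[:j]
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete a[i-1]
                           dp[i][j - 1] + 1,         # insert b[j-1]
                           dp[i - 1][j - 1] + cost)  # change (or match)
    return dp[m][n]
```

Tracing back through the dp table recovers one optimal edit sequence, the object the paper uses to represent strings compactly against a reference.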
This paper integrates genetic algorithm and neural network techniques to build new temporal predictive analysis tools for geographic information systems (GIS). These new GIS tools can be readily applied in a practical and appropriate manner in spatial and temporal research to patch the gaps in GIS data mining and knowledge discovery functions. The specific achievement here is the integration of related artificial intelligence technologies into GIS software to establish a conceptual spatial and temporal analysis framework, and the use of this framework to develop an artificial intelligent spatial and temporal information analyst (ASIA) system, which is then fully utilized within the existing GIS package. A study of air pollutant forecasting provides a practical geographical case that demonstrates the soundness and validity of the conceptual temporal analysis framework.
Data mining plays a crucial role in extracting meaningful knowledge from large-scale data repositories, such as data warehouses and databases. Association rule mining, a fundamental process in data mining, involves discovering correlations, patterns, and causal structures within datasets. In the healthcare domain, association rules offer valuable opportunities for building knowledge bases, enabling intelligent diagnoses, and extracting invaluable information rapidly. This paper presents a novel approach called the Machine Learning based Association Rule Mining and Classification for Healthcare Data Management System (MLARMC-HDMS). The MLARMC-HDMS technique integrates classification and association rule mining (ARM) processes. Initially, the chimp optimization algorithm-based feature selection (COAFS) technique is employed within MLARMC-HDMS to select relevant attributes. Inspired by the foraging behavior of chimpanzees, the COA algorithm mimics their search strategy for food. Subsequently, the classification process utilizes stochastic gradient descent with a multilayer perceptron (SGD-MLP) model, while the Apriori algorithm determines attribute relationships. We propose a COA-based feature selection approach for medical data classification using machine learning techniques. This approach involves selecting pertinent features from medical datasets through COA and training machine learning models using the reduced feature set. We evaluate the performance of our approach on various medical datasets employing diverse machine learning classifiers. Experimental results demonstrate that our proposed approach surpasses alternative feature selection methods, achieving higher accuracy and precision rates in medical data classification tasks. The study showcases the effectiveness and efficiency of the COA-based feature selection approach in identifying relevant features, thereby enhancing the diagnosis and treatment of various diseases. To provide further validation, we conduct detailed experiments on a benchmark medical dataset, revealing the superiority of the MLARMC-HDMS model over other methods, with a maximum accuracy of 99.75%. Therefore, this research contributes to the advancement of feature selection techniques in medical data classification and highlights the potential for improving healthcare outcomes through accurate and efficient data analysis. The presented MLARMC-HDMS framework and COA-based feature selection approach offer valuable insights for researchers and practitioners working in the field of healthcare data mining and machine learning.
The development of technologies such as big data and blockchain has brought convenience to life, but at the same time privacy and security issues are becoming more and more prominent. The K-anonymity algorithm is an effective privacy-preserving algorithm of low computational complexity that can safeguard users' privacy by anonymizing big data. However, the algorithm currently suffers from focusing only on improving user privacy while ignoring data availability. In addition, ignoring the impact of quasi-identifier attributes on sensitive attributes reduces the usability of the processed data for statistical analysis. Based on this, we propose a new K-anonymity algorithm to solve the privacy security problem in the context of big data while guaranteeing improved data usability. Specifically, we construct a new information loss function based on information quantity theory. Considering that different quasi-identifier attributes have different impacts on sensitive attributes, we set weights for each quasi-identifier attribute when designing the information loss function. In addition, to reduce information loss, we improve K-anonymity in two ways. First, we make the loss of information smaller than in the original table while guaranteeing privacy, based on common artificial intelligence algorithms, i.e., the greedy algorithm and the 2-means clustering algorithm. Second, we improve the 2-means clustering algorithm by designing a mean-center method to select the initial center of mass. We then design the K-anonymity algorithm of this scheme based on the constructed information loss function, the improved 2-means clustering algorithm, and the greedy algorithm, which reduces the information loss. Finally, we experimentally demonstrate the effectiveness of the algorithm in improving the effect of 2-means clustering and reducing information loss.
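The K-anonymity property itself is easy to state and check: every combination of quasi-identifier values must be shared by at least k records. A minimal checker, with the toy rows and generalized values below assumed purely for illustration:

```python
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    # every combination of quasi-identifier values must occur in >= k rows
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return all(count >= k for count in groups.values())

# assumed toy table after generalizing age ranges and zip prefixes
rows = [
    {"age": "20-30", "zip": "200*", "disease": "flu"},
    {"age": "20-30", "zip": "200*", "disease": "cold"},
    {"age": "30-40", "zip": "201*", "disease": "flu"},
    {"age": "30-40", "zip": "201*", "disease": "asthma"},
]
```

The information-loss trade-off the paper optimizes arises because coarser generalizations (wider age ranges, shorter zip prefixes) make this check easier to satisfy but discard more statistical detail.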
Objective: To explore the clinical medication patterns of Professor Liu Taofeng in the treatment of eczema using data mining technology. Methods: Cases of eczema treated by Professor Liu Taofeng in the outpatient department of the First Affiliated Hospital of Anhui University of Traditional Chinese Medicine from June 2018 to October 2019 were collected and sorted. A database was established with Microsoft Excel 2016, SPSS Statistics 24.0, and SPSS Modeler 18.0, and frequency analysis, association rule analysis of high-frequency drugs, and cluster analysis were carried out. Results: Among the 255 prescriptions included in the study, 41 traditional Chinese medicines were involved. The top 10 drugs were fresh white skin, Cortex Scutellariae, Cortex Moutan, Tribulus terrestris, Sophora flavescens, Salvia miltiorrhiza, Atractylodes macrocephala, Poria cocos, liquorice, and Cynanchum paniculatum. Association analysis yielded 10 two-drug pairs, such as “Tribulus terrestris → fresh white skin” and “Sophora flavescens → Cortex Scutellariae”, and cluster analysis yielded groups such as “Poria cocos, Atractylodes macrocephala, lentils”. Conclusion: In treating eczema, Professor Liu Taofeng paid particular attention to the heart and liver, took clearing heat, cooling blood, and removing dampness as the main treatment, and also emphasized invigorating the spleen and stomach and removing blood stasis.
An algorithm, the Clustering Algorithm Based On Sparse Feature Vector (CABOSFV), was proposed for high-dimensional clustering of binary sparse data. This algorithm compresses the data effectively by using a "sparse feature vector" tool, thus reducing the data scale enormously, and can obtain the clustering result with only one data scan. Both theoretical analysis and empirical tests showed that CABOSFV has low computational complexity. The algorithm finds clusters in high-dimensional large datasets efficiently and handles noise effectively.
Many business applications rely on their historical data to predict their business future. The product marketing process is one of the core processes of a business. Customer needs give a useful piece of information that helps to market the appropriate products at the appropriate time. Moreover, services have recently come to be considered as products, and the development of education and health services depends on historical data. Furthermore, reducing problems and crime on online social media networks needs a significant source of information. Data analysts need an efficient classification algorithm to predict the future of such businesses. However, dealing with a huge quantity of data requires a great deal of processing time. Data mining involves many useful techniques that are used to predict statistical data in a variety of business applications. The classification technique is one of the most widely used, with a variety of algorithms. In this paper, various classification algorithms are reviewed in terms of accuracy in different areas of data mining applications. A comprehensive analysis is made after careful reading of 20 papers in the literature. This paper aims to help data analysts choose the most suitable classification algorithm for different business applications, including business in general, online social media networks, agriculture, health, and education. Results show that FFBPN is the most accurate algorithm in the business domain, the Random Forest algorithm is the most accurate in classifying online social network (OSN) activities, the Naïve Bayes algorithm is the most accurate for agriculture datasets, OneR is the most accurate within the health domain, and the C4.5 Decision Tree algorithm is the most accurate for classifying students' records to predict degree completion time.
A new method was worked out to improve the precision of springback prediction in sheet metal forming by combining the finite element method (FEM) with the data mining (DM) technique. First, the genetic algorithm (GA) was adopted to identify the material parameters. Then, according to the uniform design idea, a suitable calculation scheme was confirmed, and FEM was used to calculate the springback. The computational results were compared with experimental data, the difference between them was taken as the source data, and a new DM pattern recognition method, the hierarchical optimal map recognition method (HOMR), was applied to summarize the calculation regularities in the FEM results. Finally, a mathematical model of springback simulation was established. Based on this model, the calculation errors of springback can be kept within 10% of the experimental results.
The growth of geo-technologies and the development of methods for spatial data collection have resulted in large spatial data repositories that require techniques for spatial information extraction, in order to transform raw data into useful previously unknown information. However, due to the high complexity of spatial data mining, the need for spatial relationship comprehension and its characteristics, efforts have been directed towards improving algorithms in order to provide an increase of performance and quality of results. Likewise, several issues have been addressed to spatial data mining, including environmental management, which is the focus of this paper. The main original contribution of this work is the demonstration of spatial data mining using a novel algorithm with a multi-relational approach that was applied to a database related to water resources from a certain region of São Paulo State, Brazil, and the discussion about obtained results. Some characteristics involving the location of water resources and the profile of who is administering the water exploration were discovered and discussed.
An intrusion detection (ID) model is proposed based on the fuzzy data mining method. A major difficulty of anomaly ID is that patterns of normal behavior change with time. In addition, an actual intrusion with a small deviation may match normal patterns, so the intrusion behavior cannot be detected by the detection system. To solve the problem, the fuzzy data mining technique is utilized to extract patterns representing the normal behavior of a network. A set of fuzzy association rules mined from the network data are taken as a model of “normal behaviors”. To detect anomalous behaviors, fuzzy association rules are generated from new audit data and the similarity with the sets mined from “normal” data is computed. If the similarity values are lower than a threshold value, an alarm is given. Furthermore, genetic algorithms are used to adjust the fuzzy membership functions and to select an appropriate set of features.
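The fuzzy side of such a model rests on membership functions that map a crisp network measurement (e.g., connections per second) to a degree in sets such as LOW, MEDIUM, or HIGH. The paper tunes its functions with a genetic algorithm; the triangular shape below, with assumed breakpoints, is only a common illustrative choice.

```python
def triangular(x, a, b, c):
    # membership rises linearly from a to the peak at b, then falls to c;
    # outside [a, c] the degree is zero (assumes a < b < c)
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# e.g., degree to which a measured rate belongs to an assumed "MEDIUM" set
# defined by the breakpoints (0, 5, 10)
degree = triangular(2.5, 0, 5, 10)
```

Fuzzy association rules then count each transaction fractionally by these degrees instead of by crisp 0/1 membership, which is what lets a small deviation from normal still contribute partial evidence.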
Data clustering is a significant information retrieval technique in today's data-intensive society. Over the last few decades, a vast number of data clustering algorithms have been designed and implemented for almost all data types. The quality of the results of cluster analysis mainly depends on the clustering algorithm used. The architecture of a versatile, less user-dependent, dynamic, and scalable data clustering machine is presented. The machine selects, for each analysis, the best available data clustering algorithm on the basis of the credentials of the data and previously used domain knowledge. The domain knowledge is updated on completion of each session of data analysis.
文摘Objective: According to RFM model theory of customer relationship management, data mining technology was used to group the chronic infectious disease patients to explore the effect of customer segmentation on the management of patients with different characteristics. Methods: 170,246 outpatient data was extracted from the hospital management information system (HIS) during January 2016 to July 2016, 43,448 data was formed after the data cleaning. K-Means clustering algorithm was used to classify patients with chronic infectious diseases, and then C5.0 decision tree algorithm was used to predict the situation of patients with chronic infectious diseases. Results: Male patients accounted for 58.7%, patients living in Shanghai accounted for 85.6%. The average age of patients is 45.88 years old, the high incidence age is 25 to 65 years old. Patients was gathered into three categories: 1) Clusters 1—Important patients (4786 people, 11.72%, R = 2.89, F = 11.72, M = 84,302.95);2) Clustering 2—Major patients (23,103, 53.2%, R = 5.22, F = 3.45, M = 9146.39);3) Cluster 3—Potential patients (15,559 people, 35.8%, R = 19.77, F = 1.55, M = 1739.09). C5.0 decision tree algorithm was used to predict the treatment situation of patients with chronic infectious diseases, the final treatment time (weeks) is an important predictor, the accuracy rate is 99.94% verified by the confusion model. Conclusion: Medical institutions should strengthen the adherence education for patients with chronic infectious diseases, establish the chronic infectious diseases and customer relationship management database, take the initiative to help them improve treatment adherence. Chinese governments at all levels should speed up the construction of hospital information, establish the chronic infectious disease database, strengthen the blocking of mother-to-child transmission, to effectively curb chronic infectious diseases, reduce disease burden and mortality.
文摘The term “customer churn” is used in the industry of information and communication technology (ICT) to indicate those customers who are about to leave for a new competitor, or end their subscription. Predicting this behavior is very important for real life market and competition, and it is essential to manage it. In this paper, three hybrid models are investigated to develop an accurate and efficient churn prediction model. The three models are based on two phases;the clustering phase and the prediction phase. In the first phase, customer data is filtered. The second phase predicts the customer behavior. The first model investigates the k-means algorithm for data filtering, and Multilayer Perceptron Artificial Neural Networks (MLP-ANN) for prediction. The second model uses hierarchical clustering with MLP-ANN. The third one uses self organizing maps (SOM) with MLP-ANN. The three models are developed based on real data then the accuracy and churn rate values are calculated and compared. The comparison with the other models shows that the three hybrid models outperformed single common models.
基金supported via funding from Prince Sattam bin Abdulaziz University Project Number(PSAU/2024/R/1445).
文摘The study aims to recognize how efficiently Educational DataMining(EDM)integrates into Artificial Intelligence(AI)to develop skills for predicting students’performance.The study used a survey questionnaire and collected data from 300 undergraduate students of Al Neelain University.The first step’s initial population placements were created using Particle Swarm Optimization(PSO).Then,using adaptive feature space search,Educational Grey Wolf Optimization(EGWO)was employed to choose the optimal attribute combination.The second stage uses the SVMclassifier to forecast classification accuracy.Different classifiers were utilized to evaluate the performance of students.According to the results,it was revealed that AI could forecast the final grades of students with an accuracy rate of 97%on the test dataset.Furthermore,the present study showed that successful students could be selected by the Decision Tree model with an efficiency rate of 87.50%and could be categorized as having equal information ratio gain after the semester.While the random forest provided an accuracy of 28%.These findings indicate the higher accuracy rate in the results when these models were implemented on the data set which provides significantly accurate results as compared to a linear regression model with accuracy(12%).The study concluded that the methodology used in this study can prove to be helpful for students and teachers in upgrading academic performance,reducing chances of failure,and taking appropriate steps at the right time to raise the standards of education.The study also motivates academics to assess and discover EDM at several other universities.
Abstract: At present, there are illegal operations aimed at creating false public opinion on the internet around emergent events, which seriously disrupt the normal internet order. However, traditional research on internet public-opinion pre-warning relies mainly on manual analysis, which is too inefficient for analyzing massive volumes of internet public-opinion information. Based on the above analysis, this paper puts forward an internet public-opinion pre-warning mechanism for emergent events based on a multi-relational data clustering algorithm, discusses the specific pre-warning from the aspects of the state and dissemination of internet public opinion and the historical data, and automatically classifies internet public opinion through the multi-relational data clustering algorithm. The results show that this method can effectively support internet public-opinion pre-warning on emergent events, with an accuracy rate as high as 95%.
Abstract: Trauma is the most common cause of death among young people, and many of these deaths are preventable [1]. Predicting trauma patient outcomes has long been a difficult problem to investigate. In this study, prediction models are built and their ability to accurately predict mortality is assessed. The analysis includes a comparison of data mining techniques using classification, clustering and association algorithms. Data were collected by the Hellenic Trauma and Emergency Surgery Society from 30 Greek hospitals. The dataset contains records of 8544 patients suffering from severe injuries, collected during 2005 and 2006. Factors include patients' demographic elements and several other variables registered from the time and place of the accident until hospital treatment and the final outcome. The results are compared in terms of sensitivity, specificity, positive predictive value and negative predictive value, and an ROC curve depicts the performance of these methods.
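The four evaluation measures named above all derive from the binary confusion matrix. A minimal sketch (the outcome vectors below are invented, not the study's data):

```python
def confusion_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV and NPV from binary outcomes (1 = died)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

metrics = confusion_metrics([1, 1, 1, 0, 0, 0, 0, 1],
                            [1, 1, 0, 0, 0, 1, 0, 1])
```

Sweeping a decision threshold over a model's predicted probabilities and plotting sensitivity against (1 − specificity) yields the ROC curve used for the comparison.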
Abstract: In the previous publication, in Volume 15 No 9 (September 30, 2022) of IJCN, we analyzed “Data Mining as a Technique for Healthcare Approach”. In this edition, emphasis is placed on the “Development of a Data Mining Model to Detect Cardiovascular Diseases (CVD)”. Software was developed using the internationally accepted software engineering methodology SSADM, with coding by OOP and packaging by prototyping methodologies. Among other topics, this paper discusses: cardiovascular diseases; the data mining algorithm; analysis and information flow of the present system; data flow and high-level flow of the proposed system; modulating; system design and development; hardware and software specifications; and system testing, evaluation and documentation.
Funding: Supported by the Science and Technology Project of China Southern Power Grid (GZHKJXM20210043-080041KK52210002).
Abstract: Traditional distribution network planning relies on the professional knowledge of planners, especially when analyzing the correlations between problems existing in the network and the crucial influencing factors. The inherent laws reflected by the historical data of the distribution network are ignored, which affects the objectivity of the planning scheme. In this study, to improve the efficiency and accuracy of distribution network planning, the characteristics of distribution network data were extracted using a data-mining technique, and correlation knowledge of existing problems in the network was obtained. A data-mining model based on association rules was established. The inputs of the model were the electrical characteristic indices screened using the grey correlation method. The Apriori algorithm was used to extract correlation knowledge from the operational data of the distribution network and obtain strong association rules. Lift (degree of promotion) and chi-square tests were used to verify the rationality of the strong association rules output by the model. In this study, the correlation between heavy-load or overload problems of distribution network feeders in different regions and the related characteristic indices was determined, and the confidence of the association rules was obtained. These results can provide an effective basis for formulating a distribution network planning scheme.
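The support, confidence and lift measures used to screen rules above can be sketched for the simplest case (pairwise rules). This is an illustrative pure-Python miner with invented feeder-state transactions, not the study's Apriori implementation:

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.3, min_conf=0.6):
    """Mine pairwise rules A -> B with support, confidence and lift."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    support = {frozenset([i]): sum(i in t for t in transactions) / n
               for i in items}
    for a, b in combinations(items, 2):
        support[frozenset([a, b])] = sum(
            a in t and b in t for t in transactions) / n
    rules = []
    for a, b in combinations(items, 2):
        pair = support[frozenset([a, b])]
        if pair < min_support:
            continue
        for ante, cons in ((a, b), (b, a)):
            conf = pair / support[frozenset([ante])]
            lift = conf / support[frozenset([cons])]
            if conf >= min_conf:
                rules.append((ante, cons, round(pair, 3),
                              round(conf, 3), round(lift, 3)))
    return rules

# Hypothetical feeder observations: co-occurrence of overload and high
# ambient temperature yields a rule with lift above 1.
tx = [{"overload", "high_temp"}, {"overload", "high_temp"},
      {"overload"}, {"normal"}]
rules = mine_rules(tx)
```

A lift above 1 indicates the antecedent genuinely raises the probability of the consequent, which is why lift complements the chi-square check when validating strong rules.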
Abstract: With the progress of computer technology, data mining has become a hot research area in the computer science community. In this paper, we undertake theoretical research on a novel data mining algorithm based on fuzzy clustering theory and deep neural networks. One focus of data mining is finding visualization methods for the mining process, so that the knowledge discovery process can be understood by users and human-computer interaction during knowledge discovery is facilitated. Inspired by the layered structure of the brain, neural network researchers have long pursued multilayer neural networks. The experimental results show that our algorithm is effective and robust.
Abstract: Edit distance measures the similarity between two strings (as the minimum number of substitution, insertion, or deletion operations that transform one string into the other). An edit sequence s is a sequence of such operations and can be used to represent the string resulting from applying s to a reference string. We present a modification to Ukkonen’s edit distance algorithm based on representing strings by edit sequences. We conclude with a demonstration of how using this representation can improve mitochondrial DNA query throughput in a distributed computing environment.
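Ukkonen's method accelerates the computation by restricting the dynamic program to a diagonal band. The classic quadratic baseline it improves on is the Wagner-Fischer recurrence, sketched here (this is the textbook algorithm, not the paper's edit-sequence modification):

```python
def edit_distance(a, b):
    """Wagner-Fischer dynamic program: minimum number of substitutions,
    insertions and deletions turning string a into string b."""
    prev = list(range(len(b) + 1))          # distances from "" prefix of a
    for i, ca in enumerate(a, start=1):
        cur = [i]                            # deleting i chars of a
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute/match
        prev = cur
    return prev[-1]
```

Keeping only two rows makes the memory cost linear in the shorter string, which matters when comparing long mitochondrial DNA sequences.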
Abstract: This paper integrates genetic algorithm and neural network techniques to build new temporal predictive analysis tools for geographic information systems (GIS). These new GIS tools can be readily applied in a practical and appropriate manner in spatial and temporal research to patch the gaps in GIS data mining and knowledge discovery functions. The specific achievement here is the integration of related artificial intelligence technologies into GIS software to establish a conceptual spatial and temporal analysis framework. This framework is then used to develop an artificial intelligent spatial and temporal information analyst (ASIA) system, which is fully utilized within an existing GIS package. A case study on air pollutant forecasting provides a practical geographical example that demonstrates the rationality and validity of the conceptual temporal analysis framework.
Funding: Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, for funding this research work through Project Number RI-44-0444.
Abstract: Data mining plays a crucial role in extracting meaningful knowledge from large-scale data repositories, such as data warehouses and databases. Association rule mining, a fundamental process in data mining, involves discovering correlations, patterns, and causal structures within datasets. In the healthcare domain, association rules offer valuable opportunities for building knowledge bases, enabling intelligent diagnoses, and extracting invaluable information rapidly. This paper presents a novel approach called the Machine Learning based Association Rule Mining and Classification for Healthcare Data Management System (MLARMC-HDMS). The MLARMC-HDMS technique integrates classification and association rule mining (ARM) processes. Initially, the chimp optimization algorithm-based feature selection (COAFS) technique is employed within MLARMC-HDMS to select relevant attributes. Inspired by the foraging behavior of chimpanzees, the COA algorithm mimics their search strategy for food. Subsequently, the classification process utilizes stochastic gradient descent with a multilayer perceptron (SGD-MLP) model, while the Apriori algorithm determines attribute relationships. We propose a COA-based feature selection approach for medical data classification using machine learning techniques. This approach involves selecting pertinent features from medical datasets through COA and training machine learning models on the reduced feature set. We evaluate the performance of our approach on various medical datasets employing diverse machine learning classifiers. Experimental results demonstrate that our proposed approach surpasses alternative feature selection methods, achieving higher accuracy and precision rates in medical data classification tasks. The study showcases the effectiveness and efficiency of the COA-based feature selection approach in identifying relevant features, thereby enhancing the diagnosis and treatment of various diseases. To provide further validation, we conduct detailed experiments on a benchmark medical dataset, revealing the superiority of the MLARMC-HDMS model over other methods, with a maximum accuracy of 99.75%. Therefore, this research contributes to the advancement of feature selection techniques in medical data classification and highlights the potential for improving healthcare outcomes through accurate and efficient data analysis. The presented MLARMC-HDMS framework and COA-based feature selection approach offer valuable insights for researchers and practitioners working in the field of healthcare data mining and machine learning.
Funding: National Natural Science Foundation of China (62202118); Scientific and Technological Research Projects from Guizhou Education Department ([2023]003); Guizhou Provincial Department of Science and Technology Hundred Levels of Innovative Talents Project (GCC[2023]018); Top Technology Talent Project from Guizhou Education Department ([2022]073).
Abstract: The development of technologies such as big data and blockchain has brought convenience to daily life, but at the same time privacy and security issues are becoming more and more prominent. The K-anonymity algorithm is an effective, low-complexity privacy-preserving algorithm that can safeguard users' privacy by anonymizing big data. However, the algorithm currently focuses only on improving user privacy while ignoring data availability. In addition, ignoring the impact of quasi-identifier attributes on sensitive attributes reduces the usability of the processed data for statistical analysis. Based on this, we propose a new K-anonymity algorithm to solve the privacy security problem in the context of big data while guaranteeing improved data usability. Specifically, we construct a new information loss function based on information quantity theory. Considering that different quasi-identifier attributes have different impacts on sensitive attributes, we assign a weight to each quasi-identifier attribute when designing the information loss function. In addition, to reduce information loss, we improve K-anonymity in two ways. First, we make the information loss smaller than in the original table while guaranteeing privacy, based on common artificial intelligence algorithms, i.e., a greedy algorithm and a 2-means clustering algorithm. Second, we improve the 2-means clustering algorithm by designing a mean-center method to select the initial centers of mass. We then design the K-anonymity algorithm of this scheme based on the constructed information loss function, the improved 2-means clustering algorithm, and the greedy algorithm, which reduces the information loss. Finally, we experimentally demonstrate the effectiveness of the algorithm in improving the effect of 2-means clustering and reducing information loss.
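The K-anonymity property itself is easy to state: every combination of quasi-identifier values must occur at least k times, so no record is distinguishable within its group. A minimal checker (illustrative only; the attribute names and generalized values below are invented, and the paper's contribution lies in *how* to generalize with low information loss, which this sketch does not address):

```python
from collections import Counter

def is_k_anonymous(table, quasi_ids, k):
    """True if every quasi-identifier value combination occurs >= k times."""
    groups = Counter(tuple(row[q] for q in quasi_ids) for row in table)
    return all(count >= k for count in groups.values())

# Hypothetical records with already-generalized quasi-identifiers.
table = [
    {"age": "20-30", "zip": "641**", "disease": "flu"},
    {"age": "20-30", "zip": "641**", "disease": "cold"},
    {"age": "30-40", "zip": "641**", "disease": "flu"},
]
```

An anonymizer would keep coarsening the quasi-identifier values (wider age bands, shorter zip prefixes) until this check passes, and the weighted information loss function proposed in the paper is what steers that coarsening.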
Funding: National Key Research and Development Plan “Research on Modernization of Traditional Chinese Medicine” (No. 2018yfc1705302).
Abstract: Objective: To explore the clinical medication patterns of Professor Liu Taofeng in the treatment of eczema using data mining technology. Methods: Cases of eczema treated by Professor Liu Taofeng in the outpatient department of the First Affiliated Hospital of Anhui University of Traditional Chinese Medicine from June 2018 to October 2019 were collected and sorted. A database was established with Microsoft Excel 2016, SPSS Statistics 24.0 and SPSS Modeler 18.0, and frequency analysis, high-frequency drug association rule analysis and cluster analysis were carried out. Results: Among the 255 prescriptions included in the study, 41 traditional Chinese medicines were involved. The top 10 drugs were Cortex Dictamni, Cortex Scutellariae, Cortex Moutan, Tribulus terrestris, Sophora flavescens, Salvia miltiorrhiza, Atractylodes macrocephala, Poria cocos, liquorice, and Cynanchum paniculatum. Ten 2-drug pairs were identified by association analysis, such as “Tribulus terrestris → Cortex Dictamni” and “Sophora flavescens → Cortex Scutellariae”, and the group “Poria cocos, Atractylodes macrocephala, lentils” was obtained by cluster analysis. Conclusion: In treating eczema, Professor Liu Taofeng paid particular attention to the heart and liver, took clearing heat, cooling the blood and removing dampness as the main treatment, and also attended to invigorating the spleen and stomach and removing blood stasis.
Abstract: An algorithm, the Clustering Algorithm Based On Sparse Feature Vector (CABOSFV), was proposed for high-dimensional clustering of binary sparse data. This algorithm compresses the data effectively using a “Sparse Feature Vector” tool, thus reducing the data scale enormously, and can obtain the clustering result with only one data scan. Both theoretical analysis and empirical tests showed that CABOSFV has low computational complexity. The algorithm finds clusters in high-dimensional large datasets efficiently and handles noise effectively.
Abstract: Many business applications rely on their historical data to predict their business future. The product marketing process is one of the core processes of a business, and customer needs provide useful information that helps to market the appropriate products at the appropriate time. Moreover, services have recently come to be regarded as products, and the development of education and health services depends on historical data. Furthermore, reducing problems and crimes on online social media networks requires a significant source of information. Data analysts need an efficient classification algorithm to predict the future of such businesses; however, dealing with a huge quantity of data requires great processing time. Data mining involves many useful techniques that are used to make predictions in a variety of business applications, and classification is one of the most widely used, with a variety of algorithms. In this paper, various classification algorithms are reviewed in terms of accuracy in different areas of data mining application. A comprehensive analysis is made after a dedicated reading of 20 papers in the literature. This paper aims to help data analysts choose the most suitable classification algorithm for different business applications, including business in general, online social media networks, agriculture, health, and education. Results show that FFBPN is the most accurate algorithm in the business domain; the Random Forest algorithm is the most accurate in classifying online social network (OSN) activities; the Naïve Bayes algorithm is the most accurate for agriculture datasets; OneR is the most accurate for the health domain; and the C4.5 Decision Tree algorithm is the most accurate for classifying students' records to predict degree completion time.
Abstract: A new method was worked out to improve the precision of springback prediction in sheet metal forming by combining the finite element method (FEM) with data mining (DM) techniques. First, a genetic algorithm (GA) was adopted to identify the material parameters. Then, following the uniform design idea, a suitable calculation scheme was confirmed and FEM was used to calculate the springback. The computational results were compared with experimental data; the differences between them were taken as source data, and a new DM pattern recognition method, the hierarchical optimal map recognition method (HOMR), was applied to summarize the calculation regularities in the FEM results. Finally, a mathematical model of springback simulation was established. Based on this model, the calculation errors of springback can be kept within 10% of the experimental results.
Abstract: The growth of geo-technologies and the development of methods for spatial data collection have resulted in large spatial data repositories that require techniques for spatial information extraction, in order to transform raw data into useful, previously unknown information. However, due to the high complexity of spatial data mining and the need to comprehend spatial relationships and their characteristics, efforts have been directed towards improving algorithms to increase performance and the quality of results. Likewise, several application areas have been addressed by spatial data mining, including environmental management, which is the focus of this paper. The main original contribution of this work is the demonstration of spatial data mining using a novel algorithm with a multi-relational approach, applied to a water-resource database from a region of São Paulo State, Brazil, together with a discussion of the obtained results. Some characteristics involving the location of water resources and the profile of those administering water exploration were discovered and discussed.
Abstract: An intrusion detection (ID) model is proposed based on a fuzzy data mining method. A major difficulty of anomaly ID is that patterns of normal behavior change with time. In addition, an actual intrusion with a small deviation may match normal patterns, so the intrusion cannot be detected by the detection system. To solve this problem, a fuzzy data mining technique is utilized to extract patterns representing the normal behavior of a network. A set of fuzzy association rules mined from the network data serves as a model of “normal behaviors”. To detect anomalous behaviors, fuzzy association rules are generated from new audit data and their similarity with the sets mined from “normal” data is computed. If the similarity values are lower than a threshold value, an alarm is raised. Furthermore, genetic algorithms are used to adjust the fuzzy membership functions and to select an appropriate set of features.
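Fuzzy association rules rest on membership functions that map crisp audit measurements (e.g. connections per second) into degrees of linguistic sets like “low” or “high”. A minimal triangular fuzzifier is sketched below; the set names and breakpoints are invented for illustration, not the paper's GA-tuned functions:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge

def fuzzify(value, sets):
    """Map a crisp feature value to membership degrees in named fuzzy sets."""
    return {name: triangular(value, *abc) for name, abc in sets.items()}

# Hypothetical fuzzy sets over a "connections per second" feature.
conn_sets = {"low": (0, 20, 40), "medium": (30, 50, 70), "high": (60, 80, 100)}
degrees = fuzzify(35, conn_sets)   # partial membership in "low" and "medium"
```

Because a value belongs partially to adjacent sets, a small deviation from normal traffic shifts membership degrees gradually, which is what lets the similarity comparison flag borderline intrusions a crisp threshold would miss.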
Abstract: Data clustering is a significant information retrieval technique in today's data-intensive society. Over the last few decades, a huge number of data clustering algorithms have been designed and implemented for almost all data types. The quality of cluster analysis results depends mainly on the clustering algorithm used. The architecture of a versatile, less user-dependent, dynamic and scalable data clustering machine is presented. For each analysis, the machine selects the best available data clustering algorithm on the basis of the credentials of the data and previously acquired domain knowledge. The domain knowledge is updated on completion of each data analysis session.