The suppressed fuzzy c-means (S-FCM) clustering algorithm, which aims to combine the higher speed of the hard c-means clustering algorithm with the better classification performance of the fuzzy c-means clustering algorithm, has been studied by many researchers and applied in many fields. In this algorithm, selecting the suppressed rate is a key step. In this paper, we give a method to select a fixed suppressed rate from the structure of the data itself. The experimental results show that the proposed method is a suitable way to select the suppressed rate in the suppressed fuzzy c-means clustering algorithm.
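As a point of reference, the sketch below shows the suppression step that S-FCM applies after each standard FCM membership update, assuming the common formulation in which the winning membership is reinforced by 1 - alpha and all others are scaled by the suppressed rate alpha (alpha = 1 recovers plain FCM, alpha = 0 collapses to hard c-means); the paper's data-driven rule for fixing alpha is not reproduced here.

```python
import numpy as np

def suppress_memberships(U, alpha):
    """One S-FCM suppression step applied to an FCM membership matrix.

    U     : (n_samples, n_clusters) fuzzy memberships, rows summing to 1
    alpha : suppressed rate in [0, 1]; alpha = 1 keeps plain FCM,
            alpha = 0 collapses to hard c-means
    """
    U = np.asarray(U, dtype=float).copy()
    winners = U.argmax(axis=1)                    # largest membership per sample
    U *= alpha                                    # suppress every membership
    U[np.arange(len(U)), winners] += 1.0 - alpha  # reinforce the winner: 1 - alpha*(1 - u)
    return U

# Example: a two-cluster membership row [0.7, 0.3] with alpha = 0.5 becomes [0.85, 0.15].
print(suppress_memberships([[0.7, 0.3]], 0.5))
```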
Diabetic Retinopathy (DR) is a vision disease caused by the long-term prevalence of Diabetes Mellitus. It affects the retina of the eye and causes severe damage to the vision. If not treated in time, it may lead to permanent vision loss in diabetic patients. Current medical science has no medication to cure Diabetic Retinopathy. However, if diagnosed at an early stage it can be controlled and permanent vision loss can be avoided. Compared to the diabetic population, experts able to diagnose Diabetic Retinopathy are very few, particularly in local areas. Hence an automatic computer-aided diagnosis for DR detection is necessary. In this paper, we propose an unsupervised clustering technique to automatically assign DR to one of its five development stages. The deep learning based unsupervised clustering is made to improve itself with the help of fuzzy rough c-means clustering: the cluster centers are updated by the fuzzy rough c-means clustering algorithm during the forward pass, and the deep learning model representations are updated by Stochastic Gradient Descent during the backward pass of training. The proposed method was implemented in Python and the results were obtained on a DGX server with Tesla V100 GPU cards. Experimental results on the publicly available Kaggle dataset show an overall accuracy of 88.7%. The proposed model improves the accuracy of DR diagnosis compared to existing unsupervised algorithms such as k-means, FCM, auto-encoder, and FRCM with AlexNet.
The aim of this paper is to present a distributed algorithm for big data classification and its application to Magnetic Resonance Imaging (MRI) segmentation. We choose the well-known c-means classification method. The proposed method is introduced in order to implement a cognitive program on a parallel and distributed machine based on mobile agents. The main idea of the proposed algorithm is that the Mobile Classification Agents (Team Workers) execute the c-means classification procedure on different nodes, each on its own data, at the same time, and provide the results to their Mobile Host Agent (Team Leader), which computes the global results and orchestrates the classification until the convergence condition is achieved; the output segmented images are then provided by the Mobile Classification Agents. The data in our case are a big data MRI image of size (m × n), which is split into (m × n) elementary images, one per Mobile Classification Agent, to perform the classification procedure. The experimental results show that the use of the distributed architecture significantly improves the big data segmentation efficiency.
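A minimal sketch of how such a worker/host split can be organised for fuzzy c-means, assuming each worker returns the per-cluster partial sums the host needs to form the global centers; the agent-migration machinery of the paper is not modelled, and the function names are illustrative.

```python
import numpy as np

def worker_partial_sums(block, centers, m=2.0):
    """Mobile Classification Agent step (sketch): fuzzy memberships for the local
    pixels plus the per-cluster partial sums the host needs."""
    d = np.linalg.norm(block[:, None, :] - centers[None, :, :], axis=2) + 1e-10
    U = 1.0 / (d ** (2.0 / (m - 1.0)))
    U /= U.sum(axis=1, keepdims=True)
    Um = U ** m
    return Um.T @ block, Um.sum(axis=0)           # (c, dim) numerators, (c,) denominators

def host_update(partials):
    """Mobile Host Agent step (sketch): merge the workers' partial sums into the
    new global cluster centers."""
    num = sum(p[0] for p in partials)
    den = sum(p[1] for p in partials)
    return num / den[:, None]

# Example: split a flattened gray-level "image" into 4 blocks and do one global update.
rng = np.random.default_rng(0)
pixels = rng.random((1000, 1))                    # intensities as 1-D feature vectors
centers = np.array([[0.2], [0.8]])
partials = [worker_partial_sums(b, centers) for b in np.array_split(pixels, 4)]
print(host_update(partials))
```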
For neighborhood rough set attribute reduction algorithms based on dependency degree, a neighborhood computation method incorporating attribute weight values and a neighborhood rough set attribute reduction algorithm using discernment as the heuristic information are proposed. The reduction algorithm comprehensively considers the dependency degree and the neighborhood granulation degree of attributes, allowing for a more accurate measurement of the importance degree of each attribute. Example analyses and experimental results demonstrate the feasibility and effectiveness of the algorithm.
In the last decade, MRI (Magnetic Resonance Imaging) image segmentation has become one of the most active research fields in the medical imaging domain. Because of the fuzzy nature of MRI images, many researchers have adopted the fuzzy clustering approach to segment them. In this work, a fast and robust multi-agent system (MAS) for MRI segmentation of the brain is proposed. This system gets its robustness from a robust fuzzy c-means algorithm (RFCM) and its speed from the beneficial properties of agents, such as autonomy, social ability and reactivity. To show the efficiency of the proposed method, we test it on a normal brain taken from the BrainWeb Simulated Brain Database. The experimental results are valuable from both the robustness-to-noise and running-time standpoints.
Classifying data into meaningful groups is one of the fundamental ways of understanding and learning valuable information. High-quality clustering methods are necessary for the valuable and efficient analysis of ever-increasing data. The Firefly Algorithm (FA) is one of the bio-inspired algorithms, and it has recently been used to solve clustering problems. In this paper, a Hybrid F-Firefly algorithm is developed by combining Fuzzy C-Means (FCM) with FA to improve the clustering accuracy with a globally optimal solution. The Hybrid F-Firefly algorithm is built by incorporating an FCM operator at the end of each iteration of the FA algorithm. The proposed algorithm is designed to utilize the strengths of the existing algorithms and to enhance the original FA algorithm by overcoming the shortcomings of the FCM algorithm, such as trapping in local optima and sensitivity to initial seed points. In this research work, the Hybrid F-Firefly algorithm is implemented and experimentally tested for various performance measures on six different benchmark datasets. From the experimental results, it is observed that the Hybrid F-Firefly algorithm significantly improves the intra-cluster distance when compared with existing algorithms such as K-means, FCM and FA.
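A compact sketch of this hybridization under some assumptions: each firefly encodes a full set of k candidate centers, brightness is measured by total intra-cluster distance, and the standard FA attraction move is used; the FCM operator is then applied to every firefly at the end of each iteration, mirroring the scheme described above. Parameter names such as beta0, gamma and alpha are illustrative defaults, not values from the paper.

```python
import numpy as np

def fcm_step(X, centers, m=2.0):
    """One fuzzy c-means update: memberships from the current centers, then new centers."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
    U = 1.0 / (d ** (2.0 / (m - 1.0)))
    U /= U.sum(axis=1, keepdims=True)
    Um = U ** m
    return (Um.T @ X) / Um.sum(axis=0)[:, None]

def intra_cluster_distance(X, centers):
    """Sum of distances from each point to its nearest center (lower is better)."""
    return np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2).min(axis=1).sum()

def hybrid_f_firefly(X, k, n_fireflies=15, iters=50, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    swarm = X[rng.choice(len(X), size=(n_fireflies, k))]      # fireflies = candidate center sets
    for _ in range(iters):
        cost = np.array([intra_cluster_distance(X, s) for s in swarm])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:                          # move i toward the brighter j
                    beta = beta0 * np.exp(-gamma * np.sum((swarm[i] - swarm[j]) ** 2))
                    swarm[i] += beta * (swarm[j] - swarm[i]) + alpha * (rng.random(swarm[i].shape) - 0.5)
        swarm = np.array([fcm_step(X, s) for s in swarm])      # FCM operator ends each iteration
    cost = np.array([intra_cluster_distance(X, s) for s in swarm])
    return swarm[cost.argmin()]

# Example usage on random 2-D data with three clusters.
X = np.random.default_rng(1).random((300, 2))
print(hybrid_f_firefly(X, k=3))
```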
Knowledge reduction is an important issue when dealing with huge amounts of data, and it has been proved that computing the minimal reduct of a decision system is NP-complete. By introducing heuristic information into a genetic algorithm, we propose a heuristic genetic algorithm. In the genetic algorithm, we construct a new operator to maintain the classification ability. The experiments show that our algorithm is efficient and effective for finding a minimal reduct, even for the special example for which the simple heuristic algorithm cannot obtain the right result.
In rough communication, because each agent has a different language and cannot communicate precisely with the others, a concept translated among multiple agents will lose some information, and this results in a lesser or rougher concept. With different translation sequences, the amount of information lost varies. To find the translation sequence in which the jth agent taking part in rough communication gets the maximum information, a simulated annealing algorithm is used. Analysis and simulation of this algorithm demonstrate its effectiveness.
Rough set theory plays an important role in knowledge discovery but cannot deal with continuous attributes, so discretization is a problem that cannot be neglected. Discretization of decision systems in rough set theory has some particular characteristics: consistency must be satisfied, and the number of cuts for discretization is expected to be as small as possible. The consistent and minimal discretization problem is NP-complete. In this paper, an immune algorithm for the problem is proposed. Its correctness and effectiveness are shown in experiments. The discretization method presented in this paper can also be used as a data pre-processing step for symbolic knowledge discovery or machine learning methods other than rough set theory.
Feature selection (FS) is a process to select features which are more informative; it is one of the important steps in knowledge discovery. The problem is that not all features are important: some of the features may be redundant, and others may be irrelevant and noisy. Conventional supervised FS methods evaluate various feature subsets using an evaluation function or metric to select only those features which are related to the decision classes of the data under consideration. However, for many data mining applications, decision class labels are often unknown or incomplete, which indicates the significance of unsupervised feature selection, where decision class labels are not provided. In this paper, we propose a new unsupervised quick reduct (QR) algorithm using rough set theory. The quality of the reduced data is measured by the classification performance and is evaluated using the WEKA classifier tool. The method is compared with existing supervised methods and the results demonstrate the efficiency of the proposed algorithm.
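For orientation, here is a minimal sketch of the classical greedy quick-reduct loop based on the rough-set dependency degree; the unsupervised variant proposed in the paper replaces the single decision attribute with a mean dependency computed over all attributes, which is not shown here.

```python
from collections import defaultdict

def dependency(data, attrs, decision):
    """Rough-set dependency degree: the fraction of objects whose equivalence
    class under `attrs` is pure with respect to the decision attribute."""
    blocks = defaultdict(list)
    for row in data:
        blocks[tuple(row[a] for a in attrs)].append(row[decision])
    return sum(len(v) for v in blocks.values() if len(set(v)) == 1) / len(data)

def quick_reduct(data, cond_attrs, decision):
    """Greedy quick-reduct loop: keep adding the attribute that raises the
    dependency degree the most until it matches that of the full attribute set."""
    target = dependency(data, cond_attrs, decision)
    reduct, best = [], 0.0
    while best < target:
        gains = {a: dependency(data, reduct + [a], decision)
                 for a in cond_attrs if a not in reduct}
        a_star = max(gains, key=gains.get)
        if gains[a_star] <= best:          # no remaining attribute improves the dependency
            break
        reduct.append(a_star)
        best = gains[a_star]
    return reduct

# Toy decision table: attribute 'a' alone already determines the decision 'd'.
table = [{'a': 0, 'b': 1, 'd': 'no'}, {'a': 1, 'b': 1, 'd': 'yes'},
         {'a': 0, 'b': 0, 'd': 'no'}, {'a': 1, 'b': 0, 'd': 'yes'}]
print(quick_reduct(table, ['a', 'b'], 'd'))    # -> ['a']
```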
A new method based on rough set theory and a genetic algorithm is proposed to predict rock burst proneness. Nine influencing factors were first selected, and then the decision table was set up. Attributes were reduced by the genetic algorithm, and rough sets were used to extract simplified decision rules for rock burst proneness. Taking a practical engineering project as an example, the rock burst proneness was evaluated and predicted by the decision rules. Comparison of the prediction results with the actual results shows that the proposed method is feasible and effective.
Discretization based on rough set theory aims to seek the minimum possible number of cuts without weakening the indiscernibility of the original decision system. Optimization of discretization is an NP-complete problem, and the genetic algorithm is an appropriate method to solve it. In order to achieve optimal discretization, the choice of the initial cut set is first discussed, because a good initial cut set can enhance the efficiency and quality of the follow-up algorithm. Second, an effective heuristic genetic algorithm for discretization of continuous attributes of the decision table is proposed, which takes the significance of cut points as heuristic information and introduces a novel operator to maintain the indiscernibility of the original decision system and enhance the local search ability of the algorithm. As a result, the algorithm converges quickly and has global optimizing ability. Finally, the effectiveness of the algorithm is validated through experiments.
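A hedged sketch of the kind of fitness function such a GA can use: a chromosome selects a subset of the candidate cuts, and the evaluation rewards fewer cuts while heavily penalizing any loss of consistency (indiscernibility) in the discretized decision table. The encoding and the penalty weight below are illustrative assumptions, not the paper's exact design.

```python
from bisect import bisect_right
from collections import defaultdict

def ga_fitness(chromosome, data, candidate_cuts, decision):
    """Evaluate a cut-selection chromosome: prefer few cuts, but heavily penalize
    any loss of the original indiscernibility (consistency) of the decision table.

    chromosome     : dict attr -> list of 0/1 flags over candidate_cuts[attr]
    data           : list of dict records with continuous condition attributes
    candidate_cuts : dict attr -> sorted list of candidate cut points
    """
    selected = {a: [c for c, bit in zip(candidate_cuts[a], chromosome[a]) if bit]
                for a in candidate_cuts}
    blocks = defaultdict(set)
    for row in data:
        key = tuple(bisect_right(selected[a], row[a]) for a in candidate_cuts)  # interval indices
        blocks[key].add(row[decision])
    inconsistent = sum(1 for decisions in blocks.values() if len(decisions) > 1)
    n_cuts = sum(len(cuts) for cuts in selected.values())
    return n_cuts + 1000 * inconsistent          # the GA minimizes this score

# Example: keeping only the cut at 1.5 on attribute 'x' is already consistent.
data = [{'x': 1.0, 'd': 'a'}, {'x': 2.0, 'd': 'b'}, {'x': 3.0, 'd': 'b'}]
cuts = {'x': [1.5, 2.5]}
print(ga_fitness({'x': [1, 0]}, data, cuts, 'd'))   # -> 1
print(ga_fitness({'x': [0, 0]}, data, cuts, 'd'))   # -> 1000 (no cuts, inconsistent)
```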
This paper presents a hybrid soft computing modeling approach for a neurofuzzy system based on rough set theory and genetic algorithms (NFRSGA). The fundamental problem of a neurofuzzy system is that when the input dimension increases, the fuzzy rule base increases exponentially. This leads to a huge network structure, which results in slow convergence. To solve this problem, rough set theory is used to obtain the reduced rule set, which is used as the fuzzy rules of the fuzzy system. The number of rules decreases, and each rule does not need all the conditional attribute values. This results in a reduced, not fully connected, neural network. The structure of the neural network is relatively small and thus the number of weights to be trained decreases. The genetic algorithm is used to search for the optimal discretization of the continuous attributes. The NFRSGA approach has been applied in the practical task of building a soft sensor model for estimating the freezing point of light diesel fuel in a Fluid Catalytic Cracking Unit (FCCU), and satisfactory results are obtained.
Polycrystalline materials are extensively employed in industry, and their surface roughness significantly affects working performance. Material defects, particularly grain boundaries, have a great impact on the achievable surface roughness of polycrystalline materials. However, it is difficult to establish a purely theoretical model for surface roughness that accounts for the grain boundary effect using conventional analytical methods. In this work, a theoretical and deep learning hybrid model for predicting the surface roughness of diamond-turned polycrystalline materials is proposed. The kinematic–dynamic roughness component, related to the tool profile duplication effect, work material plastic side flow, relative vibration between the diamond tool and workpiece, etc., is theoretically calculated. The material-defect roughness component is modeled with a cascade forward neural network. In the neural network, the ratio of maximum undeformed chip thickness to cutting edge radius R_TS, the work material properties (misorientation angle θ_g and grain size d_g), and the spindle rotation speed n_s are configured as input variables, and the material-defect roughness component is set as the output variable. To validate the developed model, polycrystalline copper with a gradient distribution of grains prepared by friction stir processing is machined with various processing parameters and different diamond tools. Compared with the previously developed model, an obvious improvement in prediction accuracy is observed with this hybrid prediction model. Based on this model, the influences of different factors on the surface roughness of polycrystalline materials are discussed, and the influencing mechanism of the misorientation angle and grain size is quantitatively analyzed. Two fracture modes, transcrystalline and intercrystalline fracture, are observed at different R_TS values. Meanwhile, optimal processing parameters are obtained with a simulated annealing algorithm. Cutting experiments are performed with the optimal parameters, and a flat surface finish with Sa of 1.314 nm is finally achieved. The developed model and the corresponding new findings are beneficial for accurately predicting the surface roughness of polycrystalline materials and understanding the impact mechanism of material defects in diamond turning.
In rough communication, because each agent has a different language and cannot communicate precisely with the others, a concept translated among multiple agents will lose some information, and this results in a lesser or rougher concept. With different translation sequences, the amount of missed knowledge varies. The λ-optimal translation sequence of rough communication, which requires both every agent and the last agent taking part in rough communication to get as much information as possible, is given. In order to obtain the λ-optimal translation sequence, a genetic algorithm is used. Analysis and simulation of the algorithm demonstrate the effectiveness of the approach.
The premise and basis of load modeling are substation load composition inquiries and cluster analyses. However, the traditional kernel fuzzy C-means (KFCM) algorithm is limited by manual selection of the clustering number and by convergence to local optimal solutions. To overcome these limitations, an improved KFCM algorithm with adaptive selection of the optimal clustering number is proposed in this paper. This algorithm optimizes KFCM by combining the powerful global search ability of the genetic algorithm with the robust local search ability of the simulated annealing algorithm. The improved KFCM algorithm adaptively determines the ideal number of clusters using a clustering evaluation index ratio. Compared with the traditional KFCM algorithm, the enhanced KFCM algorithm has robust clustering and comprehensive abilities, enabling efficient convergence to the global optimal solution.
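As background for the kernel part of this method, the snippet below shows the kernel-induced squared distance that KFCM typically substitutes for the Euclidean distance, assuming a Gaussian kernel with a user-chosen width sigma; the GA/SA search and the cluster-number selection of the paper are not included.

```python
import numpy as np

def kernel_distance_sq(X, centers, sigma=1.0):
    """Kernel-induced squared distance commonly used in KFCM with a Gaussian kernel:
    d^2(x, v) = 2 * (1 - K(x, v)),  where  K(x, v) = exp(-||x - v||^2 / sigma^2)."""
    sq = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return 2.0 * (1.0 - np.exp(-sq / sigma ** 2))

# Example: distances of two load records to two candidate cluster centers.
X = np.array([[0.1, 0.9], [0.8, 0.2]])
centers = np.array([[0.0, 1.0], [1.0, 0.0]])
print(kernel_distance_sq(X, centers, sigma=0.5))
```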
In this paper we present a new optimization algorithm that operates in two phases. In the first phase, a multiobjective version of the genetic algorithm is used as the search engine in order to generate an approximate true Pareto front. This algorithm is based on the concept of co-evolution and a repair algorithm for handling nonlinear constraints. It also maintains a finite-sized archive of non-dominated solutions, which is iteratively updated in the presence of new solutions based on the concept of ε-dominance. Then, in the second phase, rough set theory is adopted as a local search engine in order to improve the spread of the solutions found so far. The results provided by the proposed algorithm for benchmark problems are promising when compared with existing well-known algorithms. Our results also suggest that the algorithm is well suited to solving real-world application problems.
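To illustrate the archive update driven by ε-dominance, here is a small sketch using one common additive form of ε-dominance for minimization objectives; the exact dominance definition, archive size control and constraint handling used in the paper may differ.

```python
def eps_dominates(fa, fb, eps=0.05):
    """Additive epsilon-dominance for minimization objectives: fa covers fb when
    every objective of fa, relaxed by eps, is no worse than the same objective of fb."""
    return all(a - eps <= b for a, b in zip(fa, fb))

def update_archive(archive, candidate, eps=0.05):
    """Insert an objective vector into a finite epsilon-non-dominated archive."""
    if any(eps_dominates(kept, candidate, eps) for kept in archive):
        return archive                                   # candidate is already covered
    return [kept for kept in archive if not eps_dominates(candidate, kept, eps)] + [candidate]

# Example: the second point is covered by the first within eps and is rejected.
arch = update_archive([], (1.0, 2.0))
arch = update_archive(arch, (1.02, 2.01))
print(arch)                                              # -> [(1.0, 2.0)]
```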
Aiming at the disadvantages of the BP model in artificial neural networks applied to intelligent fault diagnosis, a neural network fault diagnosis optimization method using rough sets and genetic algorithms is presented. The neural network nodes of the input layer are calculated and simplified through rough set theory; the neural network nodes of the middle layer are designed through genetic algorithm training; and the neural network weights and biases are finally obtained through the combination of genetic algorithms and the BP algorithm. The analysis in this paper illustrates that the optimization method can greatly improve the performance of the neural network fault diagnosis method.
This paper presents a method to calibrate the pipe roughness coefficient (i.e., the Manning n-factor) with a genetic algorithm (GA) under multiple loading conditions. Due to old pipe age, as well as the removal of valves and bends when skeletonizing the distribution network, most of the pipes in the hydraulic model of a practical water distribution system (WDS) are rough. The commonly used Hazen-Williams C-factor is therefore replaced by the Manning n-factor in calibrating the WDS hydraulic model. An adjustment to the GA is designed, and the program efficiency is improved. A case study shows that the adjustment can save 60% of the total runtime. About 90% of the relative differences between simulated and observed pressures at the monitoring locations are lower than 3%, which suggests that the proposed adjustment to the calibration is efficient and effective.
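A hedged sketch of the objective such a GA can minimize: the relative mismatch between simulated and observed pressures at the monitoring locations, accumulated over all loading conditions. The simulate_pressures callback is a hypothetical stand-in for the hydraulic solver, not an API from the paper.

```python
import numpy as np

def calibration_fitness(n_factors, observed, simulate_pressures):
    """Objective for GA calibration of Manning n-factors under multiple loading
    conditions: the summed relative mismatch between simulated and observed
    pressures at the monitoring locations (the GA minimizes this value).

    n_factors          : candidate vector of Manning n-factors, one per pipe group
    observed           : dict load_case -> array of observed monitoring pressures
    simulate_pressures : hypothetical hydraulic-solver callback (not from the paper)
    """
    error = 0.0
    for load_case, obs in observed.items():
        sim = simulate_pressures(n_factors, load_case)
        error += float(np.sum(((sim - obs) / obs) ** 2))
    return error

# Toy usage with a fake solver: pressure drops as roughness grows.
fake_solver = lambda n, case: 50.0 - 100.0 * np.asarray(n).mean() + case
observed = {0: np.array([48.6]), 1: np.array([49.6])}
print(calibration_fitness([0.014], observed, fake_solver))   # -> 0.0 (perfect fit)
```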
In this paper, we propose a Rough Set assisted Meta-Learning method for selecting the most-suited machine-learning algorithms with minimal effort for a new given dataset. A k-Nearest Neighbor (k-NN) algorithm is used to recognize the most similar datasets on which all of the candidate algorithms have already been run. By matching the most similar datasets found, the corresponding performance of the candidate algorithms is used to generate a recommendation for the user. The performance derives from a multi-criteria evaluation measure, ARR, which accounts for both accuracy and time. Furthermore, after applying Rough Set theory, we can find the redundant properties of the dataset; thus, we can speed up the ranking process and increase the accuracy by using the reduct of the meta attributes.
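A minimal sketch of the k-NN recommendation step, assuming past datasets are described by a numeric meta-attribute matrix and each candidate algorithm already has a multi-criteria score (such as ARR) on those datasets; the rough-set reduction of the meta attributes would simply shrink the columns of meta_features before this step.

```python
import numpy as np

def recommend_algorithms(new_meta, meta_features, scores, k=3):
    """k-NN meta-learning recommendation (sketch).

    new_meta      : 1-D array of meta-attributes of the new dataset
    meta_features : (n_datasets, n_meta) meta-attributes of previously seen datasets
    scores        : (n_datasets, n_algorithms) multi-criteria scores, e.g. ARR values
    Returns candidate-algorithm indices ranked by mean score on the k nearest datasets.
    """
    dists = np.linalg.norm(meta_features - new_meta, axis=1)
    nearest = np.argsort(dists)[:k]              # the k most similar past datasets
    avg = scores[nearest].mean(axis=0)           # aggregate each algorithm's performance
    return np.argsort(avg)[::-1]                 # best-scoring algorithm first

# Toy example with 4 past datasets, 2 meta-attributes and 3 candidate algorithms.
meta = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
arr = np.array([[0.7, 0.5, 0.6], [0.4, 0.9, 0.5], [0.8, 0.4, 0.6], [0.3, 0.8, 0.7]])
print(recommend_algorithms(np.array([0.15, 0.15]), meta, arr, k=2))   # -> [0 2 1]
```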