Particle swarm optimization (PSO) is a heuristic algorithm that has been applied successfully to many optimization problems. Attribute reduction is a key research topic in rough set theory, and computing the minimal reduction of a decision table has been proven to be a non-deterministic polynomial (NP)-hard problem. A new cooperative extended attribute reduction algorithm named Co-PSAR, based on an improved PSO, is proposed, in which a cooperative evolutionary strategy with suitable fitness functions is used to learn a good hypothesis and accelerate the search for a minimal attribute reduction. Experiments on benchmark functions and University of California, Irvine (UCI) data sets, compared with other algorithms, verify the superiority of the Co-PSAR algorithm in terms of convergence speed, efficiency and accuracy of attribute reduction.
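To give a flavor of the kind of search involved, the following is a minimal binary PSO for reduct search on an invented toy decision table. This is a generic sketch only, not the Co-PSAR algorithm: the cooperative strategy and fitness functions of the paper are not reproduced, and the table, fitness weighting and parameters are all assumptions made for illustration.

```python
import math
import random

# Invented toy decision table: condition attributes a0..a3, decision in last column.
TABLE = [
    (0, 1, 0, 1, 'yes'),
    (0, 1, 1, 1, 'yes'),
    (1, 0, 0, 1, 'no'),
    (1, 0, 1, 0, 'no'),
    (0, 0, 1, 0, 'yes'),
]
N_ATTRS = 4

def consistent(mask):
    """True if the attribute subset encoded by `mask` still determines the decision."""
    seen = {}
    for row in TABLE:
        key = tuple(v for v, keep in zip(row[:-1], mask) if keep)
        if seen.setdefault(key, row[-1]) != row[-1]:
            return False
    return True

def fitness(mask):
    """Reward consistency first, then smaller attribute subsets."""
    if not any(mask):
        return 0.0
    size_term = 1.0 - sum(mask) / N_ATTRS
    return 1.0 + size_term if consistent(mask) else 0.1 * size_term

def binary_pso(n_particles=10, iters=40, seed=0):
    rng = random.Random(seed)
    pos = [[rng.random() < 0.5 for _ in range(N_ATTRS)] for _ in range(n_particles)]
    vel = [[0.0] * N_ATTRS for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fitness)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(N_ATTRS):
                vel[i][d] += (2 * rng.random() * (pbest[i][d] - pos[i][d])
                              + 2 * rng.random() * (gbest[d] - pos[i][d]))
                # Sigmoid turns velocity into a bit probability (standard binary PSO).
                pos[i][d] = rng.random() < 1 / (1 + math.exp(-vel[i][d]))
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = max(pbest + [gbest], key=fitness)
    return gbest

reduct = binary_pso()
```

On this toy table every consistent subset must contain attribute a0, so the swarm converges toward the singleton reduct {a0}.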
Attribute reduction is a hot topic in rough set research. As an extension of rough sets, neighborhood rough sets can effectively address the information loss caused by data discretization. However, traditional greedy neighborhood rough set attribute reduction algorithms have high computational complexity and long processing times. In this paper, a novel attribute reduction algorithm based on attribute importance is proposed. Using conditional information, the attribute reduction problem in neighborhood rough sets is discussed, and the importance of attributes is measured by conditional information gain. The algorithm iteratively removes the attribute with the lowest importance, thus achieving attribute reduction. Six groups of UCI datasets are selected, and the proposed algorithm SAR is compared with the L2-ELM, LapTELM, CTSVM, and TBSVM classifiers. The results demonstrate that SAR effectively improves both the time consumption and the accuracy of attribute reduction.
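As a rough illustration of the greedy backward idea (not the paper's SAR algorithm itself; the decision table and the entropy bookkeeping here are invented for the sketch), eliminating the least important attribute while the conditional entropy of the decision is preserved can be written as:

```python
from collections import Counter, defaultdict
from math import log2

# Invented toy decision table: condition attributes a0..a3, decision last.
ROWS = [
    (0, 1, 0, 1, 'yes'),
    (0, 1, 1, 1, 'yes'),
    (1, 0, 0, 1, 'no'),
    (1, 0, 1, 0, 'no'),
    (0, 0, 1, 0, 'yes'),
]

def cond_entropy(attrs):
    """H(D | attrs): entropy of the decision within each block of the partition
    induced by the chosen attribute subset."""
    blocks = defaultdict(list)
    for row in ROWS:
        blocks[tuple(row[a] for a in attrs)].append(row[-1])
    n = len(ROWS)
    h = 0.0
    for block in blocks.values():
        for cnt in Counter(block).values():
            p = cnt / len(block)
            h -= (len(block) / n) * p * log2(p)
    return h

def backward_reduce(eps=1e-9):
    attrs = list(range(len(ROWS[0]) - 1))
    base = cond_entropy(attrs)
    changed = True
    while changed:
        changed = False
        # Importance of `a` is the entropy increase when `a` is removed; try to
        # drop the least important attribute that leaves H(D | attrs) unchanged.
        for a in sorted(attrs, key=lambda a: cond_entropy([x for x in attrs if x != a])):
            if len(attrs) > 1 and cond_entropy([x for x in attrs if x != a]) <= base + eps:
                attrs.remove(a)
                changed = True
                break
    return attrs
```

On this table the decision is fully determined by a0 alone, so the elimination loop peels off a1, a2 and a3 and returns [0].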
Logging attribute optimization is an important task in well-logging interpretation. A method of attribute reduction based on rough sets is presented. First, the core information of the sample is determined by a general reduction method. Then, the significance of each dispensable attribute in the reduction table is calculated. Finally, the minimum relative reduction set is obtained. Typical calculations and quantitative computation of reservoir parameters in oil logging show that this method of attribute reduction is effective and feasible in logging interpretation.
The Rough Sets Theory is used in data mining with emphasis on the treatment of uncertain or vague information. In the case of classification, this theory implicitly calculates reducts of the full set of attributes, eliminating those that are redundant or meaningless. Such reducts may even serve as input to classifiers other than Rough Sets. The typically high dimensionality of current databases precludes the use of greedy methods to find optimal or suboptimal reducts in the search space and requires the use of stochastic methods. In this context, the calculation of reducts is typically performed by a genetic algorithm, but other metaheuristics with better performance have been proposed. This work proposes the novel use of two known metaheuristics for this calculation, Variable Neighborhood Search and Variable Neighborhood Descent, along with a third heuristic called Decrescent Cardinality Search, a new heuristic proposed specifically for reduct calculation. On databases commonly found in the literature of the area, the reducts obtained present lower cardinality, i.e., a smaller number of attributes.
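For a flavor of how Variable Neighborhood Search can drive reduct calculation, here is a generic VNS sketch on an invented table: shaking flips k attribute bits, and a local descent greedily drops attributes while consistency holds. This is not the authors' implementation, and the Decrescent Cardinality Search heuristic is not reproduced.

```python
import random

# Invented toy decision table: condition attributes a0..a3, decision last.
ROWS = [
    (0, 1, 0, 1, 'yes'),
    (0, 1, 1, 1, 'yes'),
    (1, 0, 0, 1, 'no'),
    (1, 0, 1, 0, 'no'),
    (0, 0, 1, 0, 'yes'),
]
N_ATTRS = 4

def consistent(mask):
    seen = {}
    for row in ROWS:
        key = tuple(v for v, keep in zip(row[:-1], mask) if keep)
        if seen.setdefault(key, row[-1]) != row[-1]:
            return False
    return True

def local_descent(mask):
    """Greedily drop attributes while the subset still determines the decision."""
    improved = True
    while improved:
        improved = False
        for a in range(N_ATTRS):
            if mask[a]:
                trial = mask[:]
                trial[a] = False
                if any(trial) and consistent(trial):
                    mask, improved = trial, True
    return mask

def vns_reduct(k_max=3, iters=30, seed=2):
    rng = random.Random(seed)
    best = [True] * N_ATTRS              # start from the full attribute set
    for _ in range(iters):
        k = 1
        while k <= k_max:
            cand = best[:]               # shaking: flip k random bits
            for a in rng.sample(range(N_ATTRS), k):
                cand[a] = not cand[a]
            cand = local_descent(cand)
            if consistent(cand) and sum(cand) < sum(best):
                best, k = cand, 1        # improvement: restart from neighborhood 1
            else:
                k += 1
    return best

reduct = vns_reduct()
```

On this table the unique minimal reduct is the single attribute a0, which the descent reaches from any consistent shaken candidate.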
Attribute reduction, as one of the essential applications of the rough set, has attracted extensive attention from scholars. Information granulation is a key step of attribute reduction, and its efficiency has a significant impact on the overall efficiency of attribute reduction. The information granulation of existing neighborhood rough set models is usually single-layer, and the construction of each information granule needs to search all the samples in the universe, which is inefficient. To fill this gap, a new neighborhood rough set model is proposed, which aims to improve the efficiency of attribute reduction by means of two-layer information granulation. The first layer of information granulation constructs a mapping-equivalence relation that divides the universe into multiple mutually independent mapping-equivalence classes. The second layer of information granulation views each mapping-equivalence class as a sub-universe and then performs neighborhood information granulation. A model named the mapping-equivalence neighborhood rough set model is derived from this strategy of two-layer information granulation. Experimental results show that, compared with other neighborhood rough set models, this model can effectively improve the efficiency of attribute reduction and reduce the uncertainty of the system. The strategy offers a new line of thinking for the exploration of neighborhood rough set models and the study of attribute reduction acceleration problems.
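The two-layer idea can be sketched as follows. A simple grid mapping stands in for the paper's mapping-equivalence relation (layer 1), and neighborhood granulation (layer 2) then searches only a sample's own cell and the adjacent cells instead of the whole universe; with a Chebyshev radius equal to the cell width this restricted search is exact. The data and radius are invented for the sketch.

```python
import random

random.seed(1)
# Invented numeric universe: 200 samples with two real-valued attributes.
DATA = [[random.random() for _ in range(2)] for _ in range(200)]
DELTA = 0.05  # neighborhood radius under the Chebyshev (max) distance

def chebyshev(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

# Layer 1: a grid mapping stands in for the mapping-equivalence relation;
# samples mapped to the same cell form one equivalence class (a sub-universe).
def cell(x):
    return tuple(int(v // DELTA) for v in x)

blocks = {}
for idx, x in enumerate(DATA):
    blocks.setdefault(cell(x), []).append(idx)

# Layer 2: neighborhood granulation searches only the sample's cell and the
# adjacent cells (exact for radius DELTA on this grid) instead of all of U.
def neighborhood(i):
    ci = cell(DATA[i])
    cand = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            for j in blocks.get((ci[0] + dx, ci[1] + dy), [])]
    return {j for j in cand if chebyshev(DATA[i], DATA[j]) <= DELTA}

def neighborhood_flat(i):
    """Single-layer baseline: scan the whole universe."""
    return {j for j in range(len(DATA)) if chebyshev(DATA[i], DATA[j]) <= DELTA}
```

Both routines compute identical neighborhoods, but the two-layer version inspects only a handful of candidates per sample.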
A support vector machine (SVM) ensemble classifier is proposed. The performance of an SVM trained in an input space consisting of all the information from many sources is not always good. Partitioning the original input space into several input subspaces usually improves performance. Unlike conventional partition methods, the partition method used in this paper, attribute reduction based on rough sets theory, allows the input subspaces to partially overlap. These input subspaces can offer complementary information about hidden data patterns. In every subspace, an SVM sub-classifier is learned. Using information fusion techniques, the SVM sub-classifiers with better performance are selected and combined to construct an SVM ensemble. The proposed method is applied to decision-making in medical diagnosis, and its performance is compared with several other popular ensemble methods. Experimental results demonstrate that the proposed approach makes full use of the information contained in the data and improves decision-making performance.
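The subspace-ensemble idea can be sketched independently of the SVM machinery. In the sketch below, a 1-nearest-neighbor rule stands in for each SVM sub-classifier (to keep the example dependency-free; in practice one would train, e.g., scikit-learn's `sklearn.svm.SVC` per subspace), and majority voting stands in for the paper's fusion step. The data and the overlapping attribute subsets are invented.

```python
# Invented training data: four attributes, two classes.
TRAIN = [([0.1, 0.2, 0.9, 0.8], 'a'), ([0.2, 0.1, 0.8, 0.9], 'a'),
         ([0.9, 0.8, 0.1, 0.2], 'b'), ([0.8, 0.9, 0.2, 0.1], 'b')]

# Partially overlapped attribute subspaces, e.g. obtained from rough-set reducts.
SUBSPACES = [(0, 1), (1, 2), (2, 3)]

def predict_sub(x, attrs):
    """Sub-classifier on one subspace: 1-NN over the projected attributes."""
    proj = lambda v: [v[i] for i in attrs]
    nearest = min(TRAIN,
                  key=lambda t: sum((p - q) ** 2 for p, q in zip(proj(t[0]), proj(x))))
    return nearest[1]

def predict_ensemble(x):
    """Fusion step: majority vote over the subspace sub-classifiers."""
    votes = [predict_sub(x, s) for s in SUBSPACES]
    return max(set(votes), key=votes.count)
```

Each sub-classifier sees a different, overlapping slice of the attributes, so their votes carry complementary information.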
Covering rough sets improve on traditional rough sets by considering a cover of the universe instead of a partition. In this paper, we develop several measures based on evidence theory to characterize covering rough sets. First, we present belief and plausibility functions in covering information systems and study their properties. With these measures we characterize lower and upper approximation operators and attribute reductions in covering information systems and decision systems, respectively. From these discussions we propose a basic framework for numerical characterizations of covering rough sets.
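A minimal sketch of the objects involved (with an invented cover, using the minimal-description neighborhood; the paper's exact definitions of the belief and plausibility functions may differ in detail): belief and plausibility are taken here as the normalized sizes of the lower and upper covering approximations, so Bel(X) <= |X|/|U| <= Pl(X).

```python
U = set(range(6))
# An invented cover of U (blocks may overlap, unlike a partition).
COVER = [{0, 1, 2}, {1, 2, 3}, {3, 4}, {4, 5}]

def neighborhood(x):
    """Minimal description: intersection of all covering blocks containing x."""
    nb = set(U)
    for block in COVER:
        if x in block:
            nb &= block
    return nb

def lower(X):
    return {x for x in U if neighborhood(x) <= X}

def upper(X):
    return {x for x in U if neighborhood(x) & X}

def belief(X):
    return len(lower(X)) / len(U)

def plausibility(X):
    return len(upper(X)) / len(U)
```

For X = {1, 2, 3} this gives lower = {1, 2, 3} and upper = {0, 1, 2, 3}, so Bel = 1/2 and Pl = 2/3 bracket |X|/|U| = 1/2.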
The quantity of well logging data is increasing exponentially, so methods of extracting useful information or attributes from a logging database are becoming very important in logging interpretation. A method of logging attribute reduction based on rough sets is therefore presented: first determine the core of the information table, then calculate the significance of each attribute, and finally obtain the relative reduction table. The application results show that this method of attribute reduction is feasible and can be used to optimize logging attributes and greatly reduce redundant logging information.
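The core-then-significance procedure can be sketched on an invented toy logging table (the attribute values, labels and significance measure are assumptions; positive-region size is used as a simple significance measure):

```python
# Invented toy logging decision table: three condition attributes, decision last.
ROWS = [
    (0, 0, 0, 'oil'),
    (0, 0, 1, 'oil'),
    (1, 1, 0, 'water'),
    (1, 1, 1, 'water'),
]
ATTRS = [0, 1, 2]

def consistent(attrs):
    seen = {}
    for row in ROWS:
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, row[-1]) != row[-1]:
            return False
    return True

def pos_region_size(attrs):
    """Number of objects lying in decision-pure blocks of the induced partition."""
    blocks = {}
    for row in ROWS:
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(row[-1])
    return sum(len(b) for b in blocks.values() if len(set(b)) == 1)

def reduce_table():
    # Step 1: the core, i.e. attributes whose removal from the full set breaks consistency.
    red = [a for a in ATTRS if not consistent([b for b in ATTRS if b != a])]
    # Step 2: greedily add the most significant remaining attribute until consistent.
    while not consistent(red):
        best = max((a for a in ATTRS if a not in red),
                   key=lambda a: pos_region_size(red + [a]))
        red.append(best)
    return red
```

In this toy table a0 and a1 are duplicated columns, so the core is empty and the greedy step selects one of them, yielding the reduct [0].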
Classical rough sets have limited processing capacity for fuzzy decision tables. Combining fuzzy sets with classical rough sets, an attribute reduction algorithm for fuzzy decision tables is studied. First, a new similarity degree and a new similarity category are defined, and similarity category clusters divided by condition attributes are provided. Two theorems are then presented, and a new attribute reduction algorithm is subsequently proposed. Finally, the new algorithm is verified on a performance evaluation decision table of a self-repairing flight-control system. The results show that the proposed attribute reduction algorithm is able to deal with fuzzy decision tables to a certain extent.
The original fault data of oil-immersed transformers often contains a large number of unnecessary attributes, which greatly increases the running time of the algorithm and reduces the classification accuracy, raising the diagnosis error rate. Therefore, in order to obtain high-quality oil-immersed transformer fault attribute data sets, an improved imperialist competitive algorithm is proposed to optimize rough set discretization of the original fault data set and the subsequent attribute reduction. The feasibility of the proposed algorithm is verified by experiments and compared with other intelligent algorithms. Results show that the algorithm stabilized at the 27th iteration with a reduction rate of 56.25% and a reduction accuracy of 98%. When a BP neural network was used to classify the reduction results, the accuracy was 86.25%, and the overall effect was better than those of the original data and other algorithms. Hence, the proposed method is effective for fault attribute reduction of oil-immersed transformers.
For garment or fabric appearance, the cloth smoothness grade is one of the most important performance factors in the textile and garment community. In this paper, a new objective method for fabric smoothness grade evaluation is constructed based on Rough Set Theory. The objective smoothness grading model takes the parameters of the point-sampled models of 120 AATCC replicas as the conditional attributes and forms a smoothness grading decision table. The NS discretization method and a genetic-algorithm reduction method are then used for attribute discretization and feature reduction. Finally, the grading model is expressed as simple, intuitive classification rules. Simulation results show the validity of the fabric smoothness grading system built on rough sets.
It is well known that attribute reduction is a crucial action of rough sets. Its significant characteristic is that it can reduce the dimensionality of data while keeping clear semantic explanations. Normally, the learning performance of the attributes in the derived reduct is what matters most. Since the related measures of rough sets dominate the whole process of identifying qualified attributes and deriving a reduct, those measures have a direct impact on the performance of the selected attributes. However, most previous research on attribute reduction uses measures from either a supervised perspective or an unsupervised perspective, which are insufficient to identify attributes with superior learning performance, such as stability and accuracy. In order to improve the classification stability and classification accuracy of reducts, a novel measure is proposed in this paper based on the fusion of supervised and unsupervised perspectives: (1) from the supervised perspective, approximation quality is helpful in quantitatively characterizing the relationship between attributes and labels; (2) from the unsupervised perspective, conditional entropy is helpful in quantitatively describing the internal structure of the data itself. To demonstrate the effectiveness of the proposed measure, 18 University of California, Irvine (UCI) datasets and 2 Yale face datasets are employed in comparative experiments. The experimental results show that the proposed measure does well in selecting attributes that provide distinguished classification stabilities and classification accuracies.
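The fusion of the two perspectives can be sketched as a weighted score. The table, the weighting, and the use of the partition's block-size entropy as a stand-in for the paper's conditional entropy are all assumptions made for illustration, not the paper's actual measure.

```python
from collections import defaultdict
from math import log2

# Invented toy decision table: condition attributes a0..a2, decision last.
ROWS = [
    (0, 1, 0, 'y'), (0, 1, 1, 'y'), (1, 0, 0, 'n'),
    (1, 0, 1, 'n'), (0, 0, 1, 'y'), (1, 1, 0, 'n'),
]
N = len(ROWS)

def partition(attrs):
    blocks = defaultdict(list)
    for r in ROWS:
        blocks[tuple(r[a] for a in attrs)].append(r)
    return list(blocks.values())

def approximation_quality(attrs):
    """Supervised term: fraction of objects in decision-pure blocks."""
    return sum(len(b) for b in partition(attrs) if len({r[-1] for r in b}) == 1) / N

def structure_entropy(attrs):
    """Unsupervised term (a stand-in for the paper's conditional entropy):
    entropy of the block-size distribution of the induced partition."""
    return -sum((len(b) / N) * log2(len(b) / N) for b in partition(attrs))

def fused_score(attrs, lam=0.5):
    # Higher approximation quality and lower structural entropy are both rewarded.
    return (lam * approximation_quality(attrs)
            + (1 - lam) * (1 - structure_entropy(attrs) / log2(N)))
```

Here a0 determines the decision (quality 1.0) while a2 does not (quality 0.0), so the fused score ranks a0 far above a2.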
A comprehensive evaluation method for enterprise core competitiveness is proposed by combining rough set and gray correlation theories. First, the initial indices are screened through a rough set attribute reduction algorithm, and the evaluation weight of each index is obtained through rough set theory. Then, based on gray correlation theory, an evaluation model is built for empirical analysis. Thirty financial institutions in the Yangtze River Delta are examined from theoretical and empirical perspectives. The results demonstrate both the feasibility of the rough set attribute reduction algorithm for the core competitiveness index system of financial institutions and the accuracy of the combination of these two methods in the comprehensive evaluation of corporate core competitiveness.
For neighborhood rough set attribute reduction algorithms based on the dependency degree, a neighborhood computation method incorporating attribute weight values and a neighborhood rough set attribute reduction algorithm using discernment as the heuristic information are proposed. The reduction algorithm comprehensively considers the dependency degree and the neighborhood granulation degree of attributes, allowing a more accurate measurement of attribute importance. Example analyses and experimental results demonstrate the feasibility and effectiveness of the algorithm.
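A minimal sketch of dependency-degree-driven reduction with a weighted neighborhood distance follows. The data, weights, radius and forward-selection loop are invented for illustration; the paper's discernment heuristic and granulation-degree term are not reproduced.

```python
import random

random.seed(0)
# Invented numeric decision system: a0 and a1 determine the label, a2 is noise.
DATA = []
for _ in range(60):
    x = [random.random(), random.random(), random.random()]
    DATA.append((x, int(x[0] + x[1] > 1.0)))

WEIGHTS = [1.0, 1.0, 1.0]   # per-attribute weights folded into the distance
DELTA = 0.15                # neighborhood radius

def dist(x, y, attrs):
    return max(WEIGHTS[a] * abs(x[a] - y[a]) for a in attrs)

def gamma(attrs):
    """Dependency degree: fraction of samples whose delta-neighborhood is label-pure."""
    if not attrs:
        return 0.0
    pure = 0
    for xi, yi in DATA:
        if all(yj == yi for xj, yj in DATA if dist(xi, xj, attrs) <= DELTA):
            pure += 1
    return pure / len(DATA)

def forward_reduct():
    """Greedy forward selection by dependency-degree gain."""
    red, rest = [], [0, 1, 2]
    full = gamma(rest)
    while gamma(red) < full:
        best = max(rest, key=lambda a: gamma(red + [a]))
        red.append(best)
        rest.remove(best)
    return red
```

Because adding attributes can only shrink neighborhoods under the max distance, gamma is monotone, and the loop stops as soon as the selected subset reaches the dependency degree of the full attribute set.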
Attribute reduction through the combined approach of Rough Sets (RS) and algebraic topology is an open research topic with significant potential for applications. Several works have established a strong relationship between RS and topological spaces for the attribute reduction problem. However, these recent methods focus on constructing a new measure for attribute selection, while the search strategy still selects each attribute and gradually adds it to the reduct; consequently, they tend to be inefficient on high-dimensional datasets. To overcome these challenges, we use the separability property of the Hausdorff topology to quickly identify distinguishable attributes, which significantly reduces the time of the attribute filtering stage of the algorithm. In addition, we propose the concept of Hausdorff topological homomorphism to construct candidate reducts, which significantly reduces the number of candidate reducts for the wrapper stage of the algorithm. These two stages have the greatest effect on reducing the computing time of the proposed algorithm, which we call the Cluster Filter Wrapper algorithm based on Hausdorff Topology. Experimental validation on data from the UCI Machine Learning Repository shows that the proposed method is efficient in both execution time and reduct size.
Several strategies for minimal attribute reduction with polynomial time complexity (O(nk)) have been developed in rough set theory. Are they complete? While investigating the attribute reduction strategy based on the discernibility matrix (DM), a counterexample is constructed theoretically, which demonstrates that these strategies are all incomplete with respect to minimal reduction.
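To make the DM-based setting concrete (the toy table is invented for illustration; the paper's counterexample itself is not reproduced), the discernibility matrix and an exhaustive minimal-reduct search can be sketched as:

```python
from itertools import combinations

# Invented toy decision table: condition attributes a0..a2, decision last.
ROWS = [
    (0, 0, 1, 'y'),
    (1, 0, 0, 'y'),
    (0, 1, 0, 'n'),
    (1, 1, 1, 'n'),
]
N_ATTRS = 3

def discernibility_matrix():
    """One entry per pair of objects with different decisions: the set of
    condition attributes on which the two objects differ."""
    return [{a for a in range(N_ATTRS) if r1[a] != r2[a]}
            for r1, r2 in combinations(ROWS, 2) if r1[-1] != r2[-1]]

def minimal_reducts():
    """Brute force: the smallest attribute subsets hitting every DM entry."""
    dm = [e for e in discernibility_matrix() if e]
    for k in range(1, N_ATTRS + 1):
        hits = [set(c) for c in combinations(range(N_ATTRS), k)
                if all(set(c) & e for e in dm)]
        if hits:
            return hits
    return []
```

Every reduct must intersect every nonempty matrix entry (a hitting-set problem), which is why polynomial-time heuristics over the DM can miss the minimal reduction; here the exhaustive search finds the unique minimal reduct {a1}.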
The concept of a consistent approximation representation space is introduced. Many types of information systems can be treated and unified as consistent approximation representation spaces. Within the framework of this space, a judgment theorem for determining a consistent attribute set is established, from which an approach to attribute reduction in information systems is obtained. The characterizations of three important types of attribute sets (the core attribute set, the relative necessary attribute set and the unnecessary attribute set) are also examined.
This paper proposes, from the viewpoint of the relation matrix, a new algorithm of attribute reduction for decision systems. Two new and relatively reasonable indices are first defined to measure the significance of attributes in decision systems, and a heuristic attribute reduction algorithm is then formulated. Moreover, the time complexity of the algorithm is analyzed and the algorithm is proved to be complete. Numerical experiments are also conducted to assess the performance of the presented algorithm, and the results demonstrate that it is both effective and efficient.
Funding: supported by the National Natural Science Foundation of China (60873069, 61171132); the Jiangsu Government Scholarship for Overseas Studies (JS-2010-K005); the Funding of Jiangsu Innovation Program for Graduate Education (CXZZ11 0219); the Open Project Program of Jiangsu Provincial Key Laboratory of Computer Information Processing Technology (KJS1023); and the Applying Study Foundation of Nantong (BK2011062).
Funding: supported by the National Natural Science Foundation of China (No. 60308002).
Funding: supported by the National Natural Science Foundation of China (Nos. 62006099, 62076111) and the Key Laboratory of Oceanographic Big Data Mining & Application of Zhejiang Province (No. OBDMA202104).
Funding: supported by the High Technology Research and Development Programme of China (2002AA412010), the National Key Basic Research and Development Program of China (2002cb312200), and the National Natural Science Foundation of China (60174038).
Funding: supported by a grant of the NSFC (70871036) and a grant of the National Basic Research Program of China (2009CB219801-3).
Funding: supported by the Foundation and Frontier Technologies Research Plan Projects of Henan Province of China under Grant No. 102300410266.
Funding: sponsored by the National Natural Science Foundation of China (Grant No. 51504085) and the Natural Science Foundation for Returnees of Heilongjiang Province of China (Grant No. LC2017026).
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62006099, 62076111); the Key Research and Development Program of Zhenjiang-Social Development (Grant No. SH2018005); the Natural Science Foundation of Jiangsu Higher Education (Grant No. 17KJB520007); and the Industry-school Cooperative Education Program of the Ministry of Education (Grant No. 202101363034).
文摘It is well-known that attribute reduction is a crucial action of rough set.The significant characteristic of attribute reduction is that it can reduce the dimensions of data with clear semantic explanations.Normally,the learning performance of attributes in derived reduct is much more crucial.Since related measures of rough set dominate the whole process of identifying qualified attributes and deriving reduct,those measures may have a direct impact on the performance of selected attributes in reduct.However,most previous researches about attribute reduction take measures related to either supervised perspective or unsupervised perspective,which are insufficient to identify attributes with superior learning performance,such as stability and accuracy.In order to improve the classification stability and classification accuracy of reduct,in this paper,a novel measure is proposed based on the fusion of supervised and unsupervised perspectives:(1)in terms of supervised perspective,approximation quality is helpful in quantitatively characterizing the relationship between attributes and labels;(2)in terms of unsupervised perspective,conditional entropy is helpful in quantitatively describing the internal structure of data itself.In order to prove the effectiveness of the proposed measure,18 University of CaliforniaIrvine(UCI)datasets and 2 Yale face datasets have been employed in the comparative experiments.Finally,the experimental results show that the proposed measure does well in selecting attributes which can provide distinguished classification stabilities and classification accuracies.
Abstract: A comprehensive evaluation method for enterprise core competitiveness is proposed by combining rough set and gray correlation theories. First, the initial indices are screened through a rough set attribute reduction algorithm, and the evaluation weight of each index is obtained through rough set theory. Then, based on gray correlation theory, an evaluation model is built for empirical analysis. Thirty financial institutions in the Yangtze River Delta are examined from theoretical and empirical perspectives. The results demonstrate not only the feasibility of the rough set attribute reduction algorithm for the core competitiveness index system of financial institutions, but also the accuracy of combining these two methods in the comprehensive evaluation of corporate core competitiveness.
Funding: Anhui Provincial University Research Project (Project Number: 2023AH051659), Tongling University Talent Research Initiation Fund Project (Project Number: 2022tlxyrc31), Tongling University School-Level Scientific Research Project (Project Number: 2021tlxytwh05), and Tongling University Horizontal Project (Project Number: 2023tlxyxdz237).
Abstract: For neighborhood rough set attribute reduction algorithms based on the dependency degree, a neighborhood computation method incorporating attribute weight values and a neighborhood rough set attribute reduction algorithm using discernibility as the heuristic information are proposed. The reduction algorithm comprehensively considers the dependency degree and the neighborhood granulation degree of attributes, allowing a more accurate measurement of attribute importance. Example analyses and experimental results demonstrate the feasibility and effectiveness of the algorithm.
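The dependency degree that such neighborhood-based algorithms build on can be sketched on a toy numeric dataset. The data `X`, labels `y`, and radius `DELTA` are made-up illustrative values, and the plain Euclidean neighborhood below omits the attribute weighting the abstract describes:

```python
import math

# Toy numeric data; y holds the class labels. (Illustrative values only.)
X = [[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.9]]
y = ["a", "a", "b", "b"]
DELTA = 0.2  # neighborhood radius, an assumed parameter

def neighborhood(i, attrs):
    """Indices of samples within DELTA of sample i on the chosen attributes."""
    return [j for j in range(len(X))
            if math.dist([X[i][a] for a in attrs],
                         [X[j][a] for a in attrs]) <= DELTA]

def dependency(attrs):
    """Fraction of samples whose whole neighborhood shares their label
    (the positive region) -- the dependency degree of the decision on attrs."""
    pos = [i for i in range(len(X))
           if all(y[j] == y[i] for j in neighborhood(i, attrs))]
    return len(pos) / len(X)

print(dependency([0, 1]))
```

A heuristic reduction algorithm would then compare `dependency` values with and without each attribute to rank attribute importance.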
Funding: Funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant Number 102.05-2021.10.
Abstract: Attribute reduction through the combined approach of rough sets (RS) and algebraic topology is an open research topic with significant application potential. Several works have established a strong relationship between RS and topological spaces for the attribute reduction problem. However, these recent methods focused on constructing new measures for attribute selection, while the search strategy still selects each attribute and gradually adds it to the reduct; consequently, they tend to be inefficient on high-dimensional datasets. To overcome these challenges, we use the separability property of the Hausdorff topology to quickly identify distinguishable attributes, which significantly reduces the time of the attribute filtering stage of the algorithm. In addition, we propose the concept of a Hausdorff topological homomorphism to construct candidate reducts, which significantly reduces the number of candidate reducts for the wrapper stage. These two stages have the greatest effect on reducing the computing time of the proposed algorithm, which we call the Cluster Filter Wrapper algorithm based on Hausdorff Topology. Experimental validation on data from the UCI Machine Learning Repository shows that the proposed method is efficient in both execution time and reduct size.
Abstract: Several strategies for minimal attribute reduction with polynomial time complexity (O(nk)) have been developed in rough set theory. Are they complete? While investigating the attribute reduction strategy based on the discernibility matrix (DM), a counterexample is constructed theoretically, which demonstrates that these strategies are all incomplete with respect to minimal reduction.
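The discernibility matrix at the center of this analysis can be built with a few lines of code. The decision table below is a made-up toy example; the construction itself (record, for each pair of objects with different decisions, the condition attributes on which they differ) is the standard DM definition:

```python
from itertools import combinations

# Toy decision table: columns a1..a3 are condition attributes,
# 'd' is the decision attribute. (Illustrative data only.)
table = [
    {"a1": 1, "a2": 0, "a3": 1, "d": "yes"},
    {"a1": 1, "a2": 1, "a3": 0, "d": "no"},
    {"a1": 0, "a2": 0, "a3": 1, "d": "no"},
]
conds = ["a1", "a2", "a3"]

def discernibility_matrix(table, conds):
    """Entry (i, j) lists the condition attributes that distinguish
    objects i and j when their decisions differ; pairs with equal
    decisions are omitted."""
    m = {}
    for i, j in combinations(range(len(table)), 2):
        if table[i]["d"] != table[j]["d"]:
            m[(i, j)] = {a for a in conds if table[i][a] != table[j][a]}
    return m

print(discernibility_matrix(table, conds))
```

A reduct must intersect every non-empty entry of this matrix; minimal-reduction heuristics differ in how they pick attributes to cover those entries, which is where the incompleteness arises.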
Funding: Major State Basic Research Development Program of China (973 Program) (Grant No. 2002CB312200) and the National Natural Science Foundation of China (Grant Nos. 60673096 and 60373078).
Abstract: The concept of a consistent approximation representation space is introduced. Many types of information systems can be treated and unified as consistent approximation representation spaces. At the same time, under the framework of this space, a judgment theorem for determining a consistent attribute set is established, from which an approach to attribute reduction in information systems is obtained. The characterizations of three important types of attribute sets (the core attribute set, the relative necessary attribute set, and the unnecessary attribute set) are also examined.
Funding: Supported by grants from the National Natural Science Foundation of China (No. 70861001) and the Natural Science Foundation of Hainan Province in China (No. 109005).
Abstract: This paper proposes, from the viewpoint of the relation matrix, a new attribute reduction algorithm for decision systems. Two new and relatively reasonable indices are first defined to measure the significance of attributes in decision systems, and a heuristic attribute reduction algorithm is then formulated. Moreover, the time complexity of the algorithm is analyzed and it is proved to be complete. Numerical experiments were also conducted to assess the performance of the presented algorithm, and the results demonstrate that it is both effective and efficient.
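The general shape of heuristic attribute reduction of this kind can be sketched on a toy table. For illustration, the classical dependency degree stands in as the significance index (the paper's relation-matrix indices are different), and the table itself is made up:

```python
from collections import defaultdict

# Toy decision table: columns 0 and 1 are condition attributes,
# column 2 is the decision. (Illustrative data only.)
rows = [
    ("sunny", "hot", "no"),
    ("sunny", "mild", "yes"),
    ("rainy", "hot", "no"),
    ("rainy", "mild", "no"),
]
ATTRS = [0, 1]  # condition attribute columns

def gamma(attrs):
    """Dependency degree: share of rows whose equivalence class under
    `attrs` is pure in the decision column."""
    blocks = defaultdict(list)
    for r in rows:
        blocks[tuple(r[a] for a in attrs)].append(r[2])
    pure = sum(len(v) for v in blocks.values() if len(set(v)) == 1)
    return pure / len(rows)

def greedy_reduct():
    """Repeatedly add the attribute with the largest dependency gain
    until the dependency of the full attribute set is reached."""
    red, target = [], gamma(ATTRS)
    while gamma(red) < target:
        best = max((a for a in ATTRS if a not in red),
                   key=lambda a: gamma(red + [a]))
        red.append(best)
    return red

print(greedy_reduct())
```

Such greedy schemes are fast but, as the discernibility-matrix counterexample above shows for related strategies, they are not guaranteed to find a minimal reduct in general; the paper's contribution is an algorithm whose completeness is proved.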