Abstract: Classical dimensionality reduction algorithms are not optimal reduction strategies and are no longer suitable for manifold-structured, large-scale Web text data. This paper therefore proposes a weighted, incremental, supervised dimensionality reduction algorithm, called the Weighted Incremental Maximum Margin Criterion (WIMMC). Through weighting, WIMMC achieves better results than traditional algorithms, and it can process large-scale Web text data incrementally and in a supervised manner. A convergence proof for the algorithm and several experiments are given; the experimental results show that classification after WIMMC dimensionality reduction is more effective than with other dimensionality reduction algorithms.
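The abstract above does not spell out the criterion itself, but the underlying Maximum Margin Criterion (MMC) that WIMMC extends is standard: it seeks a projection W maximizing tr(W^T (S_b - S_w) W), the gap between between-class and within-class scatter, solved by an eigendecomposition. The following is a minimal batch (non-weighted, non-incremental) sketch of plain MMC for orientation only; it is not the paper's WIMMC algorithm, and the function name and interface are our own.

```python
import numpy as np

def mmc_projection(X, y, k):
    """Batch Maximum Margin Criterion (MMC) sketch.

    Finds a projection W maximizing tr(W^T (S_b - S_w) W), where
    S_b and S_w are the between-class and within-class scatter
    matrices. The top-k eigenvectors of (S_b - S_w) form W.
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    S_b = np.zeros((d, d))
    S_w = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mean_all).reshape(-1, 1)
        S_b += len(Xc) * (diff @ diff.T)       # between-class scatter
        centered = Xc - mc
        S_w += centered.T @ centered            # within-class scatter
    # (S_b - S_w) is symmetric, so eigh applies; eigenvalues ascend.
    vals, vecs = np.linalg.eigh(S_b - S_w)
    W = vecs[:, ::-1][:, :k]                    # top-k eigenvectors
    return W
```

Projecting the data is then `X_low = X @ W`. WIMMC, per the abstract, additionally weights the scatter terms and updates W incrementally as new labeled Web documents arrive, rather than recomputing the eigendecomposition from scratch.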
Abstract: This paper presents a new inductive learning algorithm, HGR (Version 2.0), based on the newly developed extension matrix theory. The basic idea is to partition the positive examples of a given class in an example set into consistent groups, where each group corresponds to a consistent rule that covers all the examples in the group and none of the negative examples. The paper then compares the performance of HGR with other inductive algorithms, including C4.5, OC1, HCV and SVM. The authors selected 15 databases from the well-known UCI machine learning repository and also considered a real-world problem. Experimental results show that their method achieves higher accuracy and produces fewer rules than the other algorithms.
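The notion of a "consistent group" can be illustrated concretely: a group of positive examples is consistent if the most specific rule covering all of them (here, per attribute, the set of values seen in the group, for discrete attributes) matches no negative example. The sketch below is a hypothetical greedy illustration of that idea only; it is not the HGR extension-matrix algorithm, and all function names are our own.

```python
def covering_rule(group):
    """Most specific conjunctive rule covering a group:
    per attribute, the set of values seen in the group
    (a simple representation for discrete attributes)."""
    return [set(vals) for vals in zip(*group)]

def is_consistent(group, negatives):
    """A group is consistent if its covering rule matches
    no negative example."""
    rule = covering_rule(group)
    return not any(all(x[i] in rule[i] for i in range(len(rule)))
                   for x in negatives)

def partition_consistent(positives, negatives):
    """Greedy sketch: place each positive into the first group
    that stays consistent after adding it, else start a new group."""
    groups = []
    for p in positives:
        for g in groups:
            if is_consistent(g + [p], negatives):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups
```

For example, with positives (1,0) and (1,1) and a single negative (0,0), both positives share the first attribute value 1, so one group suffices; if merging two positives produced a rule that also matched a negative, they would land in separate groups, each yielding its own rule.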