Abstract: In traditional K-NN classification, the k decision nearest neighbors must be computed and searched for every test sample, so classification efficiency is low. To address this problem, an accelerated K-NN classification method with a double-layer structure (K-NN classification based on double-layer structure, KNN_DL) is proposed. The positive-class and negative-class samples are each partitioned into several subsets, and the center and radius of each subset are computed. When a new sample arrives, k decision nearest-neighbor subsets are selected; if they all carry the same class label, the sample is assigned that class; otherwise, the k nearest decision neighbors are selected from within those subsets. This double-layer acceleration scheme compresses the scale of the decision-neighbor search for each test sample and thus improves efficiency. Experimental results show that the KNN_DL method achieves high prediction speed together with good prediction accuracy.
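The two-layer scheme described in this abstract can be sketched as follows. The abstract does not specify how each class is partitioned or how ties are broken, so the random equal-size splits, Euclidean distance, and majority vote below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_subsets(X, y, n_subsets=2, seed=0):
    """Partition each class into subsets and record each subset's
    center and radius (random splits here; the paper's partitioning
    scheme may differ). Assumes every class has >= n_subsets points."""
    rng = np.random.default_rng(seed)
    subsets = []
    for label in np.unique(y):
        Xc = X[y == label]
        idx = rng.permutation(len(Xc))
        for part in np.array_split(idx, n_subsets):
            pts = Xc[part]
            center = pts.mean(axis=0)
            radius = np.linalg.norm(pts - center, axis=1).max()
            subsets.append((center, radius, pts, label))
    return subsets

def knn_dl_predict(x, subsets, k=3):
    # Layer 1: pick the k subsets whose centers are nearest to x.
    d = [np.linalg.norm(x - c) for c, _, _, _ in subsets]
    near = [subsets[i] for i in np.argsort(d)[:k]]
    labels = {s[3] for s in near}
    if len(labels) == 1:          # all candidate subsets agree
        return labels.pop()
    # Layer 2: plain k-NN restricted to the points of those subsets.
    pts = np.vstack([s[2] for s in near])
    lab = np.concatenate([[s[3]] * len(s[2]) for s in near])
    nn = np.argsort(np.linalg.norm(pts - x, axis=1))[:k]
    vals, counts = np.unique(lab[nn], return_counts=True)
    return vals[counts.argmax()]
```

The speedup comes from layer 1 comparing against subset centers only; the full point-level search in layer 2 runs only when the candidate subsets disagree on the label.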
Funding: the National Natural Science Foundation of China (No. 61502208); the Natural Science Foundation of Jiangsu Province of China (No. BK20150522); the Scientific and Technical Program of City of Huizhou (Nos. 2016X0422037 and 2017C0405021); the Natural Science Foundation of Huizhou University (Nos. hzux1201606 and hzu201701).
Abstract: Collaborative representation-based classification (CRC) is a distance-based method that obtains the original contributions from all samples to solve the sparse representation coefficient. We find that the discrimination in classification can be enhanced by integrating other distance-based features and/or preprocessing the original samples. In this paper, we propose an improved version of the CRC method that uses the Gabor wavelet transformation to preprocess the samples and also adopts nearest neighbor (NN) features; hence we call it GNN-CRC. First, the Gabor wavelet transformation is applied to minimize the effects of background in face images and to build Gabor features into the input data. Second, the distances obtained by NN and CRC are fused together to yield a more discriminative classification. Extensive experiments with different instantiations are conducted to evaluate the proposed method for face recognition. The experimental results illustrate that our method outperforms naive CRC as well as some other state-of-the-art algorithms.
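The core of the method above is coding a test sample over all training samples (CRC) and fusing the class-wise residual with the NN distance. The sketch below omits the Gabor preprocessing (which needs a full filter bank) and operates on raw feature vectors; the linear fusion rule and the weight `w` are illustrative assumptions, not the paper's exact fusion.

```python
import numpy as np

def crc_residuals(A, labels, x, lam=1e-3):
    """Collaborative representation: code x over all training samples
    (columns of A) with l2-regularized least squares, then measure the
    class-wise reconstruction residual."""
    n = A.shape[1]
    alpha = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ x)
    return {c: np.linalg.norm(x - A[:, labels == c] @ alpha[labels == c])
            for c in np.unique(labels)}

def nn_distances(A, labels, x):
    """Distance from x to the nearest training sample of each class."""
    d = np.linalg.norm(A - x[:, None], axis=0)
    return {c: d[labels == c].min() for c in np.unique(labels)}

def gnn_crc_predict(A, labels, x, w=0.5):
    # Fuse the two distance-type scores; a smaller fused score wins.
    crc = crc_residuals(A, labels, x)
    nn = nn_distances(A, labels, x)
    return min(crc, key=lambda c: w * nn[c] + (1 - w) * crc[c])
```

In practice the columns of `A` would be Gabor feature vectors rather than raw pixels, which is what distinguishes GNN-CRC from this plain-feature sketch.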
Abstract: Classification problems over large-scale data are common in practice. The traditional k-nearest neighbor, k-NN (k-Nearest Neighbor), classification method must traverse the entire training set, so its classification efficiency is low and it cannot handle classification tasks with large training sets. To address this problem, a clustering-based accelerated k-NN classification method, C_kNN (Speeding k-NN Classification Method Based on Clustering), is proposed. The method first clusters the training samples to obtain an initial clustering, computes the center of each cluster, and selects the training samples most similar to each cluster center to form a new training set. Then, for each test sample, it finds the k most similar samples in the new training set and takes the majority class label among those k neighbors as the predicted class. Experimental results show that C_kNN greatly improves classification efficiency while maintaining high classification accuracy.
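The cluster-then-condense step in this abstract can be sketched as below. The clustering algorithm, initialization, similarity measure, and the number of samples kept per cluster are not specified in the abstract; the few Lloyd (k-means) iterations with naive first-points initialization and Euclidean distance here are assumptions for illustration.

```python
import numpy as np

def condense_by_clustering(X, y, n_clusters=2, per_cluster=2, n_iter=10):
    """Cluster the training set with a few Lloyd (k-means) iterations,
    then keep only the per_cluster samples closest to each cluster
    center as the condensed training set."""
    centers = X[:n_clusters].astype(float)  # naive init: first points
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(n_clusters):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    assign = d.argmin(axis=1)
    keep = []
    for j in range(n_clusters):
        members = np.where(assign == j)[0]
        keep.extend(members[np.argsort(d[members, j])][:per_cluster])
    return X[keep], y[keep]

def knn_predict(Xtr, ytr, x, k=3):
    """Majority vote among the k nearest training samples."""
    nn = np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]
    vals, counts = np.unique(ytr[nn], return_counts=True)
    return vals[counts.argmax()]
```

The efficiency gain is that `knn_predict` now searches the condensed set (`n_clusters * per_cluster` points) instead of the full training set.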
Abstract: To address the low learning efficiency of the traditional K-nearest neighbor classification method, this paper proposes an accelerated K-nearest neighbor classification method based on parallel computing (PKNN), i.e., parallel K-nearest neighbor classification. The method first partitions the samples to be classified into different working subsets and then performs K-nearest neighbor classification on each subset in parallel. Since each working subset is far smaller than the whole data set after partitioning, the complexity of the classification algorithm is reduced, and large-scale classification problems can be handled effectively. Experimental results show that the PKNN method improves classification efficiency.
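The partition-and-parallelize idea in this abstract can be sketched as follows. The abstract does not name a parallel framework, so the thread pool below is an illustrative choice; per the abstract, it is the set of samples to be classified that is partitioned into working subsets.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def knn_batch(Xtr, ytr, Xte, k=3):
    """Plain k-NN over one working subset of test samples."""
    preds = []
    for x in Xte:
        nn = np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]
        vals, counts = np.unique(ytr[nn], return_counts=True)
        preds.append(vals[counts.argmax()])
    return preds

def pknn_predict(Xtr, ytr, Xte, k=3, n_workers=2):
    # Partition the samples to classify into working subsets and run
    # k-NN on each subset concurrently; Executor.map preserves order.
    chunks = np.array_split(Xte, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        parts = ex.map(lambda c: knn_batch(Xtr, ytr, c, k), chunks)
    return [p for part in parts for p in part]
```

Each worker still scans the full training set; the speedup comes from classifying the working subsets of test samples concurrently rather than one sample at a time.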