Abstract: Based on stochastic gradient descent, an online support tensor machine (OSTM) algorithm is proposed. The algorithm learns from data in tensor form, acquired sequentially. It replaces inner-product computations on the original tensors with rank-one tensor decompositions, which not only preserves the natural structural information and relationships of the original tensors but also greatly reduces storage space and computation time. Experiments on 13 tensor datasets show that, compared with the online support vector machine, the online support tensor machine trains faster while achieving comparable test accuracy, and its advantage is especially pronounced for higher-order tensors.
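The saving described above rests on a standard identity: the inner product of two rank-one tensors equals the product of the mode-wise vector inner products, so the full tensors never need to be materialized. A minimal NumPy sketch (illustrative only, not the authors' OSTM implementation) checks this for third-order tensors:

```python
import numpy as np

rng = np.random.default_rng(0)
# Factor vectors of two rank-one tensors A = u1∘u2∘u3 and B = v1∘v2∘v3.
us = [rng.standard_normal(d) for d in (4, 5, 6)]
vs = [rng.standard_normal(d) for d in (4, 5, 6)]

def outer(vecs):
    """Materialize the full rank-one tensor via successive outer products."""
    t = vecs[0]
    for v in vecs[1:]:
        t = np.tensordot(t, v, axes=0)
    return t

# Naive path: build both 4x5x6 tensors, then take the elementwise sum-product.
naive = np.sum(outer(us) * outer(vs))

# Factorized path: product of three vector dot products,
# O(4+5+6) storage instead of O(4*5*6).
fast = np.prod([u @ v for u, v in zip(us, vs)])

assert np.allclose(naive, fast)
```

This is why substituting rank-one factors for the original tensors cuts both memory and per-update cost, with the gap widening as the tensor order grows.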
Funding: National Natural Science Foundation of China (No. 61070033); Fundamental Research Funds for the Central Universities, China (No. 2012ZM0061)
Abstract: It is a challenging topic to develop an efficient algorithm for large-scale classification problems in many applications of machine learning. In this paper, a hierarchical clustering and fixed-layer local learning (HCFLL) based support vector machine (SVM) algorithm is proposed to deal with this problem. Firstly, HCFLL hierarchically clusters a given dataset into a modified clustering feature tree based on the ideas of unsupervised clustering and supervised clustering. Then it locally trains an SVM on each labeled subtree at a fixed layer of the tree. The experimental results show that, compared with existing popular algorithms such as the core vector machine and the decision-tree support vector machine, HCFLL can significantly improve training and testing speeds with comparable testing accuracy.
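The partition-then-train-locally idea can be sketched compactly. The sketch below is a simplification under assumed names: it uses a flat KMeans partition in place of HCFLL's hierarchical clustering feature tree and fixed-layer selection, then trains one linear SVM per cluster and routes each query to its nearest cluster's model:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Toy labeled dataset standing in for a large-scale classification problem.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Train one local SVM per cluster; a cluster containing a single class
# degenerates to a constant prediction.
local = {}
for c in range(k):
    idx = km.labels_ == c
    if len(np.unique(y[idx])) < 2:
        local[c] = int(y[idx][0])
    else:
        local[c] = SVC(kernel="linear").fit(X[idx], y[idx])

def predict(Xq):
    cs = km.predict(Xq)              # route each query to its cluster
    out = np.empty(len(Xq), dtype=int)
    for c in range(k):
        m = cs == c
        if not m.any():
            continue
        clf = local[c]
        out[m] = clf if isinstance(clf, int) else clf.predict(Xq[m])
    return out

acc = (predict(X) == y).mean()
```

Each local SVM sees only its cluster's points, so training cost drops from one problem of size n to several problems of roughly size n/k, which is the source of the speedup the abstract reports.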