Journal articles: 3 articles found
1. A Transfer Sparse Coding Method Based on Dictionary Optimization
Authors: 孟欠欠, 沈龙凤, 李晓, 李梦雯. 《黑龙江工业学院学报(综合版)》, 2019, No. 12, pp. 73-78 (6 pages)
Abstract: Conventional coding methods typically initialize the dictionary at random, which severely limits image classification accuracy. To address this, a k-means-based dictionary optimization method is proposed and combined with transfer sparse coding. Each local descriptor of an image is first projected into a linear subspace, and the k features closest to a given feature in that subspace are taken as the over-complete dictionary, so that basis vectors are selected in a balanced way to represent the image. The method also takes the distribution discrepancy between domains and the local structure of image features into account, which effectively keeps the encoding stable. Experiments on three cross-domain image datasets show that, compared with related methods, the approach significantly improves cross-domain classification performance. (An illustrative sketch of the dictionary initialization is given after the keywords.)
Keywords: k-means features; dictionary optimization; cross-domain; transfer sparse coding
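The k-means dictionary initialization described in the abstract can be illustrated with a short sketch. This is not the authors' exact algorithm (the paper's projection, distance measure, and atom count are not specified here); `descriptors`, `n_atoms`, and `subspace_dim` are assumed names, with PCA standing in for the linear-subspace projection.

```python
# Minimal sketch: initialize an over-complete dictionary for sparse coding
# from k-means structure instead of random atoms.
# Assumed inputs: `descriptors`, an (n, d) array of local image descriptors;
# `n_atoms`, the dictionary size; `subspace_dim`, the projection dimension.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans


def kmeans_dictionary(descriptors, n_atoms=256, subspace_dim=64, seed=0):
    """Project descriptors into a linear subspace, cluster them, and keep the
    descriptor nearest to each cluster centre as a dictionary atom, so the
    atoms are spread evenly over the feature distribution."""
    # 1) Linear-subspace projection (PCA stands in for the paper's projection).
    pca = PCA(n_components=min(subspace_dim, descriptors.shape[1]))
    z = pca.fit_transform(descriptors)

    # 2) One k-means centre per atom gives balanced coverage of the data.
    km = KMeans(n_clusters=n_atoms, n_init=10, random_state=seed).fit(z)

    # 3) For each centre, keep the real descriptor closest to it (measured in
    #    the subspace) as the atom, then L2-normalise the atoms.
    atoms = []
    for c in km.cluster_centers_:
        idx = np.argmin(np.linalg.norm(z - c, axis=1))
        atoms.append(descriptors[idx])
    D = np.asarray(atoms, dtype=float)
    return D / (np.linalg.norm(D, axis=1, keepdims=True) + 1e-12)
```

The returned matrix D would then replace a randomly initialized dictionary in a sparse-coding objective, so the atoms cover the descriptor distribution more evenly than random draws.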
2. Research on a Transfer Sparse Algorithm Based on Dual-Manifold Regularization
Authors: 孟欠欠, 沈龙凤, 李梦雯. 《重庆科技学院学报(自然科学版)》 (CAS), 2020, No. 4, pp. 76-80, 99 (6 pages)
Abstract: To improve cross-domain image representation models and strengthen cross-domain image classification, a transfer sparse representation algorithm based on a dual-manifold regularization term is studied. Building on transfer sparse coding, a Laplacian graph is constructed from the local manifold information among features, and the corresponding feature Laplacian regularization term is incorporated into the objective function. First, basis vectors are selected in a balanced way through k-means clustering; next, a Laplacian graph is built from the local manifold structure among features and added to the objective function of the transfer sparse coding algorithm as a regularization term; at the same time, both the geometric manifold structure and the distribution discrepancy of the cross-domain images are taken into account to keep the encoding stable and robust. (An illustrative sketch of the Laplacian regularizer is given after the keywords.)
Keywords: k-means; transfer sparse coding; manifold regularization; Laplacian graph
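The feature Laplacian regularization term mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's objective function; `X` (features), `S` (sparse codes), `n_neighbors`, and `sigma` are assumed inputs, and a k-NN heat-kernel graph stands in for whatever affinity the authors actually use.

```python
# Minimal sketch: build a k-NN graph Laplacian over the samples and evaluate
# the manifold-regularisation term tr(S^T L S), which is small when nearby
# features receive similar sparse codes.
# Assumed inputs: `X`, an (n, d) feature matrix; `S`, an (n, m) code matrix.
import numpy as np
from sklearn.neighbors import kneighbors_graph


def laplacian_regulariser(X, S, n_neighbors=5, sigma=1.0):
    # Symmetric k-NN affinity graph with Gaussian (heat-kernel) weights.
    W = kneighbors_graph(X, n_neighbors, mode="distance", include_self=False)
    W = W.toarray()
    W[W > 0] = np.exp(-(W[W > 0] ** 2) / (2 * sigma ** 2))
    W = np.maximum(W, W.T)          # symmetrise the graph

    L = np.diag(W.sum(axis=1)) - W  # graph Laplacian L = D - W

    # tr(S^T L S) = 0.5 * sum_ij W_ij * ||s_i - s_j||^2
    return np.trace(S.T @ L @ S)
```

In a full transfer sparse coding objective, this term would be added with a weight to the reconstruction and sparsity terms, encouraging nearby features to share similar codes.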
3. Transfer learning with deep sparse auto-encoder for speech emotion recognition
Authors: Liang Zhenlin, Liang Ruiyu, Tang Manting, Xie Yue, Zhao Li, Wang Shijia. 《Journal of Southeast University (English Edition)》 (EI, CAS), 2019, No. 2, pp. 160-167 (8 pages)
Abstract: In order to improve the efficiency of speech emotion recognition across corpora, a speech emotion transfer learning method based on the deep sparse auto-encoder is proposed. The algorithm first trains the deep sparse auto-encoder to reconstruct a small amount of target-domain data, so that the encoder learns a low-dimensional structural representation of the target domain. Then, the source-domain and target-domain data are encoded by the trained deep sparse auto-encoder, yielding reconstructed data whose low-dimensional structure is close to that of the target domain. Finally, part of the reconstructed labeled target-domain data is mixed with the reconstructed source-domain data to jointly train the classifier; this portion of target-domain data is used to guide the source-domain data. Experiments on the CASIA and SoutheastLab corpora show that, after transferring only a small amount of data, the model's recognition rate reaches 89.2% and 72.4% on the DNN, respectively. Compared with training on the complete original corpora, the rate decreases by only 2% on the CASIA corpus and 3.4% on the SoutheastLab corpus. The experiments show that the algorithm can approach the effect of labeling all the data even in the extreme case where only a small amount of the data set is labeled. (An illustrative sketch of a deep sparse auto-encoder is given after the keywords.)
Keywords: sparse auto-encoder; transfer learning; speech emotion recognition
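A deep sparse auto-encoder of the kind the abstract describes can be sketched in PyTorch. The layer sizes, sigmoid activations, KL-divergence sparsity penalty, and the names `DeepSparseAutoEncoder`, `kl_sparsity`, and `train_on_target` are all assumptions made for illustration; the paper's exact architecture and training schedule are not given here, and inputs are assumed to be feature vectors scaled to [0, 1].

```python
# Minimal sketch (assumed architecture, not the paper's exact network): a deep
# sparse auto-encoder with a KL-divergence sparsity penalty on the hidden
# activations, fit on a small amount of target-domain speech features.
import torch
import torch.nn as nn


class DeepSparseAutoEncoder(nn.Module):
    def __init__(self, in_dim=384, hidden_dims=(256, 128)):
        super().__init__()
        dims = (in_dim, *hidden_dims)
        enc, dec = [], []
        for a, b in zip(dims[:-1], dims[1:]):
            enc += [nn.Linear(a, b), nn.Sigmoid()]
        for a, b in zip(dims[::-1][:-1], dims[::-1][1:]):
            dec += [nn.Linear(a, b), nn.Sigmoid()]
        self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec)

    def forward(self, x):
        h = self.encoder(x)            # low-dimensional structural code
        return self.decoder(h), h      # reconstruction and hidden activations


def kl_sparsity(h, rho=0.05, eps=1e-8):
    """KL(rho || mean hidden activation), summed over hidden units."""
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()


def train_on_target(model, target_x, epochs=100, beta=0.1, lr=1e-3):
    """Fit the auto-encoder on (a small amount of) target-domain features."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon, h = model(target_x)
        loss = nn.functional.mse_loss(recon, target_x) + beta * kl_sparsity(h)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

Following the workflow in the abstract, the trained encoder would then re-encode both corpora, and the emotion classifier would be trained on the reconstructed source data mixed with a small amount of reconstructed labeled target data.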