Journal Article

SimCLIC: A Simple Framework for Contrastive Learning of Image Classification (cited by 2)

Abstract: Contrastive learning, a self-supervised learning method, is widely used in image representation learning. The core idea is to reduce the distance between positive sample pairs and increase the distance between negative sample pairs in the representation space. Siamese networks are the most common structure among current contrastive learning models. However, contrastive learning with positive and negative sample pairs on large datasets is computationally expensive, and positive samples are sometimes mislabeled as negative samples. Contrastive learning without negative sample pairs can still learn good representations. In this paper, we propose a simple framework for contrastive learning of image classification (SimCLIC). SimCLIC simplifies the Siamese network and learns image representations without negative sample pairs or momentum encoders. It works mainly by perturbing the representation generated by the encoder to produce different contrastive views. We apply three representation perturbation methods: history representation, representation dropout, and representation noise. We conducted experiments on several benchmark datasets, using image classification accuracy as the measure, and the results show that SimCLIC is competitive with current popular models. Finally, we performed ablation experiments to verify the effect of different hyperparameters and structures on model effectiveness.
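The abstract names three ways of perturbing an encoder's output representation to create contrastive views. The sketch below is an illustrative NumPy rendering of those three ideas, not the authors' implementation: the function names and the hyperparameters `p` (dropout rate), `sigma` (noise scale), and `alpha` (history mixing weight) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_view(z, p=0.2):
    # Representation dropout: randomly zero a fraction p of dimensions.
    mask = rng.random(z.shape) >= p
    return z * mask

def noise_view(z, sigma=0.1):
    # Representation noise: add zero-mean Gaussian noise to each dimension.
    return z + rng.normal(0.0, sigma, z.shape)

def history_view(z, z_hist, alpha=0.5):
    # History representation: mix the current representation with one
    # stored from an earlier training step.
    return alpha * z + (1.0 - alpha) * z_hist

z = rng.normal(size=8)       # current encoder output (toy dimension)
z_prev = rng.normal(size=8)  # representation kept from a previous step

views = [dropout_view(z), noise_view(z), history_view(z, z_prev)]
print([v.shape for v in views])
```

Each perturbed view has the same dimensionality as the original representation, so the contrastive objective can compare the views directly without a second forward pass through the encoder.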
Authors: Han YANG, Jun LI
Source: Journal of Systems Science and Information (系统科学与信息学报(英文)), CSCD, 2023, No. 2, pp. 204-218 (15 pages)