EvolveNet: Adaptive Self-Supervised Continual Learning without Prior Knowledge
Abstract: Unsupervised Continual Learning (UCL) refers to the ability to learn over time while remembering previous patterns without supervision. Although significant progress has been made in this direction, existing works often assume strong prior knowledge about forthcoming data (e.g., knowing class boundaries), which may not be obtainable in complex and unpredictable open environments. Inspired by real-world scenarios, a more practical problem setting called online self-supervised continual learning without prior knowledge is proposed in this paper. The proposed setting is challenging because the data are non-i.i.d. and lack external supervision and prior knowledge. To address these challenges, a method called EvolveNet is introduced: an adaptive self-supervised continual learning approach capable of purely extracting and memorizing representations from the data continuum. EvolveNet is designed around three main components: an adversarial pseudo-supervised learning loss, a self-supervised forgetting loss, and an online memory update for uniform subset selection. These three components are designed to work in synergy to maximize learning performance. Comprehensive experiments with EvolveNet are conducted on five public datasets. The results show that EvolveNet outperforms existing algorithms in all settings, achieving significantly improved accuracy on the CIFAR-10, CIFAR-100, and TinyImageNet datasets, and performing best on the multimodal incremental-learning datasets Core-50 and iLab-20M. Cross-dataset generalization experiments are also conducted, showing that EvolveNet is more robust in generalization. Finally, the EvolveNet model and core code are open-sourced on GitHub, facilitating progress in unsupervised continual learning and providing a useful tool and platform for the research community.
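The abstract's "online memory update for uniform subset selection" can be illustrated with reservoir sampling, a standard technique that keeps a fixed-size buffer as a uniform random subset of an unbounded data stream. The sketch below is an assumption-laden illustration of that general technique, not the paper's released implementation; the class name `UniformMemory` is hypothetical.

```python
import random

class UniformMemory:
    """Fixed-size replay buffer maintained as a uniform sample of the
    stream via reservoir sampling (illustrative only; EvolveNet's actual
    update rule may differ)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0  # total number of stream items observed so far

    def update(self, item):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            # Buffer not yet full: always store the item.
            self.buffer.append(item)
        else:
            # Replace a random slot with probability capacity / n_seen,
            # which keeps every item seen so far equally likely to remain.
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = item

# Feed a (possibly non-i.i.d.) stream; the buffer stays a uniform subset.
mem = UniformMemory(capacity=10)
for x in range(1000):
    mem.update(x)
print(len(mem.buffer))  # 10
```

The appeal of this rule in an online setting is that it needs only one pass, constant memory, and no knowledge of the stream length or class boundaries, matching the no-prior-knowledge constraint described above.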
Authors: LIU Zhuang; SONG Xiangrui; ZHAO Sihuan; SHI Ya; YANG Dengfeng (School of Fintech, Dongbei University of Finance and Economics, Dalian 116025, China; Department of Computer Science, Dalian University of Technology, Dalian 116024, China)
Source: Journal of Electronics & Information Technology (indexed by EI, CAS, CSCD, Peking University Core), 2024, Issue 8, pp. 3256-3266 (11 pages)
Funding: National Natural Science Foundation of China (72272028)
Keywords: Unsupervised Continual Learning (UCL); adaptive self-supervised; generalization; incremental learning; EvolveNet