Abstract
The key to zero-shot learning is to transfer the associations between samples and attributes, learned from the seen-class training set, to unseen classes. However, because only seen-class samples participate in training, attributes strongly associated with seen classes have their connection weights continually reinforced, while attributes that unseen classes emphasize effectively lack sufficient positive samples. As a result, the trained model cannot adequately represent or distinguish unseen-class samples in the testing phase. To limit the model's weights on attributes strongly associated with seen classes, we propose an end-to-end zero-shot learning framework based on attribute balancing regularization. The framework is simple to implement and easy to extend, and it achieves state-of-the-art results on both zero-shot learning and generalized zero-shot learning tasks across three mainstream attribute datasets.
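The abstract does not give the exact form of the regularizer. As an illustration only, the minimal PyTorch sketch below shows one plausible attribute balancing penalty in a compatibility-style ZSL model: the per-attribute weight norms of the attribute-prediction layer are pushed toward their mean, so that attributes strongly associated with seen classes cannot dominate. All names here (AttributeBalancedZSL, attribute_balance_penalty, lambda_bal) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class AttributeBalancedZSL(nn.Module):
    """Compatibility-style ZSL head (illustrative, not the paper's exact model):
    image features are projected into attribute space and scored against
    the per-class attribute vectors."""
    def __init__(self, feat_dim: int, num_attrs: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, num_attrs)  # maps CNN features to attribute scores

    def forward(self, feats: torch.Tensor, class_attrs: torch.Tensor):
        attr_scores = self.proj(feats)           # (B, A) predicted attribute strengths
        logits = attr_scores @ class_attrs.t()   # (B, C) compatibility with each class
        return logits, attr_scores

def attribute_balance_penalty(linear_layer: nn.Linear) -> torch.Tensor:
    """Hypothetical balancing term: penalize the spread of per-attribute weight
    norms so no single (seen-class-favoured) attribute dominates the projection."""
    col_norms = linear_layer.weight.norm(dim=1)      # one norm per attribute row
    return ((col_norms - col_norms.mean()) ** 2).mean()

# Sketch of a training step (feats, labels, class_attrs, lambda_bal are assumed):
# logits, _ = model(feats, class_attrs)
# loss = nn.functional.cross_entropy(logits, labels) \
#        + lambda_bal * attribute_balance_penalty(model.proj)
```

Because the penalty acts on the projection weights rather than on individual predictions, it can be added to any existing end-to-end ZSL training loop with one extra loss term; this matches the abstract's claim that the framework is simple to implement and extend, though the actual formulation in the paper may differ.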
Authors
Wu Fan (吴凡); Wang Kang (王康) (Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China)
Source
Computer Applications and Software (《计算机应用与软件》, PKU Core Journal)
2018, No. 10, pp. 165-170 (6 pages)
Keywords
Zero-shot learning
Deep learning
Attribute balancing
End-to-end training