Journal Articles
2 articles found
1. Self-Supervised Entity Alignment Based on Multi-Modal Contrastive Learning
Authors: Bo Liu, Ruoyi Song, Yuejia Xiang, Junbo Du, Weijian Ruan, Jinhui Hu. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, Issue 11, pp. 2031-2033 (3 pages).
Dear Editor, this letter proposes an unsupervised entity alignment method, which adaptively integrates multiple multi-modal knowledge graphs. In recent years, large-scale multi-modal knowledge graphs (LMKGs), containing text and images, have been widely applied in numerous knowledge-driven topics, such as question answering, entity linking, information extraction, reasoning, and recommendation.
Keywords: MODAL, LINKING, LETTER
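The core ingredient named in the title is multi-modal contrastive learning over text and image embeddings. As a rough illustration of the general idea only (an InfoNCE-style loss, not necessarily the loss used in this letter), the sketch below treats matched text/image embedding pairs as positives and all other in-batch pairs as negatives; the function `info_nce`, the temperature `tau`, and the toy embeddings are illustrative assumptions.

```python
import numpy as np

def info_nce(text_emb, img_emb, tau=0.1):
    """InfoNCE-style contrastive loss: row i of text_emb should match
    row i of img_emb; every other row in the batch is a negative."""
    # L2-normalize so the dot product is cosine similarity
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    logits = t @ v.T / tau                       # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx = np.arange(len(t))
    return -np.mean(np.log(probs[idx, idx]))     # -log p(positive pair)

# Perfectly aligned pairs give a near-zero loss; misaligned pairs do not.
text = np.eye(4, 16)                             # toy embeddings
aligned = info_nce(text, text)
shuffled = info_nce(text, np.roll(text, 1, axis=0))
print(aligned < shuffled)  # True
```

Minimizing such a loss pulls each entity's text and image embeddings together while pushing apart embeddings of different entities, which is what makes the alignment signal self-supervised: no labeled entity pairs are required beyond the in-batch pairing.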
2. A distributed stochastic optimization algorithm with gradient-tracking and distributed heavy-ball acceleration
Authors: Bihao Sun, Jinhui Hu, Dawen Xia, Huaqing Li. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2021, Issue 11, pp. 1463-1476 (14 pages).
Distributed optimization has developed rapidly in recent years owing to its wide applications in machine learning and signal processing. In this paper, we investigate distributed optimization to minimize a global objective: a sum of smooth and strongly convex local cost functions distributed over an undirected network of n nodes. In contrast to existing works, we apply a distributed heavy-ball term to improve the convergence performance of the proposed algorithm. To accelerate existing distributed stochastic first-order gradient methods, a momentum term is combined with a gradient-tracking technique. It is shown that the proposed algorithm has better acceleration ability than GT-SAGA without increasing the complexity. Extensive experiments on real-world datasets verify the effectiveness and correctness of the proposed algorithm.
Keywords: Distributed optimization, High-performance algorithm, Multi-agent system, Machine-learning problem, Stochastic gradient
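The abstract combines a gradient-tracking update with a heavy-ball momentum term. As a rough sketch of that general scheme (not the paper's actual algorithm or its stochastic, GT-SAGA-style variant), the code below runs deterministic gradient tracking with heavy-ball momentum on scalar quadratic local costs f_i(x) = ½(x − b_i)² over a 4-node ring; the mixing matrix W, step size alpha, and momentum beta are all illustrative assumptions.

```python
import numpy as np

# Local costs f_i(x) = 0.5 * (x - b[i])^2; the global minimizer of
# their sum is the mean of b.
b = np.array([1.0, 2.0, 3.0, 4.0])
n = len(b)

def local_grads(x):
    # gradient of each node's local cost at that node's own iterate
    return x - b

# Doubly stochastic mixing matrix for a 4-node ring (undirected network)
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

alpha, beta = 0.1, 0.3           # step size and heavy-ball momentum
x = np.zeros(n)                  # one scalar iterate per node
x_prev = x.copy()
y = local_grads(x)               # tracker of the average gradient

for _ in range(500):
    # consensus step + descent along tracked gradient + heavy-ball term
    x_new = W @ x - alpha * y + beta * (x - x_prev)
    # gradient-tracking update: y follows the network-average gradient
    y = W @ y + local_grads(x_new) - local_grads(x)
    x_prev, x = x, x_new

print(x)  # every node's iterate approaches the global minimizer 2.5
```

The y-update is what distinguishes gradient tracking from plain distributed gradient descent: each node mixes its neighbors' trackers and adds its local gradient increment, so y converges to the average gradient and every node descends toward the global (not merely local) minimizer; the β(x − x_prev) term adds the momentum that the paper's heavy-ball acceleration refers to.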