
Autonomous Learning System Towards Mobile Intelligence
(面向移动终端智能的自治学习系统)

Cited by: 6
Abstract: Deploying machine learning models on mobile devices has drawn a lot of attention in both academia and industry, and model training is a critical part of it. However, with growing public attention to data privacy and recently adopted laws and regulations, such as the EU's GDPR and China's Personal Information Protection Law, developers can no longer freely collect training data (especially private data) from user devices, which makes it hard to guarantee the quality of trained models. Researchers have explored approaches to training neural networks on decentralized private data; this work summarizes those efforts and points out their limitations. To this end, this work presents a novel neural network training paradigm for mobile devices, which places all training computations involving private data on local devices and requires no data to be uploaded in any form, thereby preserving user privacy. This training paradigm is named autonomous learning. To deal with the two main challenges of autonomous learning, i.e., the limited data volume and insufficient computing power available on mobile devices, the first autonomous learning system, AutLearn, is designed and implemented. It incorporates a cloud (public data, pre-training) and client (private data, transfer learning) cooperation methodology, together with on-device data augmentation, to ensure model convergence on mobile devices. Furthermore, by applying a series of optimization techniques such as model compression, neural network compiler optimizations, and runtime cache reuse, AutLearn significantly reduces the on-client training cost. Two classical neural network scenarios of autonomous learning are implemented on top of AutLearn and evaluated experimentally. The results show that, while preserving data privacy, AutLearn can train models to accuracy comparable to or even higher than the traditional centralized/federated training modes, and it significantly reduces the computation and energy cost of model training on mobile devices.
Authors: 徐梦炜 (XU Meng-Wei), 刘渊强 (LIU Yuan-Qiang), 黄康 (HUANG Kang), 刘譞哲 (LIU Xuan-Zhe), 黄罡 (HUANG Gang). Affiliations: Institute of Software, School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China; Key Laboratory of High Confidence Software Technologies of Ministry of Education (Peking University), Beijing 100871, China; Linggui Tech, Beijing 100094, China.
Source: Journal of Software (《软件学报》), 2020, No. 10, pp. 3004-3018 (15 pages). Indexed in EI and CSCD; Peking University core journal.
Funding: National Science Fund for Distinguished Young Scholars (61725201); Key-Area Research and Development Program of Guangdong Province (2020B010164002).
Keywords: machine learning; mobile computing; edge computing; distributed systems
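The cloud-client cooperation described in the abstract can be illustrated with a minimal sketch. This is a hypothetical NumPy toy, not AutLearn's actual implementation: the "cloud" pretrains a small two-layer network on plentiful public data; the "device" then freezes the feature extractor (to cope with limited compute) and fine-tunes only the classifier head on a few private samples expanded by Gaussian-noise augmentation (to cope with limited data). All data, shapes, and hyperparameters here are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Synthetic binary task; `shift` emulates a public/private domain gap.
    X = rng.normal(size=(n, 16))
    y = ((X[:, :8].sum(1) - X[:, 8:].sum(1) + shift) > 0).astype(float)
    return X, y

def forward(X, W1, W2):
    h = np.maximum(X @ W1, 0.0)                  # ReLU feature extractor
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))          # sigmoid classifier head
    return h, p

def bce(p, y):
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def train(X, y, W1, W2, steps, lr, freeze_backbone):
    for _ in range(steps):
        h, p = forward(X, W1, W2)
        g = (p - y) / len(y)                     # dBCE/dlogit, averaged
        if not freeze_backbone:                  # cloud: update both layers
            W1 -= lr * (X.T @ (np.outer(g, W2) * (h > 0)))
        W2 -= lr * (h.T @ g)                     # device: head only
    return W1, W2

# 1) Cloud side: pretrain the whole network on public data.
W1 = rng.normal(scale=0.3, size=(16, 32))
W2 = rng.normal(scale=0.3, size=32)
Xpub, ypub = make_data(2000, shift=0.0)
W1, W2 = train(Xpub, ypub, W1, W2, steps=300, lr=0.1, freeze_backbone=False)

# 2) Device side: few private samples, augmented with noisy copies,
#    then transfer learning with the backbone frozen.
Xpriv, ypriv = make_data(40, shift=1.0)
Xaug = np.concatenate([Xpriv + rng.normal(scale=0.1, size=Xpriv.shape)
                       for _ in range(5)] + [Xpriv])
yaug = np.tile(ypriv, 6)

loss_before = bce(forward(Xpriv, W1, W2)[1], ypriv)
W1, W2 = train(Xaug, yaug, W1, W2, steps=300, lr=0.1, freeze_backbone=True)
loss_after = bce(forward(Xpriv, W1, W2)[1], ypriv)
acc_after = np.mean((forward(Xpriv, W1, W2)[1] > 0.5) == ypriv)
```

Because only the head is updated on-device, the private data never leaves the client and the per-step cost is a fraction of full backpropagation, which mirrors the paper's motivation for the cloud/client split.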
