
A survey on membership inference attacks and defenses in machine learning

Abstract: Membership inference (MI) attacks aim to infer whether a data record was used to train a target model. Because of the serious privacy risks they pose, MI attacks have attracted a tremendous amount of attention in the research community. One existing work conducted, to the best of our knowledge, the first dedicated survey in this specific area, providing a comprehensive review of the literature published during 2017-2021 (over 100 papers). However, due to the tremendous amount of progress made in this area since 2021 (176 papers), that survey has unfortunately become very limited in the following two respects. (1) Although the entire literature published since 2017 covers 18 ways to categorize all the proposed MI attacks, the 2017-2021 literature reviewed in the existing survey covered only 5 of them. With 13 ways missing, the existing survey covers only 27% of the landscape (in terms of how to categorize MI attacks) when viewed retrospectively. (2) Because the 2017-2021 literature covers only 27% of that landscape, the number of new insights (i.e., reasons why an MI attack could succeed) behind the proposed MI attacks has grown significantly since 2021. Although no previous work has made these insights a main focus of its study, we found that the various insights leveraged in the literature can be broken down into 10 groups. Without making the insights a main focus, a survey could fail to help researchers gain adequate intellectual depth in this area of research. In this work, we conduct a systematic study to address these limitations. To address the first limitation, we make the 13 newly emerged ways to categorize MI attacks a main focus of the study. To address the second limitation, we provide, to the best of our knowledge, the first review of the various insights leveraged in the entire literature, organized into the 10 groups identified above. Moreover, our survey also provides a comprehensive review of existing defenses against MI attacks, existing applications of MI attacks, the widely used datasets (e.g., 107 new datasets), and the evaluation metrics (e.g., 20 new evaluation metrics).
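To make the attack setting surveyed above concrete, the sketch below shows a minimal confidence-threshold MI attack: the adversary guesses that a record is a training member when the target model's confidence on the record's true label exceeds a fixed threshold, exploiting the confidence gap between training and unseen data. The model, synthetic dataset, and threshold value are illustrative assumptions only and are not taken from the surveyed paper.

# Minimal sketch of a confidence-threshold membership inference (MI) attack.
# The model, dataset, and threshold are illustrative assumptions, not drawn
# from the surveyed paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier


def mi_guess_is_member(model, x, y_true, threshold=0.9):
    """Guess 'member' if the model's confidence on the true label exceeds a
    fixed threshold; training records typically receive higher confidence
    than unseen records, which is the gap this attack exploits."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    return probs[int(y_true)] >= threshold


# Synthetic data: first half used for training (members), second half held out.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_in, y_in, X_out, y_out = X[:100], y[:100], X[100:], y[100:]
target = RandomForestClassifier(random_state=0).fit(X_in, y_in)

in_rate = np.mean([mi_guess_is_member(target, x, c) for x, c in zip(X_in, y_in)])
out_rate = np.mean([mi_guess_is_member(target, x, c) for x, c in zip(X_out, y_out)])
print(f"flagged as members: train={in_rate:.2f}, holdout={out_rate:.2f}")

A large gap between the two printed rates indicates that this simple threshold already distinguishes members from non-members; the insights reviewed in the survey generalize this idea to other signals (loss, gradients, calibrated shadow models, and so on).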
Source: Journal of Information and Intelligence (信息与智能学报(英文)), 2024, Issue 5, pp. 404-454 (51 pages).
Funding: Supported by the National Natural Science Foundation of China (61941105, 61772406, and U2336203), the National Key Research and Development Program of China (2023QY1202), and the Beijing Natural Science Foundation (4242031).

