Negative selection algorithm for life-long learning (cited by 3)
Abstract: To address the problem that the negative selection algorithm (NSA) simulates only the central tolerance process of the human adaptive immune system, not its peripheral tolerance process, and therefore lacks the capacity for life-long learning, a life-long learning negative selection algorithm (LNSA) is proposed. The learning process of LNSA simulates the maturation of lymphocytes through both central and peripheral tolerance; life-long learning is realized through peripheral tolerance, which enhances the antibodies' ability to distinguish self from nonself. Comparisons with NSA and the dendritic cell algorithm (DCA) show that LNSA effectively reduces the detection false positive rate while achieving a detection rate as high as that of NSA.
Source: Journal of Shenyang University of Technology (indexed in EI, CAS; Peking University core journal), 2012, No. 3, pp. 293-297 (5 pages).
Funding: National Technology Innovation Fund project (08C26214411198); Guangdong-Hong Kong key areas breakthrough fund project (2008A011400010).
Keywords: artificial immune system; malware detection; negative selection algorithm; dendritic cell algorithm; life-long learning; adaptive immunity; n-gram; I/O request packet
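The two-stage maturation process summarized in the abstract can be illustrated with a minimal sketch of binary negative selection. This is not the paper's implementation; the function names, the r-contiguous matching rule, and all parameters below are illustrative assumptions. Central tolerance censors randomly generated detectors against an initial self set; the peripheral-tolerance step then deletes detectors that react to self samples first observed after deployment, which is the mechanism the abstract credits for life-long learning.

```python
import random

def matches(detector, sample, r):
    """r-contiguous matching: detector matches sample if they agree
    on at least r consecutive positions (a common NSA matching rule)."""
    run = 0
    for d, s in zip(detector, sample):
        run = run + 1 if d == s else 0
        if run >= r:
            return True
    return False

def central_tolerance(self_set, n_detectors, length, r, rng=random):
    """Classic NSA censoring: keep only random candidates that
    match no sample in the initial self set."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(rng.randint(0, 1) for _ in range(length))
        if not any(matches(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

def peripheral_tolerance(detectors, new_self, r):
    """Life-long learning step: delete detectors that react to self
    samples encountered only after the detector set was deployed."""
    return [d for d in detectors if not any(matches(d, s, r) for s in new_self)]
```

In this sketch, false positives on previously unseen self data are corrected by repeatedly applying `peripheral_tolerance` during operation, instead of regenerating the whole detector set.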

References (17)

  • 1 Forrest S, Perelson A S, Allen L, et al. Self-nonself discrimination in a computer [C]//IEEE Computer Society Symposium on Research in Security and Privacy. Oakland, USA, 1994: 202-212.
  • 2 Gonzalez F A, Dasgupta D. Anomaly detection using real-valued negative selection [J]. Journal of Genetic Programming and Evolvable Machines, 2003, 4(4): 383-403.
  • 3 Gonzalez F, Dasgupta D, Nino L F, et al. A randomized real-valued negative selection algorithm [J]. Lecture Notes in Computer Science, 2003, 2787: 261-272.
  • 4 Zhou J, Dasgupta D. Real-valued negative selection using variable-sized detectors [J]. Lecture Notes in Computer Science, 2004, 3102: 287-298.
  • 5 de Castro L N, von Zuben F J. The clonal selection algorithm with engineering application [C]//Proceedings of GECCO Workshop on Artificial Immune Systems and Their Applications. Las Vegas, USA, 2000: 36-37.
  • 6 Kim J, Bentley P J. Towards an artificial immune system for network intrusion detection: an investigation of clonal selection with a negative selection operator [C]//Proceedings of the 2001 Congress on Evolutionary Computation. Seoul, South Korea, 2001: 1244-1252.
  • 7 Aickelin U, Bentley P, Cayzer S, et al. Danger theory: the link between AIS and IDS? [J]. Lecture Notes in Computer Science, 2003, 2787: 147-155.
  • 8 Greensmith J, Aickelin U, Cayzer S. Introducing dendritic cells as a novel immune-inspired algorithm for anomaly detection [J]. Lecture Notes in Computer Science, 2005, 3627: 153-167.
  • 9 Matzinger P. Tolerance, danger and the extended family [J]. Annual Review of Immunology, 1994, 12: 991-1045.
  • 10 Greensmith J, Aickelin U, Twycross J. Articulation and clarification of the dendritic cell algorithm [J]. Lecture Notes in Computer Science, 2006, 4163: 404-417.
