Abstract
Interpretability has become one of the distinguishing features of third-generation artificial intelligence; in particular, the emergence of anti-AI technology has forced the industry to rethink the future of AI. In light of the current situation and challenges facing electronic warfare, this paper discusses the puzzle of black-box models, the absence of prior knowledge, and issues of security and trustworthiness. Drawing on preliminary practice and exploration in interpretable sequence-feature modeling and recognition, interpretable feature extraction and pattern recognition, and knowledge-graph-based subgraph-matching identification, it offers thoughts and suggestions on applying interpretable AI, aiming to provide a technical reference for the intelligent development of electronic warfare equipment.
Authors
KUANG Sheng-yu; LI Gao-yun; LI Fu-lin; ZHANG Wei; ZHANG Qian
(The 29th Research Institute of CETC, Chengdu 610036, China)
Source
《中国电子科学研究院学报》
Peking University Core Journal (PKU Core)
2023, No. 8, pp. 697-701 (5 pages)
Journal of China Academy of Electronics and Information Technology