
Learning Side Channel Attack Model Based on Vision Transformer
Abstract  In a side-channel attack, the goal of any defensive countermeasure is to weaken the relationship between a device's power consumption and the intermediate values of the cryptographic algorithm it executes. Masking schemes achieve this by randomizing the intermediate values processed by the cryptographic device. Sbox shuffling schemes instead randomize the execution order of the Sbox lookups during the algorithm, so that the moment at which each intermediate value leaks power is also randomized. Against these two countermeasures, current profiled (learning-based) side-channel attack models generally use multi-layer perceptrons, convolutional neural networks, or recurrent neural networks. This paper proposes VITSCA, a profiled attack model based on the Vision Transformer (ViT) model from computer vision. VITSCA mainly fine-tunes the self-attention mechanism: instead of the usual combination of query vectors and key-value pairs, it introduces a weight vector that records the weight of each input sample, which helps the attack model select the most useful information from a large number of power traces. VITSCA reduces training time, improves model accuracy, and can effectively attack datasets protected by masking and by Sbox shuffling.
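
The two countermeasures the abstract names work as follows; below is a minimal Python sketch of generic first-order Boolean masking and Sbox shuffling. The Sbox table, function names, and AES-like 16-byte state are illustrative assumptions, not code from this paper.

import random

SBOX = list(range(256))          # stand-in table; a real cipher would use the AES Sbox
random.shuffle(SBOX)             # placeholder permutation so the demo is self-contained

def remasked_sbox_table(m_in: int, m_out: int) -> list[int]:
    # Precompute T so that T[x ^ m_in] == SBOX[x] ^ m_out: the device then
    # only ever touches masked values, decorrelating power from SBOX[p ^ k].
    return [SBOX[y ^ m_in] ^ m_out for y in range(256)]

def masked_sbox_lookup(p: int, k: int) -> tuple[int, int]:
    m_in, m_out = random.randrange(256), random.randrange(256)  # fresh random masks
    table = remasked_sbox_table(m_in, m_out)
    return table[p ^ k ^ m_in], m_out   # == SBOX[p ^ k] ^ m_out, plus its mask

def shuffled_sbox_pass(state: list[int], key: list[int]) -> list[int]:
    # Execute the 16 Sbox lookups in a random order, so that each intermediate
    # value leaks at a randomized point in the power trace.
    out = [0] * 16
    for i in random.sample(range(16), 16):
        out[i] = SBOX[state[i] ^ key[i]]
    return out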
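
The abstract's description of the modified self-attention is brief; the sketch below is one plausible PyTorch reading of it, in which a single learned weight vector scores each embedded trace patch directly instead of forming query/key-value products. The class name, shapes, and softmax pooling are assumptions, not the authors' published VITSCA code.

import torch
import torch.nn as nn

class WeightVectorAttention(nn.Module):
    # Hypothetical replacement for query/key-value self-attention: one
    # learned vector scores every position of the embedded power trace.
    def __init__(self, dim: int):
        super().__init__()
        self.w = nn.Parameter(torch.randn(dim) * dim ** -0.5)  # learned scoring vector
        self.value = nn.Linear(dim, dim)                       # value projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim) -- trace segments after patch embedding
        scores = x @ self.w                         # (batch, num_patches)
        attn = scores.softmax(dim=-1)               # weight over trace patches
        return attn.unsqueeze(-1) * self.value(x)   # re-weighted values, same shape

With no pairwise query-key scores, this variant is linear rather than quadratic in the number of patches, which would be consistent with the abstract's claim of reduced training time.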
Authors: 廖杨杰, 王燚
Source: Advances in Applied Mathematics (《应用数学进展》), 2023, Issue 4, pp. 1581-1589 (9 pages)