
Small-Sample Fault Diagnosis Method Based on Multi-Head Convolution and Differential Self-Attention (cited by: 3)
Abstract: Bearings are among the most widely used rotating components in industrial equipment. If a bearing runs in a faulty condition for a long time, it can cause heavy economic losses and threaten personal safety, so research on bearing fault diagnosis is of great significance. Deep-learning-based fault diagnosis technology is becoming increasingly mature, but under small-sample conditions it suffers from over-fitting, unstable performance, and low accuracy. To address these problems, this paper proposes MDT (Multi-Head Convolution and Differential Self-Attention Transformer), a Transformer variant that combines a new data-embedding algorithm based on multi-head convolution (MC) with a differential self-attention (DSA) mechanism, to realize end-to-end small-sample fault diagnosis. The MC algorithm applies multi-path one-dimensional convolution to each sample and expands it from one dimension to two dimensions through its multi-channel outputs; the multiple convolution kernel sizes extract rich fault information from the different frequency bands of the original sample. Compared with the original dot-product self-attention in the Transformer, the DSA mechanism derives an attention weight vector for each feature through differencing, so that deeper fault features can be extracted from the sample. MDT inherits the Transformer's strong ability to process sequence data, extracts richer fault information from time-domain signals, and avoids the over-fitting problem common in small-sample models. Experimental results show that the proposed method stably achieves more than 99% test accuracy on a bearing fault diagnosis task with only 100 training samples per fault type, and exhibits strong resistance to over-fitting and strong robustness.
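The abstract describes the MC embedding as several parallel one-dimensional convolution paths with different kernel sizes whose multi-channel outputs turn a one-dimensional signal into a two-dimensional feature map. The exact architecture is not given in this record, so the following is only a minimal sketch of that idea in PyTorch; the branch count, kernel sizes, and channel width are illustrative assumptions, not the values used in the paper.

```python
# Minimal sketch of a multi-head convolution (MC) style embedding layer.
# Assumes PyTorch; hyperparameters below are illustrative guesses.
import torch
import torch.nn as nn


class MultiHeadConvEmbedding(nn.Module):
    """Embed a 1-D vibration signal via parallel 1-D convolutions.

    Each branch uses a different kernel size, so the branches respond to
    different frequency bands; concatenating their multi-channel outputs
    turns the (batch, 1, length) signal into a (batch, channels, length)
    map, i.e. a sequence of feature vectors a Transformer can attend over.
    """

    def __init__(self, kernel_sizes=(3, 7, 15, 31), channels_per_branch=16):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(1, channels_per_branch, k, padding=k // 2),
                nn.BatchNorm1d(channels_per_branch),
                nn.ReLU(),
            )
            for k in kernel_sizes
        )

    def forward(self, x):
        # x: (batch, 1, length) raw time-domain signal
        feats = [branch(x) for branch in self.branches]  # each (B, C, L)
        out = torch.cat(feats, dim=1)                    # (B, C * n_branches, L)
        # Transpose so each time step becomes a token vector for the Transformer.
        return out.transpose(1, 2)                       # (B, L, C * n_branches)


# Example: a batch of 4 signals, 1024 samples each.
signal = torch.randn(4, 1, 1024)
tokens = MultiHeadConvEmbedding()(signal)
print(tokens.shape)  # torch.Size([4, 1024, 64])
```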
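For DSA, the abstract only states that an attention weight vector is obtained for each feature "through the difference" instead of the dot product; the precise formulation is not given here. The sketch below implements one plausible reading, in which attention scores come from negative squared query-key differences, and should be read as an assumption rather than the paper's actual mechanism.

```python
# Sketch of a difference-based self-attention block (one possible reading of DSA).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DifferentialSelfAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):
        # x: (batch, tokens, dim), e.g. the output of the MC embedding
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Pairwise differences between every query and key vector.
        diff = q.unsqueeze(2) - k.unsqueeze(1)       # (B, T, T, dim)
        # Negative squared distance plays the role of the scaled dot product:
        # features that are closer attend to each other more strongly.
        scores = -(diff ** 2).sum(-1) * self.scale   # (B, T, T)
        attn = F.softmax(scores, dim=-1)             # attention weight vector per feature
        return attn @ v                              # (B, T, dim)


# Example: attend over a small batch of token sequences.
x = torch.randn(2, 128, 64)
out = DifferentialSelfAttention(64)(x)
print(out.shape)  # torch.Size([2, 128, 64])
```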
Authors: CHEN Xindu (陈新度), FU Zhisen (扶治森), WU Zhiheng (吴智恒), CHEN Qiyu (陈启愉), GUO Weike (郭伟科) (School of Mechanical and Electrical Engineering, Guangdong University of Technology, Guangzhou 510006, Guangdong, China; Intelligent Manufacturing Research Institute, Guangdong Academy of Sciences, Guangzhou 510030, Guangdong, China; Guangdong Provincial Key Laboratory of Modern Control Technology, Guangzhou 510030, Guangdong, China)
Source: Journal of South China University of Technology (Natural Science Edition), 2023, No. 7, pp. 21-33 (13 pages). Indexed in EI, CAS, CSCD, and the Peking University Core Journals list.
Funding: Key-Area Research and Development Program of Guangdong Province (2019B090917004, 2020B0101320002); Key Research and Development Program of Guangzhou (202206030006); International Science and Technology Cooperation Project of Huangpu District, Guangzhou (2021GH13).
Keywords: multi-head convolution; differential self-attention; Transformer variant; small sample; fault diagnosis