Abstract
This paper proposes MicDiffRep (Medical Image Classification with Diffusion-based Representation), a self-supervised representation learning method for medical image classification. Through diffusion-model pre-training, MicDiffRep learns both the fine-grained texture details and the overall structure of medical images, so that detailed features are fully captured when the images are classified. To exploit the global information of the image as well, this paper further proposes a Multi-Scale Feature Aggregation (MSFA) module that aggregates features from the layers of the MicDiffRep model at different scales. Experiments on a brain tumor image classification dataset show that the linear classification accuracy of the proposed method exceeds that of the best existing self-supervised methods by up to 6 percentage points.
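The abstract itself gives no implementation details. As an illustration only, the following is a minimal PyTorch sketch of what a multi-scale feature aggregation step like MSFA could look like, assuming the diffusion backbone exposes feature maps from several decoder stages; the class name, channel widths, and the projection/pooling/concatenation strategy are all assumptions for illustration, not the paper's actual design.

import torch
import torch.nn as nn

class MSFA(nn.Module):
    """Illustrative multi-scale feature aggregation (not the paper's exact module).

    Projects feature maps of different spatial sizes to a common channel
    width, pools each to a vector, and concatenates the results into a
    single representation for a linear classifier.
    """

    def __init__(self, in_channels, embed_dim=256):
        super().__init__()
        # one 1x1 projection per feature scale (channel widths are assumed)
        self.proj = nn.ModuleList(
            nn.Conv2d(c, embed_dim, kernel_size=1) for c in in_channels
        )
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, features):
        # features: list of tensors [B, C_i, H_i, W_i] from different layers
        pooled = [self.pool(p(f)).flatten(1) for p, f in zip(self.proj, features)]
        return torch.cat(pooled, dim=1)  # [B, embed_dim * num_scales]

if __name__ == "__main__":
    # hypothetical feature maps from three decoder scales of a diffusion UNet
    feats = [torch.randn(2, 64, 56, 56),
             torch.randn(2, 128, 28, 28),
             torch.randn(2, 256, 14, 14)]
    msfa = MSFA(in_channels=[64, 128, 256])
    rep = msfa(feats)
    print(rep.shape)  # torch.Size([2, 768]); this vector feeds the linear probe

In such a setup, the pooled-and-concatenated vector would serve as the frozen representation evaluated by linear classification, matching the evaluation protocol described in the abstract.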
Source
Computer Science and Application, 2024, No. 4, pp. 133-140 (8 pages)