Abstract
Objective To observe the value of deep learning (DL) models for automatic classification of echocardiographic views. Methods A total of 100 patients after heart transplantation were retrospectively enrolled and divided into a training set, a validation set and a test set at a ratio of 7:2:1. ResNet18, ResNet34, Swin Transformer and Swin Transformer V2 models were established based on the 2D apical two-chamber view, apical three-chamber view, apical four-chamber view, subcostal view, parasternal long-axis view of the left ventricle, short-axis view of the great arteries, short-axis views of the left ventricle at the apical, papillary muscle and mitral valve levels, as well as 3D and color Doppler flow imaging (CDFI) views of echocardiography. Accuracy, precision, recall, F1 score and confusion matrix were used to evaluate the performance of each model for automatically classifying echocardiographic views. An interactive interface was designed with Qt Designer and deployed on the desktop. Results All models performed well in automatically classifying echocardiographic views in the test set, with relatively poor performance on the 2D left ventricular short-axis views and superior performance on the 3D and CDFI views. Swin Transformer V2 was the optimal model, with accuracy, precision, recall and F1 score of 92.56%, 89.01%, 89.97% and 89.31%, respectively; it also had the highest diagonal values in the confusion matrix and showed the best separation of views in the t-SNE plot. Conclusion DL models performed well for automatic classification of echocardiographic views, with the Swin Transformer V2 model performing best. The interactive classification interface could improve the interpretability of prediction results to some extent.
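The abstract names the four architectures and the metrics used for evaluation but not the implementation details. The following is a minimal sketch, not the authors' code: it assumes torchvision's swin_v2_t variant with an ImageNet-pretrained backbone, 11 view classes (the nine 2D views plus 3D and CDFI), macro-averaged metrics, and a hypothetical test data loader; the paper does not specify the Swin V2 variant, input size, or training settings.

```python
# Sketch only: fine-tuning head replacement for a Swin Transformer V2 view
# classifier and computing the metrics listed in the abstract.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

NUM_VIEWS = 11  # assumption: 9 2D views + 3D + CDFI

# Load an ImageNet-pretrained Swin Transformer V2 (tiny variant assumed)
# and replace its classification head for the echocardiographic views.
model = models.swin_v2_t(weights=models.Swin_V2_T_Weights.DEFAULT)
model.head = nn.Linear(model.head.in_features, NUM_VIEWS)

def evaluate(model, test_loader, device="cuda"):
    """Run the test set and report accuracy, macro precision/recall/F1
    and the confusion matrix; test_loader is a hypothetical DataLoader
    yielding (images, labels) batches."""
    model.eval().to(device)
    y_true, y_pred = [], []
    with torch.no_grad():
        for images, labels in test_loader:
            logits = model(images.to(device))
            y_pred += logits.argmax(dim=1).cpu().tolist()
            y_true += labels.tolist()
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "f1": f1_score(y_true, y_pred, average="macro"),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
    }
```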
Source
《中国医学影像技术》
CSCD
Peking University Core Journal (北大核心)
2024, No. 8, pp. 1124-1129 (6 pages)
Chinese Journal of Medical Imaging Technology
Funding
National Key Research and Development Program of China (2022YFF0706504)
National Natural Science Foundation of China (82230066, 82371991, 82302226, 82151316).