Abstract

Objective To evaluate the value of deep learning (DL) models for automatic classification of echocardiographic views.

Methods A total of 100 patients after heart transplantation were retrospectively enrolled and divided into a training set, a validation set and a test set at a ratio of 7:2:1. ResNet18, ResNet34, Swin Transformer and Swin Transformer V2 models were established to classify the following echocardiographic views: the 2D apical two-chamber, three-chamber and four-chamber views, the 2D subcostal view, the parasternal long-axis view of the left ventricle, the short-axis view of the great arteries, the short-axis views of the left ventricle at the apical, papillary-muscle and mitral-valve levels, as well as 3D and color Doppler flow imaging (CDFI) views. Accuracy, precision, recall, F1 score and the confusion matrix were used to evaluate the performance of each model for automatic view classification. An interactive interface was designed with Qt Designer and deployed on the desktop.

Results All models performed well at classifying echocardiographic views in the test set, with relatively poor performance on the 2D short-axis views of the left ventricle and superior performance on the 3D and CDFI views. Swin Transformer V2 was the optimal model, achieving an accuracy, precision, recall and F1 score of 92.56%, 89.01%, 89.97% and 89.31%, respectively; it also had the highest diagonal values in the confusion matrix and showed the clearest separation of views in the t-SNE plot.

Conclusion DL models performed well for automatic classification of echocardiographic views, with the Swin Transformer V2 model performing best. An interactive classification interface could improve the interpretability of the prediction results to some extent.
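The modeling setup described in Methods can be illustrated with a short sketch. The following is a minimal, hypothetical example assuming PyTorch with torchvision and timm; the specific Swin variants, the 11-class label set implied by the listed views (nine 2D views plus 3D and CDFI), and all hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of the patient-level 7:2:1 split and the four
# classifiers compared in the study; model variants are assumptions.
import random

import timm
import torch
import torchvision.models as tvm

NUM_VIEWS = 11  # 9 listed 2D views + 3D + CDFI, as enumerated in the abstract

def split_patients(patient_ids, seed=42):
    """Shuffle patient IDs and split them 7:2:1 into train/val/test."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train, n_val = int(0.7 * n), int(0.2 * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

def build_models(num_classes=NUM_VIEWS):
    """Instantiate the four architectures named in the abstract."""
    resnet18 = tvm.resnet18(weights=tvm.ResNet18_Weights.IMAGENET1K_V1)
    resnet18.fc = torch.nn.Linear(resnet18.fc.in_features, num_classes)
    resnet34 = tvm.resnet34(weights=tvm.ResNet34_Weights.IMAGENET1K_V1)
    resnet34.fc = torch.nn.Linear(resnet34.fc.in_features, num_classes)
    # The exact Swin/SwinV2 variants are assumptions; timm offers several.
    swin = timm.create_model('swin_tiny_patch4_window7_224',
                             pretrained=True, num_classes=num_classes)
    swin_v2 = timm.create_model('swinv2_tiny_window8_256',
                                pretrained=True, num_classes=num_classes)
    return {'ResNet18': resnet18, 'ResNet34': resnet34,
            'Swin': swin, 'SwinV2': swin_v2}

train_ids, val_ids, test_ids = split_patients(range(100))
models = build_models()
```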
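The evaluation metrics named in the abstract map directly onto scikit-learn. Below is a minimal sketch assuming macro-averaged precision, recall and F1; the abstract does not state how per-class metrics were pooled, so the averaging scheme is an assumption.

```python
# Hypothetical evaluation sketch; macro averaging is an assumption.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support)

def evaluate(y_true, y_pred):
    """Return accuracy, macro precision/recall/F1 and the confusion matrix."""
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average='macro', zero_division=0)
    cm = confusion_matrix(y_true, y_pred)
    return acc, prec, rec, f1, cm

# Toy usage with random labels over the 11 assumed view classes.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 11, size=200)
y_pred = rng.integers(0, 11, size=200)
acc, prec, rec, f1, cm = evaluate(y_true, y_pred)
print(f"acc={acc:.4f} precision={prec:.4f} recall={rec:.4f} F1={f1:.4f}")
```

A well-performing model concentrates mass on the diagonal of the confusion matrix, which is the property the Results section highlights for Swin Transformer V2.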
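The t-SNE visualization mentioned in the Results could be produced as follows. This is a sketch under assumptions: the features are taken to be per-image embeddings (e.g., from the penultimate layer), and the perplexity value is illustrative.

```python
# Hypothetical t-SNE sketch: project per-image features of the test set
# to 2D and color points by view class. The feature source is an assumption.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE

def plot_tsne(features, labels, out_path='tsne_views.png'):
    """features: (N, D) per-image embeddings; labels: (N,) view-class ids."""
    emb = TSNE(n_components=2, perplexity=30,
               random_state=0).fit_transform(features)
    plt.figure(figsize=(6, 5))
    sc = plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap='tab20', s=8)
    plt.colorbar(sc, label='view class')
    plt.title('t-SNE of test-set features (sketch)')
    plt.savefig(out_path, dpi=150)

# Toy usage with random 512-D features for the 11 assumed classes.
feats = np.random.default_rng(1).normal(size=(300, 512)).astype(np.float32)
labs = np.random.default_rng(2).integers(0, 11, size=300)
plot_tsne(feats, labs)
```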
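A layout built in Qt Designer is typically exported as a .ui file and loaded at runtime. The sketch below assumes PyQt5; the file name 'classifier.ui' and the widget names are placeholders, not the authors' actual interface.

```python
# Hypothetical PyQt5 loader for a Qt Designer layout; 'classifier.ui'
# and its widget names are placeholders.
import sys

from PyQt5 import QtWidgets, uic

def main():
    app = QtWidgets.QApplication(sys.argv)
    window = uic.loadUi('classifier.ui')  # .ui file exported from Qt Designer
    # A real tool would wire a "Classify" button to the model here, e.g.:
    # window.classifyButton.clicked.connect(run_model_on_selected_image)
    window.show()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
```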