In a convolutional neural network (CNN) classification model for diagnosing medical images, transparency and interpretability of the model's behavior are crucial in addition to high classification accuracy, and it is highly important to demonstrate them explicitly. In this study, we constructed an interpretable CNN-based model for breast density classification using spectral information from mammograms. We evaluated whether the model's prediction scores provide reliable probability values using a reliability diagram, and we visualized the basis for the final prediction. In constructing the classification model, we modified ResNet50 and introduced algorithms for extracting and inputting image spectra, visualizing network behavior, and quantifying prediction ambiguity. The experimental results show that the proposed model achieves not only high classification accuracy but also higher reliability and interpretability than conventional CNN models that use pixel information from images. Furthermore, the proposed model can detect misclassified data and indicate an explicit basis for its predictions. These results demonstrate the effectiveness and usefulness of the proposed model from the perspective of credibility and transparency.
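As a point of reference for the calibration evaluation mentioned above, the following is a minimal sketch of how a reliability diagram and the associated expected calibration error (ECE) can be computed from prediction scores. It assumes hypothetical arrays of per-sample confidences (e.g., max-softmax scores) and correctness indicators, with equal-width confidence bins; the paper's own binning scheme and class setup may differ.

```python
# Minimal reliability-diagram sketch (assumed setup, not the paper's exact procedure).
import numpy as np

def reliability_diagram(confidences: np.ndarray,
                        correct: np.ndarray,
                        n_bins: int = 10):
    """Bin prediction confidences, compare mean confidence with observed
    accuracy per bin, and return the expected calibration error (ECE)."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    bin_conf, bin_acc = [], []
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        conf = confidences[mask].mean()   # average predicted probability in the bin
        acc = correct[mask].mean()        # fraction of predictions actually correct
        frac = mask.mean()                # share of all samples falling in this bin
        bin_conf.append(conf)
        bin_acc.append(acc)
        ece += frac * abs(acc - conf)     # weighted gap between confidence and accuracy
    return np.array(bin_conf), np.array(bin_acc), ece

# Hypothetical usage with random placeholder data (not real mammogram results):
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)                    # max-softmax confidences
correct = (rng.uniform(size=1000) < conf).astype(float)    # 1 if the prediction was correct
bin_conf, bin_acc, ece = reliability_diagram(conf, correct)
print(f"ECE = {ece:.3f}")  # a perfectly calibrated model gives ECE close to 0
```

Plotting bin accuracy against bin confidence yields the reliability diagram: points on the diagonal indicate well-calibrated scores, while systematic deviation above or below it indicates under- or over-confidence.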