Due to the difficulties of brain tumor segmentation, this paper proposes a strategy for extracting brain tumors from three-dimensional Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans using a 3D U-Net architecture and ResNet50, followed by conventional classification strategies. In this research, ResNet50 achieved an accuracy of 98.96% and the 3D U-Net scored 97.99% among the deep learning methods evaluated, while a traditional Convolutional Neural Network (CNN) gave 97.90% accuracy on the 3D MRI data. In addition, an image fusion approach combines the multimodal images into a single fused image in order to extract more features from the medical scans. We also defined the loss function using several Dice-based metrics and obtained Dice results on specific test cases: the average Dice coefficient and soft Dice loss over three test cases was 0.0980, while for two test cases the sensitivity and specificity were recorded as 0.0211 and 0.5867 using patch-level predictions. Furthermore, a software integration pipeline was built to deploy the trained model to a web server so that it can be accessed from a software system through a Representational State Transfer (REST) API. Finally, the proposed models were validated using the Area Under the Curve–Receiver Operating Characteristic (AUC–ROC) curve and the confusion matrix, and compared with existing research articles to understand the underlying problem. Through this comparative analysis, we extracted meaningful insights regarding brain tumor segmentation and identified potential gaps.
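The abstract does not give the exact Dice formulation used; as a point of reference, the Dice coefficient and soft Dice loss reported above are commonly computed as sketched below (function names and the epsilon smoothing term are illustrative, not taken from the paper).

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def soft_dice_loss(probs, target, eps=1e-7):
    """Soft Dice loss computed on raw predicted probabilities (no thresholding)."""
    intersection = (probs * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (probs.sum() + target.sum() + eps)
```

Identical masks yield a Dice coefficient of 1.0 (loss 0.0), and fully disjoint masks yield a value near 0.0; the epsilon term guards against division by zero on empty masks.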
Nevertheless, the proposed model can be adapted for daily life and the healthcare domain to identify infected regions and brain cancer through various imaging modalities.
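The abstract mentions deploying the trained model behind a REST API but does not describe the framework or endpoint contract; the following is a minimal sketch assuming a Flask server, with a placeholder in place of the actual 3D U-Net/ResNet50 inference (the `/predict` route, JSON payload, and decision rule are all hypothetical).

```python
from flask import Flask, jsonify, request
import numpy as np

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # In the real pipeline this step would run the trained segmentation/
    # classification model; here a placeholder inspects the posted volume
    # and returns its shape plus a dummy decision.
    volume = np.asarray(request.get_json()["volume"], dtype=np.float32)
    return jsonify({
        "shape": list(volume.shape),
        "tumor_detected": bool(volume.mean() > 0.5),
    })
```

A client would POST a serialized scan volume as JSON and receive the prediction back, which is what lets the web or software system consume the model without bundling it.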
Funding: This study was funded by the Deanship of Scientific Research, Taif University Researchers Supporting Project number (TURSP-2020/348), Taif University, Taif, Saudi Arabia.