Funding: Ministry of Education, Youth and Sports of the Czech Republic, Grant/Award Numbers: SP2023/039, SP2023/042; the European Union under the REFRESH project, Grant/Award Number: CZ.10.03.01/00/22_003/0000048.
Abstract: Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. Although accurate detection and segmentation of brain tumours would be highly beneficial, current methods have yet to solve this problem fully despite the many available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics, and it demands accurate, efficient, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Because DL models require large amounts of training data to achieve good results, the researchers utilised data augmentation techniques to increase the dataset size for training. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from the MRI images. Softmax was used as the classifier, and the training set was supplemented with artificially generated MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined to form the proposed fusion model, which significantly increased classification accuracy. A publicly accessible dataset was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
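The abstract describes the fusion only at a high level. The sketch below illustrates one common way to realise feature-level fusion of two pretrained backbones followed by a softmax classifier, assuming PyTorch/torchvision, frozen ImageNet weights, and a hypothetical four-class tumour dataset; the layer sizes, class count, and training details are assumptions, not the authors' published configuration.

```python
# Illustrative sketch of feature-level fusion of two pretrained CNN backbones
# followed by a softmax classifier. Framework (PyTorch/torchvision), layer widths,
# and the 4-class setup are assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
from torchvision import models

class FusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):  # assumed number of tumour classes
        super().__init__()
        # Two ImageNet-pretrained backbones used purely as feature extractors.
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        res = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.vgg_features = vgg.features                                # -> (N, 512, 7, 7)
        self.res_features = nn.Sequential(*list(res.children())[:-2])   # -> (N, 2048, 7, 7)
        for p in self.parameters():
            p.requires_grad = False                                     # freeze both backbones
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Fusion head: concatenated descriptors -> dense layer -> class logits.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 + 2048, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.pool(self.vgg_features(x))
        f2 = self.pool(self.res_features(x))
        fused = torch.cat([f1, f2], dim=1)        # feature-level fusion of the two backbones
        return self.head(fused)                   # logits; softmax applied at inference below

model = FusionClassifier()
logits = model(torch.randn(2, 3, 224, 224))       # dummy batch standing in for MRI slices
probs = torch.softmax(logits, dim=1)              # softmax classifier output
```

During training, data augmentation (flips, rotations, synthetic samples) would be applied to the input pipeline to enlarge the dataset, as the abstract indicates, before fitting the fusion head with a cross-entropy loss.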
Funding: National Natural Science Foundation of China, No. 62062003; Ningxia Natural Science Foundation Project, No. 2023AAC03293.
Abstract: The precise detection and segmentation of tumor lesions are very important for lung cancer computer-aided diagnosis. However, in PET/CT (Positron Emission Tomography/Computed Tomography) lung images, lesion shapes are complex, edges are blurred, and sample numbers are unbalanced. To solve these problems, this paper proposes a Multi-branch Cross-scale Interactive Feature fusion Transformer model (MCIF-Transformer Mask RCNN) for PET/CT lung tumor instance segmentation. The main contributions of this paper are as follows. Firstly, the ResNet-Transformer backbone network is used to extract global and local features from lung images; pixel dependence relationships are established in local and non-local fields to improve the model's perception ability. Secondly, the Cross-scale Interactive Feature Enhancement auxiliary network is designed to supply shallow features to the deep layers, and the cross-scale interactive feature enhancement module (CIFEM) is used to strengthen attention to fine-grained features. Thirdly, the Cross-scale Interactive Feature fusion FPN network (CIF-FPN) is constructed to realize bidirectional interactive fusion between deep and shallow features, so that low-level features are enriched with deep semantic information. Finally, 4 ablation experiments, 3 detection comparison experiments, 3 segmentation comparison experiments, and 6 comparison experiments against two-stage and single-stage instance segmentation networks are conducted on PET/CT lung medical image datasets. The results show that the APdet, APseg, ARdet, and ARseg indexes improve by 5.5%, 5.15%, 3.11%, and 6.79%, respectively, compared with Mask RCNN (ResNet50). Based on the above research, precise detection and segmentation of the lesion region are realized in this paper; this method has positive significance for the detection of lung tumors.
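As a rough illustration of the bidirectional shallow/deep fusion idea behind CIF-FPN, the sketch below fuses one high-resolution and one low-resolution feature map in both directions. It is a simplified stand-in written in PyTorch; the channel widths, fusion operations, and module structure are assumptions, since the abstract does not specify the actual CIFEM/CIF-FPN internals.

```python
# Simplified sketch of bidirectional cross-scale feature fusion between a shallow
# (high-resolution) and a deep (low-resolution) feature map, in the spirit of the
# CIF-FPN described above. All design details here are assumptions, not the paper's modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleFusion(nn.Module):
    def __init__(self, channels: int = 256):  # assumed FPN channel width
        super().__init__()
        self.shallow_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.deep_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.smooth_shallow = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.smooth_deep = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor):
        # Top-down path: upsample deep semantics and inject them into the shallow map.
        deep_up = F.interpolate(self.deep_proj(deep), size=shallow.shape[-2:], mode="nearest")
        shallow_out = self.smooth_shallow(self.shallow_proj(shallow) + deep_up)
        # Bottom-up path: pool the enhanced shallow detail back into the deep map.
        shallow_down = F.adaptive_max_pool2d(shallow_out, output_size=deep.shape[-2:])
        deep_out = self.smooth_deep(deep + shallow_down)
        return shallow_out, deep_out

fusion = CrossScaleFusion(channels=256)
p2 = torch.randn(1, 256, 128, 128)   # shallow, fine-grained feature level (assumed size)
p5 = torch.randn(1, 256, 16, 16)     # deep, semantic feature level (assumed size)
p2_new, p5_new = fusion(p2, p5)
```

In a full instance-segmentation pipeline such as Mask RCNN, fused maps of this kind would feed the region proposal network and the detection/mask heads, which is where the reported APdet/APseg/ARdet/ARseg gains are measured.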