Funding: Supported by the Scientific Research Deanship at the University of Ha'il, Saudi Arabia, through project number RG-23137.
Abstract: The segmentation of head and neck (H&N) tumors in dual Positron Emission Tomography/Computed Tomography (PET/CT) imaging is a critical task in medical imaging, providing essential information for diagnosis, treatment planning, and outcome prediction. Motivated by the need for more accurate and robust segmentation methods, this study addresses key research gaps in the application of deep learning techniques to multimodal medical images. Specifically, it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution. The primary research questions guiding this study are: (1) How can the integration of convolutional neural networks (CNNs) and transformer networks enhance segmentation accuracy in dual PET/CT imaging? (2) What are the comparative advantages of 2D, 2.5D, and 3D model configurations in this context? To answer these questions, we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers. Our proposed methodology involved a comprehensive preprocessing pipeline, including normalization, contrast enhancement, and resampling, followed by segmentation using 2D, 2.5D, and 3D UNet Transformer models. The models were trained and tested on three diverse datasets: HeckTor2022, AutoPET2023, and SegRap2023. Performance was assessed using metrics such as the Dice Similarity Coefficient, Jaccard Index, Average Surface Distance (ASD), and Relative Absolute Volume Difference (RAVD). The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics, achieving the highest Dice and Jaccard values and indicating superior segmentation accuracy. For instance, on the HeckTor2022 dataset, the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705, surpassing the other model configurations. The 3D model showed strong boundary delineation performance but exhibited variability across datasets, while the 2D model, although effective, generally underperformed compared to its 2.5D and 3D counterparts. Compared with the related literature, our study confirms the advantages of incorporating additional spatial context, as seen in the improved performance of the 2.5D model. This research fills a significant gap by providing a detailed comparative analysis of different model dimensions and their impact on H&N segmentation accuracy in dual PET/CT imaging.
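To make two of the abstract's ingredients concrete, the sketch below (not the authors' code) shows one way a 2.5D input can be assembled by stacking a target slice with its axial neighbours from co-registered PET and CT volumes, together with the Dice and Jaccard overlap metrics used for evaluation. The array shapes, neighbour count, and function names are illustrative assumptions.

```python
# Minimal sketch: 2.5D input assembly from PET/CT volumes plus overlap metrics.
import numpy as np

def make_25d_input(pet: np.ndarray, ct: np.ndarray, z: int, context: int = 1) -> np.ndarray:
    """Stack slice z with `context` neighbours above/below from both modalities.

    pet, ct: (Z, H, W) volumes, assumed co-registered and resampled to the same grid.
    Returns an array of shape (2 * (2*context + 1), H, W): channels = slices x modalities.
    """
    zmax = pet.shape[0] - 1
    idx = [min(max(z + d, 0), zmax) for d in range(-context, context + 1)]  # clamp at volume edges
    return np.concatenate([pet[idx], ct[idx]], axis=0)

def dice_and_jaccard(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """Overlap metrics between two binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    jaccard = (inter + eps) / (union + eps)
    return dice, jaccard

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pet = rng.random((64, 128, 128)).astype(np.float32)
    ct = rng.random((64, 128, 128)).astype(np.float32)
    x = make_25d_input(pet, ct, z=32, context=1)        # -> (6, 128, 128)
    pred = rng.random((128, 128)) > 0.5
    gt = rng.random((128, 128)) > 0.5
    print(x.shape, dice_and_jaccard(pred, gt))
```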
Funding: Funded by the National Natural Science Foundation of China (No. 62062003) and the Ningxia Natural Science Foundation Project (No. 2023AAC03293).
Abstract: The precise detection and segmentation of tumor lesions are very important for lung cancer computer-aided diagnosis. However, in PET/CT (Positron Emission Tomography/Computed Tomography) lung images, lesion shapes are complex, edges are blurred, and sample numbers are unbalanced. To solve these problems, this paper proposes a Multi-branch Cross-scale Interactive Feature fusion Transformer model (MCIF-Transformer Mask RCNN) for PET/CT lung tumor instance segmentation. The main innovative contributions are as follows. First, a ResNet-Transformer backbone network is used to extract global and local features from lung images; pixel dependence relationships are established in local and non-local fields to improve the model's perception ability. Second, a Cross-scale Interactive Feature Enhancement auxiliary network is designed to provide shallow features to the deep features, and the cross-scale interactive feature enhancement module (CIFEM) is used to strengthen attention to fine-grained features. Third, a Cross-scale Interactive Feature fusion FPN network (CIF-FPN) is constructed to realize bidirectional interactive fusion between deep and shallow features, so that low-level features are enhanced within deep semantic features. Finally, 4 ablation experiments, 3 detection comparison experiments, 3 segmentation comparison experiments, and 6 comparison experiments against two-stage and single-stage instance segmentation networks were conducted on PET/CT lung medical image datasets. The results show that the APdet, APseg, ARdet, and ARseg indexes improve by 5.5%, 5.15%, 3.11%, and 6.79%, respectively, compared with Mask RCNN (ResNet50). Based on the above research, precise detection and segmentation of the lesion region are realized in this paper. This method has positive significance for the detection of lung tumors.
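As an illustration of the bidirectional cross-scale fusion idea attributed to CIF-FPN, the following PyTorch sketch fuses a deep (low-resolution) feature map into a shallow (high-resolution) one and vice versa. The module name, channel sizes, and fusion details are assumptions rather than the authors' implementation.

```python
# Minimal sketch: bidirectional fusion between one shallow and one deep feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalFusion(nn.Module):
    def __init__(self, shallow_ch: int, deep_ch: int, out_ch: int = 256):
        super().__init__()
        self.shallow_proj = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)
        self.deep_proj = nn.Conv2d(deep_ch, out_ch, kernel_size=1)
        self.refine_shallow = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.refine_deep = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, shallow, deep):
        s = self.shallow_proj(shallow)                       # high-resolution branch
        d = self.deep_proj(deep)                             # low-resolution branch
        d_up = F.interpolate(d, size=s.shape[-2:], mode="nearest")
        s_down = F.adaptive_max_pool2d(s, output_size=d.shape[-2:])
        fused_shallow = self.refine_shallow(s + d_up)        # deep semantics injected into the shallow map
        fused_deep = self.refine_deep(d + s_down)            # shallow detail injected into the deep map
        return fused_shallow, fused_deep

if __name__ == "__main__":
    m = BidirectionalFusion(shallow_ch=256, deep_ch=1024)
    s = torch.randn(1, 256, 64, 64)
    d = torch.randn(1, 1024, 16, 16)
    fs, fd = m(s, d)
    print(fs.shape, fd.shape)   # torch.Size([1, 256, 64, 64]) torch.Size([1, 256, 16, 16])
```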
Abstract: The segmentation process requires separating the image region into sub-regions of similar properties, where each sub-region is a group of pixels sharing the same characteristics, such as texture or intensity. This paper suggests an efficient hybrid segmentation approach for different medical image modalities based on particle swarm optimization (PSO) and an improved fast fuzzy C-means clustering (IFFCM) algorithm. An extensive comparative study on different medical images is presented between the proposed approach and other previous segmentation techniques. The existing medical image segmentation techniques considered include clustering, thresholding, graph-based, edge-based, active contour, region-based, and watershed algorithms. This paper extensively analyzes and summarizes the comparative investigation of these techniques. Finally, an improvement based on combining these techniques is suggested. The obtained results demonstrate that the proposed hybrid medical image segmentation approach provides superior outcomes, in terms of the examined evaluation metrics, compared to the preceding segmentation techniques.
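The hybrid of PSO and fuzzy C-means can be pictured as follows: a small particle swarm searches for cluster centres that minimise the FCM objective on pixel intensities, and the best particle then seeds a standard FCM refinement. The NumPy sketch below is a generic PSO-FCM hybrid under those assumptions; the paper's improved fast FCM (IFFCM) variant is not reproduced, and the swarm size, iteration counts, and PSO coefficients are illustrative.

```python
# Minimal sketch: PSO-initialised fuzzy C-means on 1D pixel intensities.
import numpy as np

def fcm_objective(x, centers, m=2.0, eps=1e-9):
    d = np.abs(x[:, None] - centers[None, :]) + eps           # (N, C) pixel-to-centre distances
    u = (d ** (-2.0 / (m - 1))) / (d ** (-2.0 / (m - 1))).sum(axis=1, keepdims=True)
    return ((u ** m) * d ** 2).sum(), u                       # objective value and fuzzy memberships

def pso_init_centers(x, n_clusters=3, n_particles=20, iters=30, rng=None):
    rng = rng or np.random.default_rng(0)
    lo, hi = x.min(), x.max()
    pos = rng.uniform(lo, hi, size=(n_particles, n_clusters)) # each particle = one set of centres
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fcm_objective(x, p)[0] for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fcm_objective(x, p)[0] for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return np.sort(gbest)

def fcm(x, centers, m=2.0, iters=50):
    for _ in range(iters):
        _, u = fcm_objective(x, centers, m)
        centers = ((u ** m) * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return centers, u

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    image = np.concatenate([rng.normal(0.2, 0.05, 2000), rng.normal(0.8, 0.05, 2000)])
    init = pso_init_centers(image, n_clusters=2, rng=rng)
    centers, memberships = fcm(image, init)
    labels = memberships.argmax(axis=1)                       # hard segmentation label per pixel
    print(centers)
```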
Abstract: This paper presents a study of the segmentation of medical images. The paper provides a solid introduction to image enhancement along with image segmentation fundamentals. In the first step, morphological operations are employed to ensure image detail preservation and noise immunity; the objective of using morphological operations is to remove defects in the texture of the image. Second, the Fuzzy C-Means (FCM) clustering algorithm is used to modify the membership function based only on the spatial neighbors, instead of on the distances between pixels within the local spatial neighborhood and the cluster centers. The proposed technique is very simple to implement and significantly fast, since it is not necessary to compute the distance between the neighboring pixels and the cluster centers. It is also efficient when dealing with noisy images because of its ability to improve the membership partition matrix. Simulation results are presented for different medical image modalities: Ultrasound (US), X-ray (mammogram), Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance (MR) images are the main modalities used in this work. The obtained results illustrate that the proposed technique can achieve good results in a short time with efficient image segmentation. Simulation results on the different image modalities show that the proposed technique can achieve segmentation accuracies of 98.83%, 99.71%, 99.83%, 99.85%, and 99.74% for US, mammogram, CT, PET, and MR images, respectively.
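One way to realise a membership update that uses only spatial neighbours, in the spirit of the technique described above, is to average each pixel's membership map over a local window and blend it with the original memberships, so that no additional pixel-to-centre distances are computed. The window size and mixing weight in this sketch are assumptions, not the paper's exact scheme.

```python
# Minimal sketch: spatial regularisation of an FCM membership matrix.
import numpy as np
from scipy.ndimage import uniform_filter

def spatially_regularised_memberships(u: np.ndarray, window: int = 3, alpha: float = 0.5) -> np.ndarray:
    """u: (H, W, C) fuzzy memberships summing to 1 over C for every pixel."""
    smoothed = np.stack(
        [uniform_filter(u[..., c], size=window, mode="nearest") for c in range(u.shape[-1])],
        axis=-1,
    )
    mixed = (1.0 - alpha) * u + alpha * smoothed             # blend pixel-wise and neighbourhood evidence
    return mixed / mixed.sum(axis=-1, keepdims=True)         # renormalise to a valid fuzzy partition

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = rng.random((64, 64, 3))
    u /= u.sum(axis=-1, keepdims=True)
    u_reg = spatially_regularised_memberships(u)
    print(u_reg.shape, np.allclose(u_reg.sum(axis=-1), 1.0))
```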
Funding: Supported in part by the National Natural Science Foundation of China (61701403, 82122033, 81871379); the National Key Research and Development Program of China (2016YFC0103804, 2019YFC1521103, 2020YFC1523301, 2019YFC1521102); Key R&D Projects in Shaanxi Province (2019ZDLSF07-02, 2019ZDLGY10-01); Key R&D Projects in Qinghai Province (2020-SF-143); the China Postdoctoral Science Foundation (2018M643719); and the Young Talent Support Program of the Shaanxi Association for Science and Technology (20190107).
Abstract: Brown adipose tissue (BAT) is a kind of adipose tissue engaged in thermoregulatory thermogenesis, metaboloregulatory thermogenesis, and secretion. Current studies have revealed that BAT activity is negatively correlated with adult body weight, and BAT is considered a target tissue for the treatment of obesity and other metabolic diseases. Additionally, BAT activity presents certain differences across ages and genders. Clinically, BAT segmentation based on PET/CT data is a reliable method for brown fat research; however, most current BAT segmentation methods rely on the experience of doctors. In this paper, an improved U-Net network, ICA-Unet, is proposed to achieve automatic and precise segmentation of BAT. First, the traditional 2D convolution layers in the encoder are replaced with depth-wise over-parameterized convolution (Do-Conv) layers. Second, a channel attention block is introduced between the double-layer convolutions. Finally, an image information entropy (IIE) block is added in the skip connections to strengthen the edge features. The performance of this method is evaluated on a dataset of PET/CT images from 368 patients. The results demonstrate a strong agreement between the automatic segmentation of BAT and manual annotation by experts: the average Dice coefficient (DSC) is 0.9057, and the average Hausdorff distance is 7.2810. These experimental results suggest that the proposed method can achieve efficient and accurate automatic BAT segmentation and satisfy the clinical requirements for BAT.
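Of the three architectural changes, the channel attention block between the double-layer convolutions is the easiest to sketch. The PyTorch snippet below uses a standard squeeze-and-excitation gate as a stand-in for the paper's attention block; the Do-Conv layer and the IIE skip-connection block are not reproduced, and all layer sizes are illustrative.

```python
# Minimal sketch: channel attention inserted between a double-convolution pair.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                          # squeeze: global average per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                     # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.gate(x)

class DoubleConvWithAttention(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.attn = ChannelAttention(out_ch)
        self.conv2 = nn.Sequential(nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.conv2(self.attn(self.conv1(x)))

if __name__ == "__main__":
    block = DoubleConvWithAttention(1, 32)
    print(block(torch.randn(1, 1, 96, 96)).shape)   # torch.Size([1, 32, 96, 96])
```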
Abstract: To address the problem of incomplete pulmonary parenchyma segmentation with traditional methods, a novel automated segmentation method based on an eight-neighbor region growing algorithm with left-right scanning and four-corner rotating and scanning is proposed in this paper. The proposed method consists of four main stages: image binarization, rough segmentation of the lung, image denoising, and lung contour refining. First, the images are binarized and the regions of interest are extracted. After that, a rough segmentation of the lung is performed through a general region growing method. Then the improved eight-neighbor region growing is used to remove noise in the upper, middle, and bottom regions of the lung. Finally, erosion and dilation operations are used to smooth the lung boundary. The proposed method was validated on chest positron emission tomography-computed tomography (PET-CT) data of 30 cases from a hospital in Shanxi, China. Experimental results show that the method achieves an average volume overlap ratio of 96.21 ± 0.39% with the manual segmentation results. Compared with existing methods, the proposed algorithm segments the lung in PET-CT images more efficiently and accurately.
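For reference, a minimal eight-neighbour region growing step might look like the NumPy sketch below: starting from a seed pixel, neighbours whose intensity stays within a tolerance of the seed are absorbed into the region. The left-right and four-corner scanning strategy and the boundary smoothing described in the abstract are not reproduced; the tolerance and seed values are assumptions.

```python
# Minimal sketch: eight-neighbour region growing on a 2D slice.
from collections import deque
import numpy as np

NEIGHBOURS_8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def region_grow_8(image: np.ndarray, seed: tuple, tol: float = 0.1) -> np.ndarray:
    """Return a boolean mask of pixels connected to `seed` within `tol` of its intensity."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = image[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in NEIGHBOURS_8:
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] and abs(image[rr, cc] - ref) <= tol:
                mask[rr, cc] = True
                queue.append((rr, cc))
    return mask

if __name__ == "__main__":
    img = np.zeros((32, 32))
    img[8:24, 8:24] = 1.0                       # a bright square standing in for lung tissue
    lung = region_grow_8(img, seed=(16, 16), tol=0.2)
    print(lung.sum())                           # 256 pixels inside the square
```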