Abstract: With the widespread application of deep learning in computer vision, enabling medical imaging technology to assist doctors in diagnosis has great practical and research significance. To address the shortcomings of the traditional U-Net model in 3D spatial information extraction, model over-fitting, and the low degree of semantic information fusion, an improved medical image segmentation model is used to achieve more accurate segmentation of medical images. In this model, we make full use of residual networks (ResNet) to alleviate the over-fitting problem. To process and aggregate data at different scales, Inception modules replace the traditional convolutional layers, and dilated convolution is used to enlarge the receptive field. A conditional random field (CRF) refines the segmentation contours. Compared with the traditional 3D U-Net, the segmentation accuracy of the improved model on liver and tumor images increases by 2.89% and 7.66%, respectively. As part of the image processing pipeline, the method in this paper not only serves medical image segmentation but also lays the foundation for subsequent 3D reconstruction work.
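The receptive-field gain from dilated convolution can be checked with a small back-of-the-envelope calculation. This is a generic illustration of the standard formulas, not code from the paper:

```python
def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation rate d."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers):
    """Receptive field of a stack of stride-1 convolution layers.

    layers: list of (kernel_size, dilation) tuples.
    """
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf

# Three plain 3x3 convolutions give a 7-pixel receptive field:
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # 7
# Same depth with dilation rates 1, 2, 4 more than doubles it,
# with no extra parameters:
print(receptive_field([(3, 1), (3, 2), (3, 4)]))  # 15
```

This is why the model can trade dilation for depth when it needs more 3D spatial context.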
Abstract: Currently, deep learning is widely used in medical image segmentation and has achieved good results. However, 3D medical image segmentation tasks with diverse lesion characteristics, blurred edges, and unstable positions require complex networks with large numbers of parameters. This is computationally expensive and places high demands on equipment, making the networks hard to deploy in hospitals. In this work, we propose a network lightweighting method and apply it to a 3D CNN-based network, experimenting on a COVID-19 lesion segmentation dataset. Specifically, we replace each 3D convolution with three cascaded one-dimensional convolutions, and fold instance normalization into the preceding one-dimensional convolution to accelerate network inference. In addition, we simplify the network's test-time augmentation and deep supervision. Experiments show that, compared with the original network, the lightweight network reduces per-sample prediction time and memory usage by 50% and the number of parameters by 60%. The training time of one epoch is also reduced by 50%, with the segmentation accuracy dropping within an acceptable range.
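The parameter saving from replacing a full 3D kernel with three cascaded one-dimensional convolutions can be sketched with simple counting. The channel sizes below are illustrative and bias terms are ignored; the exact saving in the paper also depends on its other simplifications:

```python
def conv3d_params(c_in, c_out, k):
    """Parameters of a full k x k x k 3D convolution (bias ignored)."""
    return c_in * c_out * k ** 3

def cascaded_1d_params(c_in, c_out, k):
    """Parameters of three cascaded one-dimensional convolutions
    (k x 1 x 1), (1 x k x 1), (1 x 1 x k) replacing the 3D kernel."""
    return c_in * c_out * k + 2 * c_out * c_out * k

c, k = 64, 3
full = conv3d_params(c, c, k)
factored = cascaded_1d_params(c, c, k)
print(full, factored)                    # 110592 36864
print(round(1 - factored / full, 3))     # 0.667 -> two thirds fewer weights
```

With equal channel widths the factored form needs 3k·C² weights instead of k³·C², which for k = 3 is in the same ballpark as the 60% parameter reduction the abstract reports.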
Funding: The authors received funding for this study from the Sichuan Science and Technology Program (No. 18YYJC1917).
Abstract: Medical image segmentation plays an important role in clinical diagnosis, quantitative analysis, and treatment. Since 2015, U-Net-based approaches have been widely used for medical image segmentation. The purpose of the U-Net expansive path is to map low-resolution encoder feature maps to feature maps at full input resolution. However, the consecutive deconvolution and convolution operations in the expansive path lose some high-level information, and retaining more of that information can make segmentation more accurate. In this paper, we propose MU-Net, a novel multi-path upsampling convolution network that retains more high-level information. MU-Net consists of three main parts: a contracting path, skip connections, and multiple expansive paths. The proposed architecture is evaluated on three different medical imaging datasets. Our experiments show that MU-Net improves the segmentation performance of U-Net-based methods on these datasets while significantly improving computational efficiency, reducing the number of parameters by more than half.
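The abstract does not say where MU-Net's parameter savings come from, but the arithmetic of convolutional parameter counts shows why trimming channel widths in the expansive paths pays off quickly. A generic illustration, not MU-Net's actual layer sizes:

```python
def conv2d_params(c_in, c_out, k, bias=True):
    """Parameter count of a single 2D convolution layer."""
    return c_in * c_out * k * k + (c_out if bias else 0)

# Halving both the input and output channel widths of a layer roughly
# quarters its parameter count, since weights scale with c_in * c_out:
wide = conv2d_params(128, 256, 3)
narrow = conv2d_params(64, 128, 3)
print(wide, narrow)  # 295168 73856 -> about 4x fewer
```

Parameter counts are dominated by the c_in × c_out product, so even modest channel reductions across many decoder layers can more than halve a network's size.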
Abstract: This paper presents a novel computerized technique for the segmentation of nuclei in hematoxylin and eosin (H&E) stained histopathology images. The purpose of this study is to overcome the challenges of automated nuclei segmentation that arise from the diversity of nuclei structures across tissue types and staining protocols, as well as from variable-sized and overlapping nuclei. To this end, the proposed approach uses an ensemble of U-Net architectures with various convolutional neural network (CNN) encoder backbones, along with stain normalization and test-time augmentation, to improve segmentation accuracy. Additionally, a Structure-Preserving Color Normalization (SPCN) technique is employed as a preprocessing step for stain normalization. The proposed model was trained and tested on both single-organ and multi-organ datasets, yielding an F1 score of 84.11%, mean Intersection over Union (IoU) of 81.67%, Dice score of 84.11%, accuracy of 92.58%, and precision of 83.78% on the multi-organ dataset, and an F1 score of 87.04%, mean IoU of 86.66%, Dice score of 87.04%, accuracy of 96.69%, and precision of 87.57% on the single-organ dataset. These findings demonstrate that the proposed model ensemble, coupled with the right pre-processing and post-processing techniques, enhances nuclei segmentation capabilities.
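Test-time augmentation of the kind mentioned here can be sketched in a few lines: predict on flipped copies of the image, undo each flip, and average. This is a generic flip-based illustration with a stand-in model, not the paper's exact pipeline:

```python
import numpy as np

def predict_with_tta(model, image):
    """Average a model's predictions over flip test-time augmentations.

    `model` is any callable mapping an (H, W) array to an (H, W)
    probability map; here it stands in for a trained segmenter.
    """
    flips = [
        (lambda x: x,             lambda y: y),
        (lambda x: x[:, ::-1],    lambda y: y[:, ::-1]),   # horizontal
        (lambda x: x[::-1, :],    lambda y: y[::-1, :]),   # vertical
        (lambda x: x[::-1, ::-1], lambda y: y[::-1, ::-1]),
    ]
    # Predict on each flipped copy, undo the flip, then average.
    preds = [undo(model(do(image))) for do, undo in flips]
    return np.mean(preds, axis=0)
```

Averaging over flips smooths out orientation-dependent errors, which is particularly useful for round-ish, rotation-agnostic targets such as nuclei.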
Funding: This work was supported in part by the National Natural Science Foundation of China (No. 62002392), in part by the Key Research and Development Plan of Hunan Province (No. 2019SK2022), in part by the Natural Science Foundation of Hunan Province (Nos. 2020JJ4140 and 2020JJ4141), and in part by the Postgraduate Excellent Teaching Team Project of Hunan Province (Grant [2019]370-133).
Abstract: At present, medical image segmentation is mainly based on fully supervised model training, and labeling the datasets consumes a great deal of time and labor. To address this issue, we propose a semi-supervised medical image segmentation model, based on a generative adversarial network framework, for the automated segmentation of arteries. The network has two main parts: a segmentation network and a discriminator network that evaluates the segmentation results. In the initial stage of training, a fully supervised method gives the segmentation and discriminator networks basic segmentation and discrimination capabilities. The model is then trained semi-supervised: the discriminator network generates pseudo-labels from the segmentation results, which are used for semi-supervised training of the segmentation network. The proposed method can segment medical images using only a small annotated subset of the data, effectively mitigating the shortage of annotated medical image data.
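The pseudo-labeling step can be sketched as follows. The function names, array shapes, and the 0.8 confidence threshold are illustrative assumptions, not details from the paper:

```python
import numpy as np

def make_pseudo_labels(seg_prob, disc_conf, threshold=0.8):
    """Sketch of pseudo-label generation in adversarial semi-supervised
    segmentation.

    seg_prob  -- segmentation network's per-pixel foreground probability
    disc_conf -- discriminator's per-pixel confidence that the prediction
                 resembles a ground-truth mask
    Returns hard pseudo-labels plus a mask selecting trusted pixels.
    """
    labels = (seg_prob >= 0.5).astype(np.uint8)   # hard pseudo-labels
    trusted = disc_conf >= threshold              # pixels worth training on
    return labels, trusted

seg = np.array([[0.9, 0.2], [0.6, 0.1]])
conf = np.array([[0.95, 0.5], [0.85, 0.9]])
labels, trusted = make_pseudo_labels(seg, conf)
print(labels.tolist())   # [[1, 0], [1, 0]]
print(trusted.tolist())  # [[True, False], [True, True]]
```

Only the trusted pixels contribute to the unsupervised loss, so the segmentation network learns from its own confident predictions on unlabeled images.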
Funding: The authors acknowledge the support of the Deputy for Research and Innovation, Ministry of Education, Kingdom of Saudi Arabia, for funding this research through a project (NU/IFC/ENT/01/014) under the institutional funding committee at Najran University, Kingdom of Saudi Arabia.
Abstract: The human brain consists of millions of cells that control the overall function of the human body. When these cells behave abnormally, brain tumors occur. Precise, early-stage brain tumor detection has always been a challenge for medical experts. To handle this issue, various deep learning techniques for brain tumor detection and segmentation have been developed on different datasets with fruitful results, but early detection of brain tumors, which can save lives, remains an open problem. For this purpose, we propose a novel U-Net-based convolutional neural network (CNN) technique to detect and segment brain tumors in magnetic resonance imaging (MRI). A publicly available 2-dimensional Multimodal Brain Tumor Image Segmentation (BRATS2020) dataset with 1840 MRI images of brain tumors, each of size 240×240 pixels, is used. After initial dataset preprocessing, the proposed model is trained by dividing the dataset into three parts: training, validation, and testing. Our model attained an accuracy of 0.98 on the BRATS2020 dataset, the highest among the existing techniques compared.
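The three-way dataset division can be sketched as below. The 70/15/15 ratio and the fixed seed are illustrative assumptions; the abstract only states that the data were divided into training, validation, and testing parts:

```python
import random

def split_dataset(items, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle sample IDs and split them into train/validation/test lists."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n_val = round(len(items) * val_frac)
    n_test = round(len(items) * test_frac)
    return (items[n_val + n_test:],          # train
            items[:n_val],                   # validation
            items[n_val:n_val + n_test])     # test

ids = list(range(1840))  # 1840 MRI slices, as in the dataset described
tr, va, te = split_dataset(ids)
print(len(tr), len(va), len(te))  # 1288 276 276
```

Fixing the seed makes the split reproducible, which matters when reporting a single accuracy figure on a held-out set.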
Abstract: In the study of composite material performance, X-ray computed tomography (XCT) scanning has always been one of the important means of inspecting internal structures. CT image segmentation effectively improves the accuracy of the subsequent material feature extraction process, which is of great significance to the study of material performance. This study focuses on the low segmentation accuracy caused by the adhesion of fiber cross-sections in composite CT images. In the core layer area, region validity is evaluated by a morphological indicator, and an iterative segmentation strategy based on the watershed algorithm is proposed. In the transition layer area, a U-Net neural network model trained on manually created labels predicts the segmentation result. On this basis, a CT image segmentation method for fiber composite materials combining the improved watershed algorithm and the U-Net model is proposed. Experiments verify that the method adapts well and effectively to the CT image segmentation problem of composite materials, and that segmentation accuracy is significantly improved over the original method, ensuring the accuracy and robustness of the subsequent fiber feature extraction process.
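One plausible choice for the morphological validity indicator is circularity, since adhered fiber cross-sections look elongated while valid single fibers are nearly round. The abstract does not name its indicator, so this is an assumption:

```python
import math

def circularity(area, perimeter):
    """Roundness indicator 4*pi*A / P**2: equals 1.0 for a perfect circle
    and drops toward 0 for elongated or adhered regions."""
    return 4.0 * math.pi * area / perimeter ** 2

# A single circular fiber cross-section of radius 10:
r = 10.0
print(round(circularity(math.pi * r ** 2, 2 * math.pi * r), 6))  # 1.0
# Two adhered cross-sections look elongated, roughly like a 40x5 box:
print(round(circularity(200.0, 90.0), 2))  # 0.31
```

Regions scoring below a threshold would be flagged as adhered and re-submitted to the iterative watershed step, while high-scoring regions are accepted as valid.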
Funding: This work was supported in part by the National Natural Science Foundation of China (No. 62172299), in part by the Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0100), and in part by the Fundamental Research Funds for the Central Universities of China.
Abstract: Deep neural networks are now widely used in medical image segmentation for their superior performance and freedom from manual feature extraction. U-Net has been the baseline model from the very beginning, owing to a symmetrical U-structure that improves feature extraction and fusion and suits small datasets. To enhance the segmentation performance of U-Net, cascaded U-Net places two U-Nets in succession to segment targets from coarse to fine. However, a plain cascaded U-Net has too few connections between the two networks, so the contextual information learned by the former U-Net cannot be fully used by the latter. In this article, we devise the novel Inner Cascaded U-Net and Inner Cascaded U^(2)-Net as improvements over the plain cascaded U-Net for medical image segmentation. The proposed Inner Cascaded U-Net adds inner nested connections between the two U-Nets to share more contextual information. To further boost segmentation performance, we propose Inner Cascaded U^(2)-Net, which applies residual U-blocks to capture more global contextual information at different scales. The proposed models can be trained from scratch in an end-to-end fashion and have been evaluated on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2013 and ISBI Liver Tumor Segmentation Challenge (LiTS) datasets against U-Net, cascaded U-Net, U-Net++, U^(2)-Net, and state-of-the-art methods. Our experiments demonstrate that the proposed Inner Cascaded U-Net and Inner Cascaded U^(2)-Net achieve better segmentation performance in terms of Dice similarity coefficient and Hausdorff distance, and produce finer outline segmentation.
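The Dice similarity coefficient used for evaluation here is straightforward to compute from two binary masks. A minimal reference implementation (the small epsilon guarding against empty masks is a common convention, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A & B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(float(dice_coefficient(a, b)), 3))  # 0.667
```

Dice rewards overlap relative to total mask size, while the Hausdorff distance (the paper's other metric) penalizes the worst boundary deviation, so the two together capture both region and outline quality.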
Abstract: Glaucoma is a prevalent cause of blindness worldwide. If not treated promptly, it causes vision and quality of life to deteriorate; according to statistics, glaucoma affects approximately 65 million individuals globally. Fundus image segmentation for glaucoma centers on the optic disc (OD) and optic cup (OC). This paper proposes a computational model to segment and classify retinal fundus images for glaucoma detection. Different data augmentation techniques were applied to prevent overfitting, and several data pre-processing approaches were employed to improve image quality and achieve high accuracy. The segmentation models are based on an attention U-Net with three separate convolutional neural network (CNN) backbones: Inception-v3, Visual Geometry Group 19 (VGG19), and Residual Neural Network 50 (ResNet50). The classification models employ modified versions of the same three CNN architectures. On the RIM-ONE dataset, the attention U-Net with ResNet50 as the encoder backbone achieved the best OD segmentation accuracy of 99.58%, and the modified Inception-v3 achieved the highest glaucoma classification accuracy of 98.79% among the evaluated architectures.
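Accurate OD/OC masks matter because glaucoma screening typically derives the vertical cup-to-disc ratio (CDR) from them. A minimal sketch of that downstream computation; CDR is standard clinical practice, not a metric this abstract itself reports:

```python
import numpy as np

def vertical_cup_to_disc_ratio(od_mask, oc_mask):
    """Vertical cup-to-disc ratio from binary optic-disc and optic-cup
    masks: the ratio of their vertical extents in pixels."""
    def vertical_extent(mask):
        rows = np.any(mask, axis=1)          # rows containing the structure
        idx = np.where(rows)[0]
        return 0 if idx.size == 0 else idx[-1] - idx[0] + 1
    disc = vertical_extent(od_mask)
    return vertical_extent(oc_mask) / disc if disc else 0.0

# Toy masks: a disc spanning 8 rows, a cup spanning 4 rows inside it.
od = np.zeros((10, 10), dtype=bool); od[1:9, 2:8] = True
oc = np.zeros((10, 10), dtype=bool); oc[3:7, 3:7] = True
print(vertical_cup_to_disc_ratio(od, oc))  # 0.5
```

A larger CDR (commonly above about 0.6) is a clinical warning sign for glaucoma, which is why segmentation accuracy on OD and OC translates directly into screening quality.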