Identification of the ice channel is the basic technology for developing intelligent ships in ice-covered waters, and it is important for ensuring the safety and economy of navigation. In the Arctic, merchant ships with low ice class often navigate in channels opened up by icebreakers. Navigation in the ice channel depends to a large extent on the captain's maneuvering skills and experience, and the ship may get stuck if steered into ice fields off the channel. Under this circumstance, it is very important to study how to identify the boundary lines of ice channels with a reliable method. In this paper, a two-stage ice channel identification method is developed based on image segmentation and corner point regression. The first stage employs an image segmentation method to extract channel regions. In the second stage, an intelligent corner regression network is proposed to extract the channel boundary lines from the channel region. A non-intelligent angle-based filtering and clustering method is also proposed and compared with the corner point regression network. The training and evaluation of the segmentation method and the corner regression network are carried out on synthetic and real ice channel datasets. The evaluation results show that the accuracy of the method using the corner point regression network in the second stage reaches 73.33% on the synthetic ice channel dataset and 70.66% on the real ice channel dataset, and the processing speed can reach up to 14.58 frames per second.
With the rapid development of artificial intelligence and the widespread use of the Internet of Things, semantic communication, as an emerging communication paradigm, has been attracting great interest. Taking image transmission as an example, from the semantic communication point of view, not all pixels in an image are equally important to a given receiver. Existing semantic communication systems directly perform semantic encoding and decoding on the whole image, so the region of interest cannot be identified. In this paper, we propose a novel semantic communication system for image transmission that can distinguish between Regions Of Interest (ROI) and Regions Of Non-Interest (RONI) based on semantic segmentation, where a semantic segmentation algorithm is used to classify each pixel of the image and separate ROI from RONI. The system also enables high-quality transmission of ROI with lower communication overhead by sending the regions through different semantic communication networks with different bandwidth requirements. An improved metric, θPSNR, is proposed to evaluate the transmission accuracy of the novel semantic transmission network. Experimental results show that our proposed system achieves a significant performance improvement compared with existing approaches, namely, existing semantic communication approaches and the conventional approach without semantics.
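The abstract does not spell out how θPSNR is computed; a plausible reading is a PSNR whose squared-error term weights ROI pixels more heavily than RONI pixels. A minimal sketch under that assumption (the weighting scheme and the `theta` parameter are illustrative, not taken from the paper):

```python
import numpy as np

def theta_psnr(original, reconstructed, roi_mask, theta=0.8, peak=255.0):
    """ROI-weighted PSNR: ROI pixels get weight theta, RONI pixels 1 - theta.

    This is an assumed formulation, not the paper's exact definition.
    """
    err = (original.astype(float) - reconstructed.astype(float)) ** 2
    w = np.where(roi_mask, theta, 1.0 - theta)  # per-pixel weights
    wmse = np.sum(w * err) / np.sum(w)          # weighted mean squared error
    return 10.0 * np.log10(peak ** 2 / wmse)
```

With such a metric, the same absolute error hurts the score more when it falls inside the ROI than when it falls in the RONI, which is the behaviour a region-aware transmission system wants to be judged on.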
To enhance the diversity and distribution uniformity of the initial population, as well as to avoid local extrema in the Chimp Optimization Algorithm (CHOA), this paper improves the CHOA based on chaos initialization and Cauchy mutation. First, Sin chaos is introduced to improve the random population initialization scheme of the CHOA, which not only guarantees the diversity of the population but also enhances the distribution uniformity of the initial population. Next, Cauchy mutation is added to strengthen the global search ability of the CHOA during position (threshold) updating, keeping the CHOA from falling into local optima. Finally, an improved CHOA (CICMCHOA) is formed through the combination of chaos initialization and Cauchy mutation. Taking fuzzy Kapur entropy as the objective function, this paper applies CICMCHOA to natural and medical image segmentation and compares it with four algorithms, including the improved Satin Bowerbird Optimizer (ISBO), improved Cuckoo Search (ICS), etc. The experimental results, based on both visual inspection and specific indicators, demonstrate that CICMCHOA delivers superior segmentation effects in image segmentation.
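The two ingredients are standard and easy to sketch. Below is a hedged illustration of a Sin-map chaotic initializer and a Cauchy mutation step; the exact map form, seed, and scale used by CICMCHOA are not given in the abstract, so these are generic textbook versions:

```python
import numpy as np

def sin_chaos_init(pop_size, dim, lb, ub, x0=0.7):
    """Sin-map chaotic sequence mapped into [lb, ub].

    Illustrative map x_{k+1} = |sin(pi * x_k)|; the paper's exact map
    and parameters may differ.
    """
    seq = np.empty((pop_size, dim))
    x = np.full(dim, x0)
    for i in range(pop_size):
        x = np.abs(np.sin(np.pi * x))   # chaotic iterate in [0, 1]
        seq[i] = lb + x * (ub - lb)     # scale to the search bounds
    return seq

def cauchy_mutate(position, scale=1.0, rng=None):
    """Perturb a candidate with heavy-tailed Cauchy noise to escape local optima."""
    rng = rng or np.random.default_rng(0)
    return position + scale * rng.standard_cauchy(position.shape)
```

The heavy tails of the Cauchy distribution occasionally produce large jumps, which is exactly the property exploited to pull the search out of a local optimum.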
Graph learning, when used as a semi-supervised learning (SSL) method, performs well for classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch-neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local maximum constraint for the active learning acquisition function that determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739) while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to further increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improvements in accuracy, our approach can greatly reduce the number of labeled pixels needed to achieve the same level of accuracy obtained with randomly selected labeled pixels.
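The "graph local maximum constraint" can be illustrated with a toy selector: keep only the unlabeled nodes whose acquisition value is at least that of every graph neighbor, then take the top-scoring survivors as the batch. This is one illustrative reading of the constraint, not the authors' implementation:

```python
def local_max_batch(acquisition, neighbors, batch_size):
    """Select nodes whose acquisition value is a local maximum over their
    graph neighborhood, then keep the `batch_size` highest-scoring ones.

    acquisition: dict node -> acquisition value
    neighbors:   dict node -> list of neighboring nodes
    """
    local_max = [i for i, nbrs in neighbors.items()
                 if all(acquisition[i] >= acquisition[j] for j in nbrs)]
    local_max.sort(key=lambda i: -acquisition[i])  # best first
    return local_max[:batch_size]
```

Spreading the batch over local maxima keeps the selected pixels from clustering in one uncertain region, which is the usual motivation for such a constraint in batch active learning.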
The growing demand for energy-efficient solutions has led to increased interest in analyzing building facades, as buildings contribute significantly to energy consumption in urban environments. However, conventional image segmentation methods often struggle to capture fine details such as edges and contours, limiting their effectiveness in identifying areas prone to energy loss. To address this challenge, we propose a novel segmentation methodology that combines object-wise processing with a two-stage deep learning model, Cascade U-Net. Object-wise processing isolates components of the facade, such as walls and windows, for independent analysis, while Cascade U-Net incorporates contour information to enhance segmentation accuracy. The methodology involves four steps: object isolation, which crops and adjusts the image based on bounding boxes; contour extraction, which derives contours; image segmentation, which modifies and reuses contours as guide data in Cascade U-Net to segment areas; and segmentation synthesis, which integrates the results obtained for each object to produce the final segmentation map. Applied to a dataset of Korean building images, the proposed method significantly outperformed traditional models, demonstrating improved accuracy and the ability to preserve critical structural details. Furthermore, we applied this approach to classify window thermal loss in real-world scenarios using infrared images, showing its potential to identify windows vulnerable to energy loss. Notably, our Cascade U-Net, which builds upon the relatively lightweight U-Net architecture, also exhibited strong performance, reinforcing the practical value of this method. Our approach offers a practical solution for enhancing energy efficiency in buildings by providing more precise segmentation results.
With the development of underwater sonar detection technology, the simultaneous localization and mapping (SLAM) approach has attracted much attention in the underwater navigation field in recent years. However, the weak detection ability of a single vehicle limits SLAM performance in wide areas, so cooperative SLAM using multiple vehicles has become an important research direction. The key factor in cooperative SLAM is timely and efficient sonar image transmission among underwater vehicles. However, the limited bandwidth of underwater acoustic channels conflicts with the large volume of sonar image data, making it essential to compress the images before transmission. Recently, deep neural networks have shown great value in image compression by virtue of their powerful learning ability, but existing neural-network-based sonar image compression methods usually focus on pixel-level information while neglecting semantic-level information. In this paper, we propose a novel underwater acoustic transmission scheme called UAT-SSIC, which includes a semantic segmentation-based sonar image compression (SSIC) framework and a joint source-channel codec, to improve the accuracy of the semantic information of the reconstructed sonar image at the receiver. The SSIC framework consists of an auto-encoder-based sonar image compression network whose quality is measured by a semantic segmentation network's residual. Considering that sonar images have blurred target edges, the semantic segmentation network uses a special dilated convolutional neural network (DiCNN) to enhance segmentation accuracy by expanding the range of the receptive fields. A joint source-channel codec with unequal error protection is proposed that adjusts the power level of the transmitted data to deal with sonar image transmission errors caused by the harsh underwater acoustic channel. Experimental results demonstrate that our method preserves more semantic information, with advantages over existing methods at the same compression ratio, and also improves the error tolerance and packet-loss resistance of transmission.
Subarachnoid haemorrhage (SAH), mostly caused by the rupture of an intracranial aneurysm, is a common disease with a high fatality rate. SAH lesions are generally diffusely distributed, showing a variety of scales with irregular edges. These complex characteristics make SAH segmentation a challenging task. To cope with these difficulties, a u-shaped deformable transformer (UDT) is proposed for SAH segmentation. Specifically, first, a multi-scale deformable attention (MSDA) module is exploited to model the diffuseness and scale-variant characteristics of SAH lesions; the MSDA module can fuse features at different scales and dynamically adjust the attention field of each element to generate discriminative multi-scale features. Second, a cross deformable attention-based skip connection (CDASC) module is designed to model the irregular edge characteristic of SAH lesions; the CDASC module can utilise the spatial details of encoder features to refine the spatial information of decoder features. Third, the MSDA and CDASC modules are embedded into the backbone Res-UNet to construct the proposed UDT. Extensive experiments are conducted on the self-built SAH-CT dataset and two public medical datasets (GlaS and MoNuSeg). Experimental results show that the presented UDT achieves state-of-the-art performance.
Semantic segmentation of driving scene images is crucial for autonomous driving. While deep learning technology has significantly improved daytime image semantic segmentation, nighttime images pose challenges due to factors like poor lighting and overexposure, making it difficult to recognize small objects. To address this, we propose an Image Adaptive Enhancement (IAEN) module comprising a parameter predictor (Edip), multiple image processing filters (Mdif), and a Detail Processing Module (DPM). Edip combines the image processing filters to predict parameters like exposure and hue, optimizing image quality. We adopt a novel image encoder that enhances parameter prediction accuracy by enabling Edip to handle features at different scales. DPM strengthens overlooked image details, extending the IAEN module's functionality. After the segmentation network, we integrate a Depth Guided Filter (DGF) to refine segmentation outputs. The entire network is trained end to end, with segmentation results guiding the optimization of parameter prediction, promoting self-learning and network improvement. This lightweight and efficient network architecture is particularly suitable for the challenges of nighttime image segmentation. Extensive experiments validate significant performance improvements of our approach on the ACDC-night and NightCity datasets.
Magnetic resonance (MR) imaging is a widely employed medical imaging technique that produces detailed anatomical images of the human body. The segmentation of MR images plays a crucial role in medical image analysis, as it enables accurate diagnosis, treatment planning, and monitoring of various diseases and conditions. Due to the lack of sufficient medical images, achieving accurate segmentation is challenging, especially with deep learning networks. The aim of this work is to study transfer learning from T1-weighted (T1-w) to T2-weighted (T2-w) MR sequences to enhance bone segmentation with minimal computational resources. Using an excitation-based convolutional neural network, four transfer learning mechanisms are proposed: transfer learning without fine-tuning, open fine-tuning, conservative fine-tuning, and hybrid transfer learning. Moreover, a multi-parametric segmentation model is proposed that uses T2-w MR as an intensity-based augmentation technique. The novelty of this work lies in the hybrid transfer learning approach, which overcomes the overfitting issue and preserves the features of both modalities with minimal computation time and resources. The segmentation results are evaluated using 14 clinical 3D brain MR and CT images. The results reveal that hybrid transfer learning is superior for bone segmentation in terms of performance and computation time, with a DSC of 0.5393 ± 0.0007. Although T2-w-based augmentation has no significant impact on the performance of T1-w MR segmentation, it helps improve T2-w MR segmentation and develop a multi-sequence segmentation model.
In the present research, we describe a computer-aided detection (CAD) method for automatic fetal head circumference (HC) measurement in 2D ultrasound images during all trimesters of pregnancy. The HC can be used to determine gestational age and track fetal development. This automated approach is particularly valuable in low-resource settings where access to trained sonographers is limited. The CAD system consists of two steps: first, Haar-like features were extracted from ultrasound images to train a random forest classifier to locate the fetal skull; we then measured the HC using dynamic programming, an elliptical fit, and a Hough transform. The CAD system was trained on 999 images (the HC18 challenge dataset) and verified on an independent test set of 335 images from all trimesters, which was manually annotated by an experienced sonographer and a medical researcher. We used the crown-rump length (CRL) measurement to compute the reference gestational age (GA). In the first, second, and third trimesters, the median difference between the reference GA and the GA estimated by the experienced sonographer was 0.7 ± 2.7, 0.0 ± 4.5, and 2.0 ± 12.0 days, respectively. The median difference between the reference GA and the medical researcher's GA was 1.5 ± 3.0, 1.9 ± 5.0, and 4.0 ± 14.0 days. The median difference between the reference GA and the CAD system's GA ranged between 0.5 and 5.0 days, with variations of 2.9 to 12.5 days. These outcomes reveal that the CAD system performs on par with an expert sonographer. The presented system achieves results that are comparable to or better than those reported in the literature. We have thus developed and assessed a computerized approach for HC evaluation that covers all trimesters of gestation.
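Once an ellipse has been fitted to the skull contour, the HC is the ellipse's perimeter, for which Ramanujan's closed-form approximation is a standard choice (the abstract does not state which perimeter formula the CAD system actually uses):

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's approximation for the perimeter of an ellipse with
    semi-axes a and b - a common surrogate for head circumference once an
    ellipse has been fitted to the skull contour."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
```

For a circle (a = b) the formula reduces exactly to 2πr, and for moderate eccentricities typical of fetal skulls its relative error is far below measurement noise.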
Computed Tomography (CT) is a commonly used technology in Printed Circuit Board (PCB) non-destructive testing, and element segmentation of CT images is a key subsequent step. With the development of deep learning, researchers have begun to exploit the “pre-training and fine-tuning” paradigm for multi-element segmentation, reducing the time spent on manual annotation. However, existing element segmentation models focus only on overall pixel-level accuracy, ignoring whether element connectivity relationships are correctly identified. To this end, this paper proposes a PCB CT image element segmentation model that optimizes the semantic perception of connectivity relationships (OSPC-seg). The overall training process adopts a “pre-training and fine-tuning” scheme. A loss function that optimizes the semantic perception of circuit connectivity relationships (OSPC Loss) is designed to alleviate the class imbalance problem and improve the correct connectivity rate. A correct connectivity rate (CCR) index is also proposed to evaluate the model's ability to recognize connectivity relationships. Experiments show that the mIoU and CCR of OSPC-seg on our datasets are 90.1% and 97.0%, improvements of 1.5% and 1.6%, respectively, over the baseline model. The visualization results show that segmentation performance at connection positions is significantly improved, which also demonstrates the effectiveness of OSPC-seg.
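The abstract does not define CCR precisely, but any connectivity-aware metric ultimately rests on counting connected components of the segmented regions and comparing them with the ground truth. A minimal 4-connected component counter for a binary grid:

```python
from collections import deque

def count_components(mask):
    """Count 4-connected components in a binary grid (list of lists of 0/1) -
    the kind of connectivity check a correct-connectivity-rate metric builds on."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new component found
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                        # BFS flood fill
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count
```

A segmentation that merges two traces or breaks one in half changes this count even when its pixel accuracy barely moves, which is exactly the failure mode OSPC-seg targets.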
Deep learning has been extensively applied to medical image segmentation, resulting in significant advancements in deep neural networks for this field since the notable success of U-Net in 2015. However, applying deep learning models to ocular medical image segmentation poses unique challenges compared to other body parts, due to the complexity, small size, and blurriness of such images, coupled with the scarcity of data. This article provides a comprehensive review of medical image segmentation from two perspectives: the development of deep network structures and the application of segmentation to ocular imaging. First, the article gives an overview of medical imaging, data processing, and performance evaluation metrics. It then analyzes recent developments in U-Net-based network structures. Finally, for the segmentation of ocular medical images, the application of deep learning is reviewed and categorized by the type of ocular tissue.
Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, the current integration of CNN and Transformer technology has two key limitations. First, most methods either overlook or fail to fully exploit the complementary nature of local and global features. Second, methods that combine CNNs and Transformers often disregard the value of integrating the multi-scale encoder features of the dual-branch network to enhance the decoding features. To address these issues, we present a dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of the Swin Transformer and CNNs to generate complementary global and local features. We then designed the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, a Channel-wise Cross-fusion Transformer (CCT) aggregates multi-scale features, and a Feature Fusion Module (FFM) effectively aggregates the prominent feature regions of the two branches from a spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) emphasizes the significant channel features between the up-sampled features and the features generated by the FCF module to enhance the details of the decoding. Experimental results demonstrate that DCFNet achieves enhanced segmentation accuracy, and our segmentation framework exhibits a superior level of competitiveness compared to other state-of-the-art (SOTA) methods. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial early diagnoses of lesion areas.
In this paper, we consider the Chan–Vese (C-V) model for image segmentation and obtain its numerical solution accurately and efficiently. For this purpose, we present a local radial basis function method based on a Gaussian kernel (GA-LRBF) for spatial discretization. Compared to the standard radial basis function method, this approach consumes less CPU time and maintains good stability because it uses only a small subset of points in the whole computational domain. Additionally, since the Gaussian function has the property of dimensional separation, the GA-LRBF method is suitable for dealing with isotropic images. Finally, a numerical scheme that couples GA-LRBF with the fourth-order Runge–Kutta method is applied to the C-V model, and a comparison of numerical results demonstrates that this scheme achieves much more reliable image segmentation.
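After spatial discretization, the C-V evolution becomes a semi-discrete system du/dt = f(u), and the classical fourth-order Runge–Kutta step used for the time integration is standard:

```python
def rk4_step(f, u, dt):
    """One classical fourth-order Runge-Kutta step for du/dt = f(u),
    the generic form of the time integrator coupled with a spatial
    discretization such as GA-LRBF."""
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

For the scalar test problem du/dt = u with u(0) = 1, a single step of size 0.1 reproduces e^0.1 to about seven digits, illustrating the fourth-order accuracy.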
●AIM: To investigate a pioneering framework for the segmentation of meibomian glands (MGs) that uses limited annotations to reduce the workload on ophthalmologists and enhance the efficiency of clinical diagnosis.
●METHODS: In total, 203 infrared meibomian images from 138 patients with dry eye disease, accompanied by corresponding annotations, were gathered for the study. A rectified scribble-supervised gland segmentation (RSSGS) model, incorporating temporal ensemble prediction, uncertainty estimation, and a transformation equivariance constraint, was introduced to address the limited supervision inherent in scribble annotations. The viability and efficacy of the proposed model were assessed based on accuracy, intersection over union (IoU), and the Dice coefficient.
●RESULTS: Using manual labels as the gold standard, RSSGS achieved an accuracy of 93.54%, a Dice coefficient of 78.02%, and an IoU of 64.18%. Notably, these performance metrics exceed the current weakly supervised state-of-the-art methods by 0.76%, 2.06%, and 2.69%, respectively. Furthermore, despite achieving a substantial 80% reduction in annotation cost, it lags behind fully annotated methods by only 0.72%, 1.51%, and 2.04%.
●CONCLUSION: An innovative automatic segmentation model is developed for MGs in infrared eyelid images, using scribble annotations for training. This model maintains an exceptionally high level of segmentation accuracy while substantially reducing training costs. It is highly useful for calculating clinical parameters, thereby greatly enhancing the diagnostic efficiency of ophthalmologists in evaluating meibomian gland dysfunction.
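The three reported metrics follow directly from the confusion counts of two binary masks; a compact reference implementation:

```python
def mask_metrics(pred, gt):
    """Pixel accuracy, IoU, and Dice for two binary masks given as flat
    sequences of 0/1 values of equal length."""
    tp = sum(1 for p, g in zip(pred, gt) if p and g)
    fp = sum(1 for p, g in zip(pred, gt) if p and not g)
    fn = sum(1 for p, g in zip(pred, gt) if not p and g)
    tn = len(pred) - tp - fp - fn
    acc = (tp + tn) / len(pred)
    iou = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return acc, iou, dice
```

Note that Dice is always at least as large as IoU for the same masks (Dice = 2·IoU / (1 + IoU)), which matches the ordering of the reported 78.02% Dice and 64.18% IoU.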
Robust, accurate, and fast monitoring of residual plastic film (RPF) pollution in farmlands is of great significance. Based on CBAM-DBNet, this study proposes a threshold-adaptive joint framework for identifying RPF on farmland surfaces and estimating its coverage rate. UAV imaging was used to gather images of RPF from several locations with various soil backgrounds. RPFs were manually labeled, and the degree of RPF pollution was defined based on the RPF coverage rate. The differentiable binarization network (DBNet) was combined with the convolutional block attention module (CBAM) to improve its feature extraction module, and a dynamic adaptive binarization threshold formula was defined for segmenting the RPF's approximate binary map. In the RPF image detection branch, CBAM-DBNet achieved a precision (P) of 85.81%, a recall (R) of 82.69%, and an F1-score (F1) of 84.22%, which is 1.09 percentage points higher than DBNet on the comprehensive F1 index. In the RPF image segmentation branch, CBAM-DBNet was used to segment the RPF image in combination with the adaptive binarization threshold formula. The mean absolute percentage error (MAPE), root mean square error (RMSE), and mean absolute error (MAE) of the predicted RPF coverage rate were 0.276, 0.366, and 0.605, respectively, outperforming DBNet and the iterative threshold method. This study provides a theoretical reference for the further development of UAV-imaging-based evaluation technology for RPF pollution.
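The three regression errors used to score the coverage-rate prediction are standard; for completeness (MAPE is returned here as a fraction rather than a percentage):

```python
import math

def error_metrics(pred, true):
    """MAE, RMSE, and MAPE between predicted and observed values.

    MAPE is returned as a fraction; `true` must contain no zeros.
    """
    n = len(pred)
    mae = sum(abs(p - t) for p, t in zip(pred, true)) / n
    rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / n)
    mape = sum(abs(p - t) / abs(t) for p, t in zip(pred, true)) / n
    return mae, rmse, mape
```

RMSE penalizes large outliers more heavily than MAE, so reporting all three gives a rounder picture of how the coverage-rate estimate degrades on difficult soil backgrounds.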
In this paper, we design an efficient, multi-stage image segmentation framework that incorporates a weighted difference of anisotropic and isotropic total variation (AITV). The segmentation framework generally consists of two stages, smoothing and thresholding, and is thus referred to as smoothing-and-thresholding (SaT). In the first stage, a smoothed image is obtained from an AITV-regularized Mumford-Shah (MS) model, which can be solved efficiently by the alternating direction method of multipliers (ADMM) with a closed-form solution of the proximal operator of the l_1 - αl_2 regularizer. The convergence of the ADMM algorithm is analyzed. In the second stage, we threshold the smoothed image by K-means clustering to obtain the final segmentation result. Numerical experiments demonstrate that the proposed segmentation framework is versatile for both grayscale and color images, efficient in producing high-quality segmentation results within a few seconds, and robust to input images that are corrupted by noise, blur, or both. We compare the AITV method with its original convex TV and nonconvex TV^p (0 < p < 1) counterparts, showcasing the qualitative and quantitative advantages of our proposed method.
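The second (thresholding) stage is plain K-means on the smoothed intensities. A deterministic 1-D sketch (the initialization here is simplified relative to a production K-means, and the paper's own clustering details may differ):

```python
import numpy as np

def kmeans_threshold(smoothed, k=2, iters=50):
    """Stage two of an SaT pipeline: cluster the smoothed intensities with
    K-means and return the per-pixel cluster labels as the segmentation."""
    x = smoothed.ravel().astype(float)
    centers = np.linspace(x.min(), x.max(), k)  # simple deterministic init
    for _ in range(iters):
        # assign each pixel to its nearest center, then recompute centers
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels.reshape(smoothed.shape)
```

Because the first stage has already flattened each region toward a constant intensity, even this simple clustering recovers clean segment boundaries.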
Deep convolutional neural networks (CNNs) have greatly promoted the automatic segmentation of medical images. However, due to the inherent properties of convolution operations, CNNs usually cannot establish long-distance dependencies, which limits segmentation performance. The Transformer has been successfully applied to various computer vision tasks, using the self-attention mechanism to model long-distance interactions and thereby capture global information. However, self-attention lacks spatial localization and high-performance computation. To solve these problems, we develop a new medical Transformer with a multi-scale context fusion function for medical image segmentation. The proposed model combines convolution operations and attention mechanisms to form a u-shaped framework that captures both local and global information. First, the traditional Transformer module is improved into an advanced one that uses post-layer normalization to obtain mild activation values, and scaled cosine attention with a moving window to obtain accurate spatial information. Second, we introduce a deep supervision strategy to guide the model in fusing multi-scale feature information, enabling it to effectively propagate feature information across layers and achieve better segmentation performance while being more robust and efficient. The proposed model is evaluated on multiple medical image segmentation datasets. Experimental results demonstrate that it achieves better performance on a challenging dataset (ETIS) than existing methods that rely only on convolutional neural networks, Transformers, or a combination of both: the mDice and mIoU indicators increased by 2.74% and 3.3%, respectively.
BACKGROUND Small intestinal vascular malformations (angiodysplasias) are common causes of small intestinal bleeding. While capsule endoscopy has become the primary diagnostic method for angiodysplasia, manually reading the entire gastrointestinal tract is time-consuming, requires a heavy workload, and affects diagnostic accuracy.
AIM To evaluate whether artificial intelligence can assist diagnosis and increase the detection rate of angiodysplasias in the small intestine, achieve automatic disease detection, and shorten the capsule endoscopy (CE) reading time.
METHODS A convolutional neural network semantic segmentation model with a feature fusion method was proposed, which automatically recognizes the category of vascular dysplasia under CE and draws the lesion contour, thus improving the efficiency and accuracy of identifying small intestinal vascular malformation lesions. ResNet-50 was used as the backbone network to design the fusion mechanism, fuse shallow and deep features, and classify images at the pixel level to achieve the segmentation and recognition of vascular dysplasia. A training set and a test set were constructed, and the model was compared with PSPNet, DeepLabv3+, and UPerNet.
RESULTS The test set constructed in the study achieved satisfactory results: pixel accuracy was 99%, mean intersection over union was 0.69, negative predictive value was 98.74%, and positive predictive value was 94.27%. The model had 46.38 M parameters and 467.2 G floating-point operations, and segmenting and recognizing a picture took 0.6 s.
CONCLUSION Constructing a deep learning-based segmentation network to segment and recognize angiodysplasia lesions is an effective and feasible method for diagnosing these lesions.
Abstract: Image segmentation is crucial for various research areas. Many computer vision applications depend on segmenting images to understand the scene, such as autonomous driving, surveillance systems, robotics, and medical imaging. With the recent advances in deep learning (DL) and its confounding results in image segmentation, more attention has been drawn to its use in medical image segmentation. This article introduces a survey of the state-of-the-art deep convolution neural network (CNN) models and mechanisms utilized in image segmentation. First, segmentation models are categorized based on their model architecture and primary working principle. Then, CNN categories are described, and various models are discussed within each category. Compared with other existing surveys, several applications with multiple architectural adaptations are discussed within each category. A comparative summary is included to give the reader insights into utilized architectures in different applications and datasets. This study focuses on medical image segmentation applications, where the most widely used architectures are illustrated, and other promising models are suggested that have proven their success in different domains. Finally, the present work discusses current limitations and solutions along with future trends in the field.
Funding: financially supported by the National Key Research and Development Program (Grant No. 2022YFE0107000), the General Projects of the National Natural Science Foundation of China (Grant No. 52171259), and the High-Tech Ship Research Project of the Ministry of Industry and Information Technology (Grant No. [2021]342).
Abstract: Identification of the ice channel is a basic technology for developing intelligent ships in ice-covered waters and is important for ensuring the safety and economy of navigation. In the Arctic, merchant ships with low ice class often navigate in channels opened up by icebreakers. Navigation in the ice channel depends to a large extent on the captain's maneuvering skill and experience, and the ship may get stuck if steered into ice fields off the channel. Under this circumstance, it is very important to study how to identify the boundary lines of ice channels with a reliable method. In this paper, a two-stage ice channel identification method is developed based on image segmentation and corner point regression. The first stage employs an image segmentation method to extract channel regions. In the second stage, an intelligent corner regression network is proposed to extract the channel boundary lines from the channel region. A non-intelligent angle-based filtering and clustering method is also proposed and compared with the corner point regression network. The training and evaluation of the segmentation method and corner regression network are carried out on synthetic and real ice channel datasets. The evaluation results show that the method using the corner point regression network in the second stage achieves an accuracy of 73.33% on the synthetic ice channel dataset and 70.66% on the real ice channel dataset, and the processing speed can reach 14.58 frames per second.
Funding: supported in part by collaborative research with Toyota Motor Corporation, in part by ROIS NII Open Collaborative Research under Grant 21S0601, and in part by JSPS KAKENHI under Grants 20H00592 and 21H03424.
Abstract: With the rapid development of artificial intelligence and the widespread use of the Internet of Things, semantic communication, as an emerging communication paradigm, has been attracting great interest. Taking image transmission as an example, from the semantic communication's view, not all pixels in the images are equally important for certain receivers. The existing semantic communication systems directly perform semantic encoding and decoding on the whole image, in which the region of interest cannot be identified. In this paper, we propose a novel semantic communication system for image transmission that can distinguish between Regions Of Interest (ROI) and Regions Of Non-Interest (RONI) based on semantic segmentation, where a semantic segmentation algorithm is used to classify each pixel of the image and distinguish ROI and RONI. The system also enables high-quality transmission of ROI with lower communication overhead by transmitting through different semantic communication networks with different bandwidth requirements. An improved metric, θPSNR, is proposed to evaluate the transmission accuracy of the novel semantic transmission network. Experimental results show that our proposed system achieves a significant performance improvement compared with existing approaches, namely, existing semantic communication approaches and the conventional approach without semantics.
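The θPSNR metric above builds on the standard PSNR, restricted to the region of interest. The sketch below is our own illustration, not the paper's definition: the `psnr` helper and the boolean ROI-mask handling are assumptions.

```python
import numpy as np

def psnr(ref, rec, mask=None, max_val=255.0):
    """Standard PSNR in dB; if a boolean ROI mask is given, the mean
    squared error is accumulated over the masked pixels only (the idea
    behind a region-weighted metric such as thetaPSNR)."""
    ref = ref.astype(np.float64)
    rec = rec.astype(np.float64)
    if mask is not None:
        ref, rec = ref[mask], rec[mask]
    mse = np.mean((ref - rec) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Evaluating `psnr` once on the ROI and once on the RONI makes the unequal importance of the two regions explicit.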
Funding: This work is supported by the Natural Science Foundation of Anhui under Grants 1908085MF207, KJ2020A1215, KJ2021A1251, and 2023AH052856; the Excellent Youth Talent Support Foundation of Anhui under Grant gxyqZD2021142; and the Quality Engineering Project of Anhui under Grants 2021jyxm1117, 2021kcszsfkc307, 2022xsxx158, and 2022jcbs043.
Abstract: To enhance the diversity and distribution uniformity of the initial population, as well as to avoid local extrema, this paper improves the Chimp Optimization Algorithm (CHOA) based on chaos initialization and Cauchy mutation. First, Sin chaos is introduced to improve the random population initialization scheme of the CHOA, which not only guarantees the diversity of the population but also enhances the distribution uniformity of the initial population. Next, Cauchy mutation is added to improve the global search ability of the CHOA in the process of position (threshold) updating, to avoid the CHOA falling into local optima. Finally, an improved CHOA is formed through the combination of chaos initialization and Cauchy mutation (CICMCHOA). Taking fuzzy Kapur entropy as the objective function, this paper applies CICMCHOA to natural and medical image segmentation and compares it with four algorithms, including the improved Satin Bowerbird optimizer (ISBO) and improved Cuckoo Search (ICS). The experimental results, both visual and in terms of quantitative indicators, demonstrate that CICMCHOA delivers superior segmentation effects in image segmentation.
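The two ingredients of CICMCHOA can be sketched in isolation. Below, `sine_chaos_init` uses one common form of the sine chaotic map and `cauchy_mutate` adds heavy-tailed jumps to a candidate position; both are illustrative assumptions, since the paper's exact map and parameters are not given here.

```python
import numpy as np

def sine_chaos_init(pop_size, dim, lb, ub, mu=4.0, seed=0):
    """Population initialization via the sine chaotic map
    x_{k+1} = (mu/4) * sin(pi * x_k) (one common form; the paper's
    exact map may differ), with chaotic values mapped into [lb, ub]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.05, 0.95, dim)  # avoid the fixed points 0 and 1
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = mu / 4.0 * np.sin(np.pi * x)   # chaotic iteration in [0, 1]
        pop[i] = lb + x * (ub - lb)        # scale to the search bounds
    return pop

def cauchy_mutate(position, lb, ub, rng):
    """Cauchy mutation of a candidate position: heavy-tailed jumps
    help the search escape local optima; results are clipped to bounds."""
    mutated = position + rng.standard_cauchy(position.shape) * position
    return np.clip(mutated, lb, ub)
```

For gray-level thresholding, `lb` and `ub` would be 0 and 255, and the fitness of each row of `pop` would be the fuzzy Kapur objective.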
Funding: supported by the UC-National Lab In-Residence Graduate Fellowship Grant L21GF3606; a DOD National Defense Science and Engineering Graduate (NDSEG) Research Fellowship; the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project numbers 20170668PRD1 and 20210213ER; and the NGA under Contract No. HM04762110003.
Abstract: Graph learning, when used as a semi-supervised learning (SSL) method, performs well for classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local maximum constraint for the active learning acquisition function that determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739) while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to further increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improvements in accuracy, our approach can greatly reduce the number of labeled pixels needed to achieve the same accuracy obtained with randomly selected labeled pixels.
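The graph local maximum constraint described above admits a simple sketch: a batch consists of unlabeled nodes whose acquisition value is maximal over their graph neighborhood, so selected points are spread out rather than clustered. The function below is a hypothetical illustration of that constraint, not the authors' pipeline.

```python
import numpy as np

def local_max_batch(adjacency, acquisition):
    """Return indices of nodes whose acquisition value is a local
    maximum over their graph neighborhood (batch selection sketch).
    `adjacency` is a boolean matrix; `acquisition` a 1-D score array."""
    batch = []
    for i in range(len(acquisition)):
        nbrs = np.flatnonzero(adjacency[i])
        # an isolated node is trivially a local maximum
        if len(nbrs) == 0 or acquisition[i] >= acquisition[nbrs].max():
            batch.append(i)
    return batch
```

In a full pipeline the acquisition scores would come from the graph SSL classifier (e.g., a Model Change criterion), and the batch would be sent to an oracle for labeling.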
Funding: supported by the Korea Institute for Advancement of Technology (KIAT): P0017123, the Competency Development Program for Industry Specialist.
Abstract: The growing demand for energy-efficient solutions has led to increased interest in analyzing building facades, as buildings contribute significantly to energy consumption in urban environments. However, conventional image segmentation methods often struggle to capture fine details such as edges and contours, limiting their effectiveness in identifying areas prone to energy loss. To address this challenge, we propose a novel segmentation methodology that combines object-wise processing with a two-stage deep learning model, Cascade U-Net. Object-wise processing isolates components of the facade, such as walls and windows, for independent analysis, while Cascade U-Net incorporates contour information to enhance segmentation accuracy. The methodology involves four steps: object isolation, which crops and adjusts the image based on bounding boxes; contour extraction, which derives contours; image segmentation, which modifies and reuses contours as guide data in Cascade U-Net to segment areas; and segmentation synthesis, which integrates the results obtained for each object to produce the final segmentation map. Applied to a dataset of Korean building images, the proposed method significantly outperformed traditional models, demonstrating improved accuracy and the ability to preserve critical structural details. Furthermore, we applied this approach to classify window thermal loss in real-world scenarios using infrared images, showing its potential to identify windows vulnerable to energy loss. Notably, our Cascade U-Net, which builds upon the relatively lightweight U-Net architecture, also exhibited strong performance, reinforcing the practical value of this method. Our approach offers a practical solution for enhancing energy efficiency in buildings by providing more precise segmentation results.
Funding: supported in part by the Tianjin Technology Innovation Guidance Special Fund Project under Grant No. 21YDTPJC00850, in part by the National Natural Science Foundation of China under Grant No. 41906161, and in part by the Natural Science Foundation of Tianjin under Grant No. 21JCQNJC00650.
Abstract: With the development of underwater sonar detection technology, the simultaneous localization and mapping (SLAM) approach has attracted much attention in the underwater navigation field in recent years. However, the weak detection ability of a single vehicle limits SLAM performance over wide areas, so cooperative SLAM using multiple vehicles has become an important research direction. The key factor in cooperative SLAM is timely and efficient sonar image transmission among underwater vehicles. However, the limited bandwidth of underwater acoustic channels conflicts with the large volume of sonar image data, making it essential to compress the images before transmission. Recently, deep neural networks have shown great value in image compression by virtue of their powerful learning ability, but existing neural-network-based sonar image compression methods usually focus on pixel-level information without semantic-level information. In this paper, we propose a novel underwater acoustic transmission scheme called UAT-SSIC, which includes a semantic segmentation-based sonar image compression (SSIC) framework and a joint source-channel codec, to improve the accuracy of the semantic information of the reconstructed sonar image at the receiver. The SSIC framework consists of an Auto-Encoder-based sonar image compression network, which is measured by a semantic segmentation network's residual. Considering that sonar images have blurred target edges, the semantic segmentation network uses a special dilated convolution neural network (DiCNN) to enhance segmentation accuracy by expanding the range of receptive fields. A joint source-channel codec with unequal error protection is proposed that adjusts the power level of the transmitted data to deal with sonar image transmission errors caused by the harsh underwater acoustic channel. Experimental results demonstrate that our method preserves more semantic information, with advantages over existing methods at the same compression ratio. It also improves the error tolerance and packet loss resistance of transmission.
Funding: National Natural Science Foundation of China, Grant/Award Numbers: 62377026, 62201222; Knowledge Innovation Program of Wuhan-Shuguang Project, Grant/Award Number: 2023010201020382; National Key Research and Development Programme of China, Grant/Award Number: 2022YFD1700204; Fundamental Research Funds for the Central Universities, Grant/Award Numbers: CCNU22QN014, CCNU22JC007, CCNU22XJ034.
Abstract: Subarachnoid haemorrhage (SAH), mostly caused by the rupture of an intracranial aneurysm, is a common disease with a high fatality rate. SAH lesions are generally diffusely distributed, showing a variety of scales with irregular edges. The complex characteristics of the lesions make SAH segmentation a challenging task. To cope with these difficulties, a u-shaped deformable transformer (UDT) is proposed for SAH segmentation. Specifically, first, a multi-scale deformable attention (MSDA) module is exploited to model the diffuseness and scale-variant characteristics of SAH lesions; the MSDA module can fuse features at different scales and adjust the attention field of each element dynamically to generate discriminative multi-scale features. Second, the cross deformable attention-based skip connection (CDASC) module is designed to model the irregular edge characteristic of SAH lesions; the CDASC module can utilise the spatial details from encoder features to refine the spatial information of decoder features. Third, the MSDA and CDASC modules are embedded into the backbone Res-UNet to construct the proposed UDT. Extensive experiments are conducted on the self-built SAH-CT dataset and two public medical datasets (GlaS and MoNuSeg). Experimental results show that the presented UDT achieves state-of-the-art performance.
Funding: This work is supported in part by the National Natural Science Foundation of China (Grant Number 61971078), which provided domain expertise and computational power that greatly assisted the activity, and was financially supported by Chongqing Municipal Education Commission Grants for Major Science and Technology Project (Grant Number gzlcx20243175).
Abstract: Semantic segmentation of driving scene images is crucial for autonomous driving. While deep learning technology has significantly improved daytime image semantic segmentation, nighttime images pose challenges due to factors like poor lighting and overexposure, making it difficult to recognize small objects. To address this, we propose an Image Adaptive Enhancement (IAEN) module comprising a parameter predictor (Edip), multiple image processing filters (Mdif), and a Detail Processing Module (DPM). Edip combines image processing filters to predict parameters like exposure and hue, optimizing image quality. We adopt a novel image encoder to enhance parameter prediction accuracy by enabling Edip to handle features at different scales. DPM strengthens overlooked image details, extending the IAEN module's functionality. After the segmentation network, we integrate a Depth Guided Filter (DGF) to refine segmentation outputs. The entire network is trained end-to-end, with segmentation results guiding parameter prediction optimization, promoting self-learning and network improvement. This lightweight and efficient network architecture is particularly suitable for addressing challenges in nighttime image segmentation. Extensive experiments validate significant performance improvements of our approach on the ACDC-night and NightCity datasets.
Funding: Swiss National Science Foundation, Grant/Award Number: SNSF 320030_176052; Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung, Grant/Award Number: 320030_176052.
Abstract: Magnetic resonance (MR) imaging is a widely employed medical imaging technique that produces detailed anatomical images of the human body. The segmentation of MR images plays a crucial role in medical image analysis, as it enables accurate diagnosis, treatment planning, and monitoring of various diseases and conditions. Due to the lack of sufficient medical images, it is challenging to achieve accurate segmentation, especially with deep learning networks. The aim of this work is to study transfer learning from T1-weighted (T1-w) to T2-weighted (T2-w) MR sequences to enhance bone segmentation with minimal required computation resources. Using an excitation-based convolutional neural network, four transfer learning mechanisms are proposed: transfer learning without fine-tuning, open fine-tuning, conservative fine-tuning, and hybrid transfer learning. Moreover, a multi-parametric segmentation model is proposed using T2-w MR as an intensity-based augmentation technique. The novelty of this work emerges in the hybrid transfer learning approach, which overcomes the overfitting issue and preserves the features of both modalities with minimal computation time and resources. The segmentation results are evaluated using 14 clinical 3D brain MR and CT images. The results reveal that hybrid transfer learning is superior for bone segmentation in terms of performance and computation time, with DSCs of 0.5393±0.0007. Although T2-w-based augmentation has no significant impact on the performance of T1-w MR segmentation, it helps in improving T2-w MR segmentation and developing a multi-sequence segmentation model.
Abstract: In the present research, we describe a computer-aided detection (CAD) method aimed at automatic fetal head circumference (HC) measurement in 2D ultrasound images during all trimesters of pregnancy. The HC can be used to determine gestational age and track fetal development. This automated approach is particularly valuable in low-resource settings where access to trained sonographers is limited. The CAD system consists of two steps: first, Haar-like features were extracted from ultrasound images to train a random forest classifier to locate the fetal skull. We then identified the HC using dynamic programming, an elliptical fit, and a Hough transform. The CAD system was trained on 999 images (HC18 challenge data source) and verified on an independent test set of 335 images from all trimesters. The test set was manually annotated by an experienced sonographer and a medical expert. We used the crown-rump length (CRL) measurement to calculate the reference gestational age (GA). In the first, second, and third trimesters, the median difference between the reference GA and the GA calculated by the experienced sonographer was 0.7±2.7, 0.0±4.5, and 2.0±12.0 days, respectively; for the medical expert, the corresponding differences were 1.5±3.0, 1.9±5.0, and 4.0±14.0 days. The mean difference between the reference GA and the CAD system's GA ranged from 0.5 to 5.0 days, with variations of 2.9 to 12.5 days. The outcomes reveal that the CAD system outperforms an expert sonographer. Compared with the results reported in the literature, the presented system achieves comparable or even better results. We have assessed this computerized approach for HC evaluation, which includes information from all trimesters of gestation.
Abstract: Computed Tomography (CT) is a commonly used technology in Printed Circuit Board (PCB) non-destructive testing, and element segmentation of CT images is a key subsequent step. With the development of deep learning, researchers have begun to exploit the "pre-training and fine-tuning" training process for multi-element segmentation, reducing the time spent on manual annotation. However, existing element segmentation models only focus on overall accuracy at the pixel level, ignoring whether the element connectivity relationships are correctly identified. To this end, this paper proposes a PCB CT image element segmentation model that optimizes the semantic perception of connectivity relationships (OSPC-seg). The overall training process adopts a "pre-training and fine-tuning" scheme. A loss function that optimizes the semantic perception of circuit connectivity relationships (OSPC Loss) is designed to alleviate the class imbalance problem and improve the correct connectivity rate. A correct connectivity rate index (CCR) is also proposed to evaluate the model's connectivity relationship recognition capability. Experiments show that the mIoU and CCR of OSPC-seg on our datasets are 90.1% and 97.0%, improved by 1.5% and 1.6% respectively compared with the baseline model. Visualization results show that the segmentation performance at connection positions is significantly improved, which also demonstrates the effectiveness of OSPC-seg.
Abstract: Deep learning has been extensively applied to medical image segmentation, resulting in significant advancements in the field of deep neural networks for medical image segmentation since the notable success of U-Net in 2015. However, the application of deep learning models to ocular medical image segmentation poses unique challenges, especially compared to other body parts, due to the complexity, small size, and blurriness of such images, coupled with the scarcity of data. This article aims to provide a comprehensive review of medical image segmentation from two perspectives: the development of deep network structures and the application of segmentation in ocular imaging. Initially, the article introduces an overview of medical imaging, data processing, and performance evaluation metrics. Subsequently, it analyzes recent developments in U-Net-based network structures. Finally, for the segmentation of ocular medical images, the application of deep learning is reviewed and categorized by the type of ocular tissue.
Funding: supported by the National Key R&D Program of China (2018AAA0102100); the National Natural Science Foundation of China (No. 62376287); the International Science and Technology Innovation Joint Base of Machine Vision and Medical Image Processing in Hunan Province (2021CB1013); the Key Research and Development Program of Hunan Province (2022SK2054); the Natural Science Foundation of Hunan Province (No. 2022JJ30762, 2023JJ70016); and the 111 Project under Grant (No. B18059).
Abstract: Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, there are limitations in the current integration of CNN and Transformer technology in two key aspects. First, most methods either overlook or fail to fully incorporate the complementary nature of local and global features. Second, the significance of integrating the multi-scale encoder features from the dual-branch network to enhance the decoding features is often disregarded in methods that combine CNNs and Transformers. To address these issues, we present a dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of the Swin Transformer and CNNs to generate complementary global and local features. We then designed the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, the Channel-wise Cross-fusion Transformer (CCT) serves to aggregate multi-scale features, and the Feature Fusion Module (FFM) is employed to effectively aggregate dual-branch prominent feature regions from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) emphasizes the significance of the channel features between the up-sampled features and the features generated by the FCF module to enhance the details of the decoding. Experimental results demonstrate that DCFNet exhibits enhanced segmentation accuracy. Compared to other state-of-the-art (SOTA) methods, our segmentation framework is highly competitive. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial diagnoses of lesion areas in advance.
Funding: sponsored by the Guangdong Basic and Applied Basic Research Foundation under Grant No. 2021A1515110680 and Guangzhou Basic and Applied Basic Research under Grant No. 202102020340.
Abstract: In this paper, we consider the Chan–Vese (C-V) model for image segmentation and obtain its numerical solution accurately and efficiently. For this purpose, we present a local radial basis function method based on a Gaussian kernel (GA-LRBF) for spatial discretization. Compared to the standard radial basis function method, this approach consumes less CPU time and maintains good stability because it uses only a small subset of points in the whole computational domain. Additionally, since the Gaussian function has the property of dimensional separation, the GA-LRBF method is suitable for dealing with isotropic images. Finally, a numerical scheme that couples GA-LRBF with the fourth-order Runge–Kutta method is applied to the C-V model, and a comparison of numerical results demonstrates that this scheme achieves much more reliable image segmentation.
Funding: Supported by the Natural Science Foundation of Fujian Province (No. 2020J011084), the Fujian Province Technology and Economy Integration Service Platform (No. 2023XRH001), and the Fuzhou-Xiamen-Quanzhou National Independent Innovation Demonstration Zone Collaborative Innovation Platform (No. 2022FX5).
Abstract: ●AIM: To investigate a pioneering framework for the segmentation of meibomian glands (MGs), using limited annotations to reduce the workload on ophthalmologists and enhance the efficiency of clinical diagnosis. ●METHODS: In total, 203 infrared meibomian images from 138 patients with dry eye disease, accompanied by corresponding annotations, were gathered for the study. A rectified scribble-supervised gland segmentation (RSSGS) model, incorporating temporal ensemble prediction, uncertainty estimation, and a transformation equivariance constraint, was introduced to address the constraints imposed by the limited supervision information inherent in scribble annotations. The viability and efficacy of the proposed model were assessed based on accuracy, intersection over union (IoU), and Dice coefficient. ●RESULTS: Using manual labels as the gold standard, RSSGS achieved an accuracy of 93.54%, a Dice coefficient of 78.02%, and an IoU of 64.18%. Notably, these performance metrics exceed the current weakly supervised state-of-the-art methods by 0.76%, 2.06%, and 2.69%, respectively. Furthermore, despite achieving a substantial 80% reduction in annotation costs, it lags behind fully annotated methods by only 0.72%, 1.51%, and 2.04%. ●CONCLUSION: An innovative automatic segmentation model is developed for MGs in infrared eyelid images, using scribble annotations for training. This model maintains an exceptionally high level of segmentation accuracy while substantially reducing training costs. It holds substantial utility for calculating clinical parameters, thereby greatly enhancing the diagnostic efficiency of ophthalmologists in evaluating meibomian gland dysfunction.
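The accuracy, IoU, and Dice coefficient used to evaluate RSSGS are standard mask-overlap metrics. A minimal sketch for binary masks (our own helper, not the paper's code):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel accuracy, IoU, and Dice coefficient for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # true positives
    union = np.logical_or(pred, gt).sum()
    acc = (pred == gt).mean()                # fraction of agreeing pixels
    iou = tp / union if union else 1.0
    total = pred.sum() + gt.sum()
    dice = 2 * tp / total if total else 1.0
    return acc, iou, dice
```

Note that Dice = 2·IoU/(1+IoU), which is why the two scores always rank methods identically on a single mask pair.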
Funding: supported by the National Natural Science Foundation of China (Grant No. 32060288), the National Natural Science Foundation of China (Grant No. 32160300), the Bingtuan Science and Technology Program (Grant No. 2019AB007), and the Science and Technology Planning Project of the First Division of Alaer City (Grant No. 2022XX06).
Abstract: Robust, accurate, and fast monitoring of residual plastic film (RPF) pollution in farmlands is of great significance. Based on CBAM-DBNet, this study proposes a threshold-adaptive joint framework for identifying RPF on farmland surfaces and estimating its coverage rate. UAV imaging was used to gather images of RPF from several locations with various soil backgrounds. RPFs were manually labeled, and the degree of RPF pollution was defined based on the RPF coverage rate. The differentiable binarization network (DBNet) was combined with the convolutional block attention module (CBAM), whose feature extraction module was improved, and a dynamic adaptive binarization threshold formula was defined for segmenting the RPF's approximate binary map. For the RPF image detection branch, CBAM-DBNet exhibited a precision (P) of 85.81%, a recall (R) of 82.69%, and an F1-score (F1) of 84.22%, which is 1.09 percentage points higher than DBNet on the comprehensive F1 index. For the RPF image segmentation branch, CBAM-DBNet was used to segment the RPF image combined with the adaptive binarization threshold formula. The mean absolute percentage error (MAPE), root mean square error (RMSE), and mean absolute error (MAE) of the predicted RPF coverage rate were 0.276, 0.366, and 0.605, respectively, outperforming DBNet and the iterative threshold method. This study provides a theoretical reference for the further development of evaluation technology for RPF pollution based on UAV imaging.
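The MAE, RMSE, and MAPE used to score the predicted coverage rate are standard regression error metrics. A minimal sketch (argument names are ours; MAPE requires nonzero reference values):

```python
import numpy as np

def coverage_errors(y_true, y_pred):
    """MAE, RMSE, and MAPE between predicted and reference coverage rates."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))             # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))      # root mean square error
    mape = np.mean(np.abs(err / y_true))   # mean absolute percentage error
    return mae, rmse, mape
```

MAPE is scale-free while MAE and RMSE are in the units of the coverage rate, which is why all three are reported together.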
Funding: partially supported by the NSF grants DMS-1854434, DMS-1952644, DMS-2151235, DMS-2219904, and CAREER 1846690.
Abstract: In this paper, we design an efficient, multi-stage image segmentation framework that incorporates a weighted difference of anisotropic and isotropic total variation (AITV). The segmentation framework generally consists of two stages, smoothing and thresholding, and is thus referred to as smoothing-and-thresholding (SaT). In the first stage, a smoothed image is obtained by an AITV-regularized Mumford-Shah (MS) model, which can be solved efficiently by the alternating direction method of multipliers (ADMM) with a closed-form solution of a proximal operator of the l1-αl2 regularizer. The convergence of the ADMM algorithm is analyzed. In the second stage, we threshold the smoothed image by K-means clustering to obtain the final segmentation result. Numerical experiments demonstrate that the proposed segmentation framework is versatile for both grayscale and color images, efficient in producing high-quality segmentation results within a few seconds, and robust to input images that are corrupted with noise, blur, or both. We compare the AITV method with its original convex TV and nonconvex TV^p (0&lt;p&lt;1) counterparts, showcasing the qualitative and quantitative advantages of our proposed method.
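The second (thresholding) stage of the SaT framework clusters the pixel intensities of the smoothed image. Below is a tiny 1-D K-means sketch of that stage (our own illustration; the paper's implementation and initialization may differ):

```python
import numpy as np

def kmeans_threshold(smoothed, k=2, iters=50, seed=0):
    """Cluster pixel intensities of a smoothed image with a tiny 1-D
    k-means and return per-pixel cluster labels plus cluster centers.
    This plays the role of the thresholding stage in an SaT pipeline."""
    vals = smoothed.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(seed)
    # initialize centers from distinct observed intensities
    centers = rng.choice(vals.ravel(), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(vals - centers), axis=1)
        new = np.array([vals[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels.reshape(smoothed.shape), centers
```

With k=2 this reduces to a data-driven binary threshold; larger k yields a multi-phase segmentation of the smoothed image.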
Abstract: Deep convolutional neural networks (CNNs) have greatly promoted the automatic segmentation of medical images. However, due to the inherent properties of convolution operations, CNNs usually cannot establish long-distance dependencies, which limits segmentation performance. The Transformer has been successfully applied to various computer vision tasks, using the self-attention mechanism to model long-distance interaction and capture global information. However, self-attention lacks spatial location awareness and high-performance computation. To solve these problems, we develop a new medical transformer with a multi-scale context fusion function for medical image segmentation. The proposed model combines convolution operations and attention mechanisms to form a u-shaped framework that captures both local and global information. First, the traditional transformer module is improved to an advanced transformer module, which uses post-layer normalization to obtain mild activation values and scaled cosine attention with a moving window to obtain accurate spatial information. Second, we introduce a deep supervision strategy to guide the model to fuse multi-scale feature information, which further enables the proposed model to propagate feature information effectively across layers. Thanks to this, it achieves better segmentation performance while being more robust and efficient. The proposed model is evaluated on multiple medical image segmentation datasets. Experimental results demonstrate that it achieves better performance on a challenging dataset (ETIS) compared to existing methods that rely only on convolutional neural networks, Transformers, or a combination of both: the mDice and mIoU indicators increased by 2.74% and 3.3%, respectively.
Funding: Chongqing Technological Innovation and Application Development Project, "Key Technologies and Applications of Cross Media Analysis and Reasoning", No. cstc2019jscx-zdztzxX0037.
Abstract: BACKGROUND: Small intestinal vascular malformations (angiodysplasias) are common causes of small intestinal bleeding. While capsule endoscopy has become the primary diagnostic method for angiodysplasia, manually reading the entire gastrointestinal tract is time-consuming and imposes a heavy workload, which affects diagnostic accuracy. AIM: To evaluate whether artificial intelligence can assist diagnosis and increase the detection rate of angiodysplasias in the small intestine, achieve automatic disease detection, and shorten the capsule endoscopy (CE) reading time. METHODS: A convolutional neural network semantic segmentation model with a feature fusion method was proposed; it automatically recognizes the category of vascular dysplasia under CE and draws the lesion contour, thus improving the efficiency and accuracy of identifying small intestinal vascular malformation lesions. ResNet-50 was used as the backbone network to design the fusion mechanism, fuse shallow and deep features, and classify images at the pixel level to achieve segmentation and recognition of vascular dysplasia. A training set and a test set were constructed, and the model was compared with PSPNet, DeepLabv3+, and UPerNet. RESULTS: The model achieved satisfactory results on the constructed test set: pixel accuracy was 99%, mean intersection over union was 0.69, negative predictive value was 98.74%, and positive predictive value was 94.27%. The model had 46.38M parameters and 467.2 GFLOPs, and segmenting and recognizing one image took 0.6 s. CONCLUSION: Constructing a deep-learning-based segmentation network to segment and recognize angiodysplasia lesions is an effective and feasible diagnostic method.
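The pixel-level figures quoted in the RESULTS section (pixel accuracy, intersection over union, positive and negative predictive value) all derive from the confusion-matrix counts of a predicted mask against a ground-truth mask. A self-contained sketch for the binary case, with a hypothetical function name and toy masks of our own:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Binary segmentation metrics from confusion-matrix counts:
    pixel accuracy, IoU, PPV (precision), NPV."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)     # lesion pixels correctly predicted
    tn = np.sum(~pred & ~gt)   # background correctly predicted
    fp = np.sum(pred & ~gt)    # background predicted as lesion
    fn = np.sum(~pred & gt)    # lesion pixels missed
    return {
        "pixel_acc": (tp + tn) / pred.size,
        "iou": tp / (tp + fp + fn),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 0]])
m = seg_metrics(pred, gt)
```

The "mean" IoU reported above would average this per-class IoU over all classes (here lesion and background), which is why a 99% pixel accuracy can coexist with a much lower mIoU when lesions occupy few pixels.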
Funding: Supported by the Information Technology Industry Development Agency (ITIDA), Egypt (Project No. CFP181).
Abstract: Image segmentation is crucial for various research areas. Many computer vision applications depend on segmenting images to understand the scene, such as autonomous driving, surveillance systems, robotics, and medical imaging. With the recent advances in deep learning (DL) and its impressive results in image segmentation, more attention has been drawn to its use in medical image segmentation. This article presents a survey of state-of-the-art deep convolutional neural network (CNN) models and mechanisms utilized in image segmentation. First, segmentation models are categorized based on their architecture and primary working principle. Then, CNN categories are described, and various models are discussed within each category. Compared with other existing surveys, several applications with multiple architectural adaptations are discussed within each category. A comparative summary is included to give the reader insight into the architectures used in different applications and datasets. This study focuses on medical image segmentation applications, illustrating the most widely used architectures and suggesting other promising models that have proven successful in different domains. Finally, the present work discusses current limitations and solutions along with future trends in the field.