The intuitive fuzzy set has found important applications in decision-making and machine learning. To enrich and utilize the intuitive fuzzy set, this study designed and developed a deep neural network-based glaucoma eye detection method using fuzzy difference equations in the domain where the retinal images converge. Retinal image detections are categorized as normal eye recognition, suspected glaucomatous eye recognition, and glaucomatous eye recognition. Fuzzy degrees associated with weighted values are calculated to determine the level of concentration between the fuzzy partition and the retinal images. The proposed model was used to diagnose glaucoma using retinal images and involved utilizing a Convolutional Neural Network (CNN) and deep learning to identify the fuzzy weighted regularization between images. This methodology was used to clarify the input images and make them adequate for the process of glaucoma detection. The objective of this study was to propose a novel approach to the early diagnosis of glaucoma using a Fuzzy Expert System (FES) and Fuzzy Differential Equations (FDE). The intensities of the different regions in the images and their respective peak levels were determined. Once the peak regions were identified, the recurrence relationships among those peaks were measured. Image partitioning was performed due to varying degrees of similar and dissimilar concentrations in the image. Similar and dissimilar concentration levels and spatial frequency generated a threshold image from the combined fuzzy matrix and FDE. This distinguished between normal and abnormal eye conditions, thus detecting patients with glaucomatous eyes.
Meta-learning of dental X-rays is a machine learning technique that can be used to train models to perform new tasks quickly and with minimal input. Instead of just memorizing a task, this is accomplished by teaching a model how to learn. Algorithms for meta-learning are typically trained on a collection of training problems, each of which has a limited number of labelled instances. Multiple X-ray classification tasks, including the detection of pneumonia, coronavirus disease 2019, and other disorders, have demonstrated the effectiveness of meta-learning. Meta-learning has the benefit of allowing models to be trained on dental X-ray datasets that are too small for more conventional machine learning methods. Due to the high cost and lengthy collection process associated with dental imaging datasets, this is significant for dental X-ray classification jobs. The ability to train models that are more robust to new inputs is another benefit of meta-learning.
This paper presents a novel computerized technique for the segmentation of nuclei in hematoxylin and eosin (H&E) stained histopathology images. The purpose of this study is to overcome the challenges faced in automated nuclei segmentation due to the diversity of nuclei structures that arise from differences in tissue types and staining protocols, as well as the segmentation of variable-sized and overlapping nuclei. To this end, the approach proposed in this study uses an ensemble of the U-Net architecture with various Convolutional Neural Network (CNN) architectures as encoder backbones, along with stain normalization and test-time augmentation, to improve segmentation accuracy. Additionally, this paper employs a Structure-Preserving Color Normalization (SPCN) technique as a preprocessing step for stain normalization. The proposed model was trained and tested on both single-organ and multi-organ datasets, yielding an F1 score of 84.11%, mean Intersection over Union (IoU) of 81.67%, Dice score of 84.11%, accuracy of 92.58% and precision of 83.78% on the multi-organ dataset, and an F1 score of 87.04%, mean IoU of 86.66%, Dice score of 87.04%, accuracy of 96.69% and precision of 87.57% on the single-organ dataset. These findings demonstrate that the proposed model ensemble, coupled with the right pre-processing and post-processing techniques, enhances nuclei segmentation capabilities.
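As a concrete illustration of this kind of encoder-backbone ensemble with test-time augmentation (not the authors' code), the sketch below averages the predictions of several U-Nets built with different pretrained encoders and applies horizontal-flip test-time augmentation; the encoder names and the segmentation_models_pytorch dependency are assumptions.

```python
# Illustrative sketch: U-Net ensemble over different encoder backbones plus flip TTA.
import torch
import segmentation_models_pytorch as smp  # assumed dependency

encoders = ["resnet34", "efficientnet-b0", "densenet121"]  # hypothetical backbone choices
models = [
    smp.Unet(encoder_name=name, encoder_weights="imagenet", in_channels=3, classes=1).eval()
    for name in encoders
]

@torch.no_grad()
def ensemble_predict(image: torch.Tensor) -> torch.Tensor:
    """image: (N, 3, H, W) stain-normalized batch -> (N, 1, H, W) nuclei probability map."""
    probs = []
    for model in models:
        p = torch.sigmoid(model(image))                               # plain prediction
        p_flip = torch.sigmoid(model(torch.flip(image, dims=[-1])))   # flipped prediction
        probs.append((p + torch.flip(p_flip, dims=[-1])) / 2)         # average the two views
    return torch.stack(probs).mean(dim=0)                             # average over the ensemble
```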
Liver cancer is one of the major diseases with increased mortality in recent years across the globe. Manual detection of liver cancer is a tedious and laborious task, due to which Computer Aided Diagnosis (CAD) models have been developed to detect the presence of liver cancer accurately and classify its stages. Besides, the liver cancer segmentation outcome, obtained using medical images, is employed in the assessment of tumor volume, further treatment plans, and response monitoring. Hence, there is a need to develop automated tools for liver cancer detection in a precise manner. With this motivation, the current study introduces an Intelligent Artificial Intelligence with Equilibrium Optimizer based Liver cancer Classification (IAIEO-LCC) model. The proposed IAIEO-LCC technique initially performs Median Filtering (MF)-based pre-processing and a data augmentation process. Besides, Kapur's entropy-based segmentation technique is used to identify the affected regions in the liver. Moreover, a VGG-19 based feature extractor and Equilibrium Optimizer (EO)-based hyperparameter tuning processes are also involved to derive the feature vectors. At last, a Stacked Gated Recurrent Unit (SGRU) classifier is exploited to detect and classify liver cancer effectively. In order to demonstrate the superiority of the proposed IAIEO-LCC technique in terms of performance, a wide range of simulations was conducted and the results were inspected under different measures. The comparison study results infer that the proposed IAIEO-LCC technique achieved an improved accuracy of 98.52%.
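Kapur's entropy-based thresholding, named above as the segmentation step, selects the gray level that maximizes the summed entropies of the two histogram classes it induces. A minimal single-threshold sketch follows; the study's exact multi-level configuration and pre-processing are not reproduced.

```python
# Minimal sketch of Kapur's entropy thresholding on an 8-bit image.
import numpy as np

def kapur_threshold(image: np.ndarray) -> int:
    """Return the gray level t (0-255) that maximizes Kapur's entropy criterion."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[: t + 1], p[t + 1 :]
        w0, w1 = p0.sum(), p1.sum()
        if w0 == 0 or w1 == 0:
            continue
        q0, q1 = p0[p0 > 0] / w0, p1[p1 > 0] / w1                     # class-conditional distributions
        score = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()    # H0 + H1
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```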
Segmenting sonar images of jacket installation environments, most of which are noisy images with inevitable blur after noise reduction, has remained a difficult problem for years. To address this problem, a fast segmentation algorithm is proposed on the basis of the gray value characteristics of sonar images. This algorithm has the advantage of not requiring segmentation thresholds. To realize this goal, we follow these steps: first, calculate the gray matrix of the fuzzy image background. After adjusting the gray value, the image is divided into three regions: the background region, buffer region and target regions. After filtering, we reset the pixels with gray value lower than 255 to binarize the images and eliminate most artifacts. Finally, the remaining noise is removed by morphological processing. The simulation results of several sonar images show that the algorithm can segment fuzzy sonar images quickly and effectively, demonstrating that the method is stable and feasible.
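A rough sketch in the spirit of this three-region pipeline is given below; the region boundaries derived from the image gray statistics and the structuring-element size are illustrative assumptions, not the paper's exact rules.

```python
# Hypothetical three-region partition (background / buffer / target), binarization of the
# target region, and morphological cleanup of residual noise.
import cv2
import numpy as np

def segment_sonar(gray: np.ndarray, band: float = 1.5) -> np.ndarray:
    mu, sigma = gray.mean(), gray.std()                      # gray statistics (background proxy)
    low, high = mu + band * sigma, mu + 2 * band * sigma     # hypothetical region boundaries
    region = np.zeros_like(gray, dtype=np.uint8)
    region[(gray > low) & (gray <= high)] = 128              # buffer region
    region[gray > high] = 255                                # target region
    binary = np.where(region == 255, 255, 0).astype(np.uint8)   # keep targets only
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)     # remove remaining noise
```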
Currently, deep learning is widely used in medical image segmentation and has achieved good results. However, 3D medical image segmentation tasks with diverse lesion characteristics, blurred edges, and unstable positions require complex networks with a large number of parameters. This is computationally expensive and results in high equipment requirements, making it hard to deploy the network in hospitals. In this work, we propose a method for network lightweighting and apply it to a 3D CNN-based network. We experimented on a COVID-19 lesion segmentation dataset. Specifically, we use three cascaded one-dimensional convolutions to replace a 3D convolution, and integrate instance normalization with the previous layer of one-dimensional convolutions to accelerate network inference. In addition, we simplify test-time augmentation and deep supervision of the network. Experiments show that the lightweight network can reduce the prediction time of each sample and the memory usage by 50% and reduce the number of parameters by 60% compared with the original network. The training time of one epoch is also reduced by 50%, with the segmentation accuracy dropping only within an acceptable range.
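The factorization described above can be sketched in PyTorch as follows: a k x k x k 3D convolution is replaced by three cascaded one-dimensional convolutions along depth, height and width, each integrated with instance normalization. The activation placement is an assumption.

```python
# Sketch of replacing one 3D convolution with three cascaded 1D convolutions plus instance norm.
import torch.nn as nn

class Cascaded1DConv3d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        p = k // 2
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=(k, 1, 1), padding=(p, 0, 0)),  # along depth
            nn.InstanceNorm3d(out_ch),
            nn.Conv3d(out_ch, out_ch, kernel_size=(1, k, 1), padding=(0, p, 0)),  # along height
            nn.InstanceNorm3d(out_ch),
            nn.Conv3d(out_ch, out_ch, kernel_size=(1, 1, k), padding=(0, 0, p)),  # along width
            nn.InstanceNorm3d(out_ch),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```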
Identification of the ice channel is a basic technology for developing intelligent ships in ice-covered waters, which is important to ensure the safety and economy of navigation. In the Arctic, merchant ships with low ice class often navigate in channels opened up by icebreakers. Navigation in the ice channel depends to a large extent on the captain's maneuvering skills and abundant experience. The ship may get stuck if steered into ice fields off the channel. Under this circumstance, it is very important to study how to identify the boundary lines of ice channels with a reliable method. In this paper, a two-staged ice channel identification method is developed based on image segmentation and corner point regression. The first stage employs the image segmentation method to extract channel regions. In the second stage, an intelligent corner regression network is proposed to extract the channel boundary lines from the channel region. A non-intelligent angle-based filtering and clustering method is proposed and compared with the corner point regression network. The training and evaluation of the segmentation method and corner regression network are carried out on synthetic and real ice channel datasets. The evaluation results show that the accuracy of the method using the corner point regression network in the second stage reaches 73.33% on the synthetic ice channel dataset and 70.66% on the real ice channel dataset, and the processing speed can reach up to 14.58 frames per second.
Watermarks can provide reliable and secure copyright protection for optical coherence tomography (OCT) fundus images. Effective image segmentation is helpful for promoting OCT image watermarking. However, OCT images contain a large amount of low-quality data, which seriously affects the performance of segmentation methods. Therefore, this paper proposes an effective segmentation method for OCT fundus image watermarking using a rough convolutional neural network (RCNN). First, a rough-set-based feature discretization module is designed to preprocess the input data. Second, a dual attention mechanism for feature channels and spatial regions in the CNN is added to enable the model to adaptively select important information for fusion. Finally, a refinement module for enhancing the extraction power of multi-scale information is added to improve the edge accuracy in segmentation. RCNN is compared with CE-Net and MultiResUNet on 83 gold standard 3D retinal OCT data samples. The average Dice similarity coefficient (DSC) obtained by RCNN is 6% higher than that of CE-Net. The average 95th percentile Hausdorff distance (95HD) and average symmetric surface distance (ASD) obtained by RCNN are 32.4% and 33.3% lower than those of MultiResUNet, respectively. We also evaluate the effect of feature discretization, analyze the initial learning rate of RCNN, and conduct ablation experiments with four different models. The experimental results indicate that our method can improve the segmentation accuracy of OCT fundus images, providing strong support for its application in medical image watermarking.
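One common way to realize a dual attention mechanism over feature channels and spatial regions is sketched below (CBAM-style); it is an illustrative stand-in rather than the exact attention blocks used in RCNN.

```python
# Channel attention followed by spatial attention, applied as multiplicative reweighting.
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_mlp = nn.Sequential(                 # channel attention branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(                # spatial attention branch
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)                        # reweight channels
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * self.spatial_conv(pooled)               # reweight spatial positions
```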
This research focuses on addressing the challenges associated with image detection in low-light environments, particularly by applying artificial intelligence techniques to machine vision and object recognition systems. The primary goal is to tackle issues related to recognizing objects with low brightness levels. In this study, the Intel RealSense Lidar Camera L515 is used to simultaneously capture color information and 16-bit depth information images. The detection scenarios are categorized into normal brightness and low brightness situations. When the system determines a normal brightness environment, normal brightness images are recognized using deep learning methods. In low-brightness situations, three methods are proposed for recognition. The first method is the Segmentation with Depth image (SD) method, which involves segmenting the depth image, creating a mask from the segmented depth image, mapping the obtained mask onto the true color (RGB) image to obtain a background-reduced RGB image, and recognizing the segmented image. The second method is the HDV method (hue, depth, value), which combines RGB images converted to HSV images (hue, saturation, value) with depth images D to form HDV images for recognition. The third method is the HSD (hue, saturation, depth) method, which similarly combines RGB images converted to HSV images with depth images D to form HSD images for recognition. In the experimental results, in normal brightness environments, the average recognition rate obtained using image recognition methods is 91%. For low-brightness environments, using the SD method with original images for training and segmented images for recognition achieves an average recognition rate of over 82%. The HDV method achieves an average recognition rate of over 70%, while the HSD method achieves an average recognition rate of over 84%. The HSD method allows for a quick and convenient low-light object recognition system. This research outcome can be applied to nighttime surveillance systems or nighttime road safety systems.
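The HSD construction described above can be sketched as follows: the hue and saturation planes of the color frame are kept and the value plane is replaced by the depth map. Scaling the 16-bit depth to 8 bits, and the assumption that the depth frame is already aligned to the color frame, are illustrative choices rather than the paper's exact procedure.

```python
# Assemble an HSD image from an aligned RGB (BGR) frame and a 16-bit depth frame.
import cv2
import numpy as np

def make_hsd(bgr: np.ndarray, depth16: np.ndarray) -> np.ndarray:
    """bgr: 8-bit color frame; depth16: 16-bit depth frame aligned to the color frame."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    depth8 = cv2.normalize(depth16, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    h, s, _ = cv2.split(hsv)
    return cv2.merge([h, s, depth8])   # hue, saturation, depth
```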
Cloud detection from satellite and drone imagery is crucial for applications such as weather forecasting and environmental monitoring. Addressing the limitations of conventional convolutional neural networks, we propose an innovative transformer-based method. This method leverages transformers, which are adept at processing data sequences, to enhance cloud detection accuracy. Additionally, we introduce a Cyclic Refinement Architecture that improves the resolution and quality of feature extraction, thereby aiding in the retention of critical details often lost during cloud detection. Our extensive experimental validation shows that our approach significantly outperforms established models, excelling in high-resolution feature extraction and precise cloud segmentation. By integrating Positional Visual Transformers (PVT) with this architecture, our method advances high-resolution feature delineation and segmentation accuracy. Ultimately, our research offers a novel perspective for surmounting traditional challenges in cloud detection and contributes to the advancement of precise and dependable image analysis across various domains.
To enhance the diversity and distribution uniformity of the initial population, as well as to avoid local extrema in the Chimp Optimization Algorithm (CHOA), this paper improves the CHOA based on chaos initialization and Cauchy mutation. First, Sin chaos is introduced to improve the random population initialization scheme of the CHOA, which not only guarantees the diversity of the population, but also enhances the distribution uniformity of the initial population. Next, Cauchy mutation is added to optimize the global search ability of the CHOA in the process of position (threshold) updating, to avoid the CHOA falling into local optima. Finally, an improved CHOA was formed through the combination of chaos initialization and Cauchy mutation (CICMCHOA). Then, taking fuzzy Kapur as the objective function, this paper applied CICMCHOA to natural and medical image segmentation and compared it with four algorithms, including the improved Satin Bowerbird optimizer (ISBO), Cuckoo Search (ICS), etc. The experimental results derived from visual and specific indicators demonstrate that CICMCHOA delivers superior segmentation effects in image segmentation.
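The two ingredients named above can be sketched as follows: one common form of the sine (Sin) chaotic map spreads the initial population over the search space, and a Cauchy mutation step perturbs candidate positions with heavy-tailed noise. The map constants, iteration count, and mutation scale are illustrative assumptions, not the CICMCHOA settings.

```python
# Sin-chaos population initialization and Cauchy mutation helpers.
import numpy as np

def sin_chaos_init(pop_size: int, dim: int, lower: np.ndarray, upper: np.ndarray) -> np.ndarray:
    """Initialize a population by iterating x_{k+1} = sin(pi * x_k) per dimension."""
    x = np.random.uniform(0.1, 0.9, size=(pop_size, dim))
    for _ in range(10):                      # a few chaotic iterations, values stay in (0, 1]
        x = np.sin(np.pi * x)
    return lower + x * (upper - lower)       # map the chaotic values into the search bounds

def cauchy_mutation(position: np.ndarray, scale: float = 0.1) -> np.ndarray:
    """Perturb a position with heavy-tailed Cauchy noise to help escape local optima."""
    return position + scale * np.random.standard_cauchy(size=position.shape)
```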
Since the fully convolutional network has achieved great success in semantic segmentation, many works have been proposed to extract discriminative pixel representations. However, the authors observe that existing methods still suffer from two typical challenges: (i) the intra-class feature variation between different scenes may be large, leading to difficulty in maintaining the consistency between same-class pixels from different scenes; (ii) the inter-class feature distinction in the same scene could be small, resulting in limited performance in distinguishing different classes in each scene. The authors first rethink semantic segmentation from the perspective of similarity between pixels and class centers. Each weight vector of the segmentation head represents its corresponding semantic class in the whole dataset, which can be regarded as the embedding of the class center. Thus, pixel-wise classification amounts to computing similarity in the final feature space between pixels and the class centers. Under this novel view, the authors propose a Class Center Similarity (CCS) layer to address the above-mentioned challenges by generating adaptive class centers conditioned on each scene and supervising the similarities between class centers. The CCS layer utilises an Adaptive Class Center Module to generate class centers conditioned on each scene, which adapts to the large intra-class variation between different scenes. A specially designed Class Distance Loss (CD Loss) is introduced to control both inter-class and intra-class distances based on the predicted center-to-center and pixel-to-center similarity. Finally, the CCS layer outputs the processed pixel-to-center similarity as the segmentation prediction. Extensive experiments demonstrate that our model performs favourably against the state-of-the-art methods.
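The pixel-to-center view described above can be sketched as a cosine similarity between pixel embeddings and class-center embeddings; the adaptive per-scene center generation and the CD Loss are omitted here, so this is only a minimal illustration of the underlying classification rule.

```python
# Per-pixel class scores as cosine similarity between pixel features and class centers.
import torch
import torch.nn.functional as F

def pixel_to_center_similarity(features: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
    """features: (N, C, H, W) pixel embeddings; centers: (K, C) class center embeddings.
    Returns (N, K, H, W) cosine-similarity logits usable as the segmentation prediction."""
    f = F.normalize(features, dim=1)
    c = F.normalize(centers, dim=1)
    return torch.einsum("nchw,kc->nkhw", f, c)
```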
Multimodal lung tumor medical images, such as Positron Emission Computed Tomography (PET), Computed Tomography (CT), and PET-CT, can provide anatomical and functional information for the same lesion. How to utilize the lesion's anatomical and functional information effectively and improve the network segmentation performance are key questions. To solve this problem, the Saliency Feature-Guided Interactive Feature Enhancement Lung Tumor Segmentation Network (Guide-YNet) is proposed in this paper. Firstly, a double-encoder single-decoder U-Net is used as the backbone in this model, a single-encoder single-decoder U-Net is used to generate the saliency guided feature using the PET image and transmit it into the skip connection of the backbone, and the high sensitivity of PET images to tumors is used to guide the network to accurately locate lesions. Secondly, a Cross Scale Feature Enhancement Module (CSFEM) is designed to extract multi-scale fusion features after downsampling. Thirdly, a Cross-Layer Interactive Feature Enhancement Module (CIFEM) is designed in the encoder to enhance the spatial position information and semantic information. Finally, a Cross-Dimension Cross-Layer Feature Enhancement Module (CCFEM) is proposed in the decoder, which effectively extracts multimodal image features through global attention and multi-dimensional local attention. The proposed method is verified on lung multimodal medical image datasets, and the results show that the Mean Intersection over Union (MIoU), Accuracy (Acc), Dice Similarity Coefficient (Dice), Volumetric Overlap Error (Voe), and Relative Volume Difference (Rvd) of the proposed method on lung lesion segmentation are 87.27%, 93.08%, 97.77%, 95.92%, 89.28%, and 88.68%, respectively. It is of great significance for computer-aided diagnosis.
Magnetic resonance (MR) imaging is a widely employed medical imaging technique that produces detailed anatomical images of the human body. The segmentation of MR images plays a crucial role in medical image analysis, as it enables accurate diagnosis, treatment planning, and monitoring of various diseases and conditions. Due to the lack of sufficient medical images, it is challenging to achieve an accurate segmentation, especially with the application of deep learning networks. The aim of this work is to study transfer learning from T1-weighted (T1-w) to T2-weighted (T2-w) MR sequences to enhance bone segmentation with minimal required computation resources. Using an excitation-based convolutional neural network, four transfer learning mechanisms are proposed: transfer learning without fine tuning, open fine tuning, conservative fine tuning, and hybrid transfer learning. Moreover, a multi-parametric segmentation model is proposed using T2-w MR as an intensity-based augmentation technique. The novelty of this work emerges in the hybrid transfer learning approach that overcomes the overfitting issue and preserves the features of both modalities with minimal computation time and resources. The segmentation results are evaluated using 14 clinical 3D brain MR and CT images. The results reveal that hybrid transfer learning is superior for bone segmentation in terms of performance and computation time, with DSCs of 0.5393±0.0007. Although T2-w-based augmentation has no significant impact on the performance of T1-w MR segmentation, it helps in improving T2-w MR segmentation and developing a multi-sequence segmentation model.
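Two of the four mechanisms listed above can be sketched through parameter freezing, contrasting transfer learning without fine tuning (pretrained feature extractor frozen) with open fine tuning (all layers trainable). The `.encoder` attribute is a hypothetical model layout, and the conservative and hybrid variants are not reproduced here.

```python
# Toggle between "no fine tuning" (frozen encoder) and "open fine tuning" (all trainable).
import torch.nn as nn

def configure_transfer(model: nn.Module, mode: str) -> nn.Module:
    """model: a segmentation network pretrained on T1-w data, assumed to expose `.encoder`."""
    for p in model.parameters():
        p.requires_grad = True                 # open fine tuning: everything trainable
    if mode == "no_fine_tuning":
        for p in model.encoder.parameters():   # freeze the pretrained feature extractor
            p.requires_grad = False
    return model
```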
Cancer is one of the leading causes of death in the world, with radiotherapy as one of the treatment options. Radiotherapy planning starts with delineating the affected area from healthy organs, called organs at risk (OAR). A new approach to automatic OAR segmentation in the chest cavity in Computed Tomography (CT) images is presented. The proposed approach is based on the modified U-Net architecture with the ResNet-34 encoder, which is the baseline adopted in this work. A new two-branch CS-SA U-Net architecture is proposed, which consists of two parallel U-Net models in which self-attention blocks with cosine similarity as the query-key similarity function (CS-SA) are inserted between the encoder and decoder, which enabled the use of consistency regularisation. The proposed solution demonstrates state-of-the-art performance for the problem of OAR segmentation in CT images on the publicly available SegTHOR benchmark dataset in terms of Dice coefficient (oesophagus: 0.8714, heart: 0.9516, trachea: 0.9286, aorta: 0.9510) and Hausdorff distance (oesophagus: 0.2541, heart: 0.1514, trachea: 0.1722, aorta: 0.1114), and significantly outperforms the baseline. The current approach is demonstrated to be viable for improving the quality of OAR segmentation for radiotherapy planning.
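A self-attention block that scores queries against keys with cosine similarity rather than a scaled dot product can be sketched as below; the fixed temperature and the residual placement are assumptions, not necessarily the exact CS-SA block design.

```python
# Self-attention with cosine similarity between query and key vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineSimilaritySelfAttention(nn.Module):
    def __init__(self, channels: int, temperature: float = 0.1):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.temperature = temperature

    def forward(self, x):
        n, c, h, w = x.shape
        q = F.normalize(self.q(x).flatten(2), dim=1)   # (N, C, HW), unit norm per position
        k = F.normalize(self.k(x).flatten(2), dim=1)
        v = self.v(x).flatten(2)
        attn = torch.softmax(q.transpose(1, 2) @ k / self.temperature, dim=-1)  # (N, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(n, c, h, w)
        return x + out                                  # residual connection
```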
In this paper, we consider the Chan–Vese (C-V) model for image segmentation and obtain its numerical solution accurately and efficiently. For this purpose, we present a local radial basis function method based on a Gaussian kernel (GA-LRBF) for spatial discretization. Compared to the standard radial basis function method, this approach consumes less CPU time and maintains good stability because it uses only a small subset of points in the whole computational domain. Additionally, since the Gaussian function has the property of dimensional separation, the GA-LRBF method is suitable for dealing with isotropic images. Finally, a numerical scheme that couples GA-LRBF with the fourth-order Runge–Kutta method is applied to the C-V model, and a comparison of some numerical results demonstrates that this scheme achieves much more reliable image segmentation.
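For reference, the standard Chan–Vese formulation that such a scheme discretizes, as it is commonly written, is the energy below together with its gradient-flow PDE, where u_0 is the image, H and δ are the (regularized) Heaviside and Dirac functions, and c_1, c_2 are the mean intensities inside and outside the zero level set of φ; the spatial operators are what GA-LRBF approximates and the time derivative is what the fourth-order Runge–Kutta method integrates.

```latex
\[
\begin{aligned}
E(c_1,c_2,\phi) ={}& \mu \int_\Omega \delta(\phi)\,\lvert\nabla\phi\rvert\,dx
  + \nu \int_\Omega H(\phi)\,dx
  + \lambda_1 \int_\Omega \lvert u_0 - c_1\rvert^2 H(\phi)\,dx
  + \lambda_2 \int_\Omega \lvert u_0 - c_2\rvert^2 \bigl(1 - H(\phi)\bigr)\,dx,\\[4pt]
\frac{\partial\phi}{\partial t} ={}& \delta_\varepsilon(\phi)\left[
  \mu\,\operatorname{div}\!\left(\frac{\nabla\phi}{\lvert\nabla\phi\rvert}\right)
  - \nu - \lambda_1\,(u_0 - c_1)^2 + \lambda_2\,(u_0 - c_2)^2 \right].
\end{aligned}
\]
```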
In recent years, the Internet of Things (IoT) has gradually developed applications such as collecting sensory data and building intelligent services, which has led to an explosion in mobile data traffic. Meanwhile, with the rapid development of artificial intelligence, semantic communication has attracted great attention as a new communication paradigm. However, for IoT devices, processing image information efficiently in real time is an essential task for the rapid transmission of semantic information. With the increase of model parameters in deep learning methods, the model inference time in sensor devices continues to increase. In contrast, the Pulse Coupled Neural Network (PCNN) has fewer parameters, making it more suitable for processing real-time scene tasks such as image segmentation, which lays the foundation for real-time, effective, and accurate image transmission. However, the parameters of the PCNN are determined by trial and error, which limits its application. To overcome this limitation, an Improved Pulse Coupled Neural Network (IPCNN) model is proposed in this work. The IPCNN constructs the connection between the static properties of the input image and the dynamic properties of the neurons, and all its parameters are set adaptively, which avoids the inconvenience of manual setting in traditional methods and improves the adaptability of parameters to different types of images. Experimental segmentation results demonstrate the validity and efficiency of the proposed self-adaptive parameter setting method of IPCNN on gray images and natural images from the Matlab and Berkeley Segmentation Datasets. The IPCNN method achieves a better segmentation result without training, providing a new solution for the real-time transmission of image semantic information.
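For orientation, a simplified PCNN iteration is sketched below with hand-fixed parameters; IPCNN's contribution is precisely to set such parameters adaptively from the image statistics, which this sketch does not attempt.

```python
# Simplified PCNN: linking input from neighbor pulses, internal activity, pulse output,
# and an exponentially decaying dynamic threshold that jumps after each firing.
import numpy as np
from scipy.ndimage import convolve

def simplified_pcnn(stimulus: np.ndarray, beta=0.2, alpha_e=0.7, v_e=20.0, v_l=1.0, steps=10):
    """stimulus: gray image normalized to [0, 1]. Returns the accumulated firing map."""
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])  # linking weights
    y = np.zeros_like(stimulus)
    e = np.ones_like(stimulus)                            # dynamic threshold
    fired = np.zeros_like(stimulus)
    for _ in range(steps):
        l = v_l * convolve(y, kernel, mode="constant")    # linking input from neighbors
        u = stimulus * (1.0 + beta * l)                   # internal activity
        y = (u > e).astype(float)                         # pulse output
        e = np.exp(-alpha_e) * e + v_e * y                # threshold decays, jumps after firing
        fired += y
    return fired
```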
Graph learning, when used as a semi-supervised learning (SSL) method, performs well for classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local maximum constraint for the active learning acquisition function that determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739) while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to further increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improvements in accuracy, our approach can greatly reduce the number of labeled pixels needed to achieve the same level of accuracy as with randomly selected labeled pixels.
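The graph local maximum constraint can be sketched as follows: unlabeled nodes are visited in decreasing order of acquisition value and kept only if their value dominates that of their graph neighbors, which naturally spreads the batch out over the graph. The acquisition function itself (e.g., Model Change) is supplied by the caller and is not reproduced here.

```python
# Select a batch of unlabeled nodes that are local maxima of the acquisition function.
import numpy as np
import scipy.sparse as sp

def local_max_batch(acquisition: np.ndarray, adjacency: sp.csr_matrix,
                    unlabeled: np.ndarray, batch_size: int) -> np.ndarray:
    """acquisition: (n,) scores; adjacency: sparse (n, n) graph; unlabeled: candidate indices."""
    selected = []
    for i in unlabeled[np.argsort(-acquisition[unlabeled])]:          # highest scores first
        neighbors = adjacency.indices[adjacency.indptr[i]:adjacency.indptr[i + 1]]
        if np.all(acquisition[i] >= acquisition[neighbors]):          # graph local maximum check
            selected.append(i)
        if len(selected) == batch_size:
            break
    return np.array(selected)
```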
The growing demand for energy-efficient solutions has led to increased interest in analyzing building facades, as buildings contribute significantly to energy consumption in urban environments. However, conventional image segmentation methods often struggle to capture fine details such as edges and contours, limiting their effectiveness in identifying areas prone to energy loss. To address this challenge, we propose a novel segmentation methodology that combines object-wise processing with a two-stage deep learning model, Cascade U-Net. Object-wise processing isolates components of the facade, such as walls and windows, for independent analysis, while Cascade U-Net incorporates contour information to enhance segmentation accuracy. The methodology involves four steps: object isolation, which crops and adjusts the image based on bounding boxes; contour extraction, which derives contours; image segmentation, which modifies and reuses contours as guide data in Cascade U-Net to segment areas; and segmentation synthesis, which integrates the results obtained for each object to produce the final segmentation map. Applied to a dataset of Korean building images, the proposed method significantly outperformed traditional models, demonstrating improved accuracy and the ability to preserve critical structural details. Furthermore, we applied this approach to classify window thermal loss in real-world scenarios using infrared images, showing its potential to identify windows vulnerable to energy loss. Notably, our Cascade U-Net, which builds upon the relatively lightweight U-Net architecture, also exhibited strong performance, reinforcing the practical value of this method. Our approach offers a practical solution for enhancing energy efficiency in buildings by providing more precise segmentation results.
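The first two steps of the pipeline (object isolation from a bounding box, then contour extraction) can be sketched with OpenCV as below; the Canny thresholds are illustrative assumptions, and the Cascade U-Net stages that consume the resulting contour guide are not reproduced.

```python
# Crop a facade object from its bounding box and derive a contour guide channel for it.
import cv2
import numpy as np

def isolate_and_extract_contours(image: np.ndarray, box: tuple[int, int, int, int]):
    x, y, w, h = box
    crop = image[y:y + h, x:x + w]                       # object isolation from the bounding box
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                     # edge map feeding contour extraction
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    guide = np.zeros_like(gray)
    cv2.drawContours(guide, contours, -1, color=255, thickness=1)   # contour guide channel
    return crop, guide
```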
Deep learning has been extensively applied to medical image segmentation, resulting in significant advancements in the field of deep neural networks for medical image segmentation since the notable success of U-Net in 2015. However, the application of deep learning models to ocular medical image segmentation poses unique challenges, especially compared to other body parts, due to the complexity, small size, and blurriness of such images, coupled with the scarcity of data. This article aims to provide a comprehensive review of medical image segmentation from two perspectives: the development of deep network structures and the application of segmentation in ocular imaging. Initially, the article introduces an overview of medical imaging, data processing, and performance evaluation metrics. Subsequently, it analyzes recent developments in U-Net-based network structures. Finally, for the segmentation of ocular medical images, the application of deep learning is reviewed and categorized by the type of ocular tissue.