Coronavirus has infected more than 753 million people, ranging in severity from one person to another, and more than six million infected people have died worldwide. Computer-aided diagnosis (CAD) with artificial intelligence (AI) has shown outstanding performance in effectively diagnosing this virus in real time. Computed tomography is a complementary diagnostic tool that can clarify the damage of COVID-19 in the lungs even before symptoms appear in patients. This paper conducts a systematic literature review of deep learning methods for segmenting COVID-19 infection in the lungs. We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow method. This research aims to systematically analyze the supervised deep learning methods, open-resource datasets, data augmentation methods, and loss functions used for various segment shapes of COVID-19 infection in computed tomography (CT) chest images. We selected 56 primary studies relevant to the topic of the paper and compared different aspects of the algorithms used to segment infected areas in the CT images. Deep learning methods for segmenting infected areas still need further development to predict smaller regions of infection at the beginning of their appearance.
AIM: To develop a deep learning-based model for automatic retinal vascular segmentation, analyzing and comparing parameters under diverse glucose metabolic status (normal, prediabetes, diabetes), and to assess the potential of artificial intelligence (AI) in image segmentation and retinal vascular parameters for predicting prediabetes and diabetes. METHODS: Retinal fundus photos from 200 normal individuals, 200 prediabetic patients, and 200 diabetic patients (600 eyes in total) were used. The U-Net network served as the foundational architecture for retinal artery-vein segmentation. An automatic segmentation and evaluation system for retinal vascular parameters, encompassing 26 parameters, was trained. RESULTS: Significant differences were found in retinal vascular parameters across the normal, prediabetes, and diabetes groups, including artery diameter (P=0.008), fractal dimension (P=0.000), vein curvature (P=0.003), C-zone artery branching vessel count (P=0.049), C-zone vein branching vessel count (P=0.041), artery branching angle (P=0.005), vein branching angle (P=0.001), artery angle asymmetry degree (P=0.003), vessel length density (P=0.000), and vessel area density (P=0.000), totaling 10 parameters. CONCLUSION: The deep learning-based model facilitates retinal vascular parameter identification and quantification, revealing significant between-group differences. These parameters show potential as biomarkers for prediabetes and diabetes.
When existing deep learning models are used for road extraction from high-resolution images, they are easily affected by noise factors such as tree and building occlusion and complex backgrounds, resulting in incomplete road extraction and low accuracy. We propose introducing spatial and channel attention modules into the convolutional neural network ConvNeXt. ConvNeXt is then used as the backbone network in cooperation with the perceptual analysis network UPerNet, retaining the semantic segmentation detection head, to build a new model, ConvNeXt-UPerNet, that suppresses noise interference. Training on the open-source DeepGlobe and CHN6-CUG datasets and introducing Dice loss on top of cross-entropy loss solves the problem of positive and negative sample imbalance. Experimental results show that the new network model achieves the following performance on the DeepGlobe dataset: 79.40% precision (Pre), 97.93% accuracy (Acc), 69.28% intersection over union (IoU), and 83.56% mean intersection over union (MIoU). On the CHN6-CUG dataset, the model achieves 78.17% Pre, 97.63% Acc, 65.4% IoU, and 81.46% MIoU. Compared with other network models, the fused ConvNeXt-UPerNet model extracts road information better in the presence of the noise contained in high-resolution remote sensing images. It also achieves unified perception of multiscale image feature information, ultimately improving the generalization ability of deep learning in extracting complex roads from high-resolution remote sensing images.
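The cross-entropy-plus-Dice loss combination mentioned above is a common remedy for class imbalance in road segmentation, since the Dice term directly rewards overlap with the (scarce) road pixels. The sketch below is a minimal, framework-free illustration of that idea on flattened binary masks; the equal weighting and toy predictions are illustrative assumptions, not the paper's exact formulation.

```python
import math

def dice_loss(probs, targets, eps=1e-6):
    """Soft Dice loss for binary segmentation: 1 - (2*intersection / union)."""
    inter = sum(p * t for p, t in zip(probs, targets))
    union = sum(probs) + sum(targets)
    return 1.0 - (2.0 * inter + eps) / (union + eps)

def bce_loss(probs, targets, eps=1e-7):
    """Pixel-wise binary cross-entropy, averaged over the mask."""
    total = 0.0
    for p, t in zip(probs, targets):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(probs)

def combined_loss(probs, targets, dice_weight=1.0):
    """Cross-entropy plus weighted Dice, the imbalance-aware combination."""
    return bce_loss(probs, targets) + dice_weight * dice_loss(probs, targets)

mask = [1, 0, 1, 0]            # ground-truth road pixels (toy example)
good = [0.9, 0.1, 0.8, 0.2]    # confident, mostly correct predictions
bad  = [0.4, 0.6, 0.5, 0.5]    # uninformative predictions
```

A better prediction yields a strictly lower combined loss, which is the property the training objective relies on.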
Image segmentation is crucial for various research areas. Many computer vision applications depend on segmenting images to understand the scene, such as autonomous driving, surveillance systems, robotics, and medical imaging. With the recent advances in deep learning (DL) and its impressive results in image segmentation, more attention has been drawn to its use in medical image segmentation. This article introduces a survey of the state-of-the-art deep convolutional neural network (CNN) models and mechanisms utilized in image segmentation. First, segmentation models are categorized based on their model architecture and primary working principle. Then, CNN categories are described, and various models are discussed within each category. Compared with other existing surveys, several applications with multiple architectural adaptations are discussed within each category. A comparative summary is included to give the reader insight into the architectures used in different applications and datasets. This study focuses on medical image segmentation applications, where the most widely used architectures are illustrated, and other promising models that have proven their success in different domains are suggested. Finally, the present work discusses current limitations and solutions, along with future trends in the field.
This paper presents a novel computerized technique for the segmentation of nuclei in hematoxylin and eosin (H&E) stained histopathology images. The purpose of this study is to overcome the challenges faced in automated nuclei segmentation due to the diversity of nuclei structures that arise from differences in tissue types and staining protocols, as well as the segmentation of variable-sized and overlapping nuclei. To this end, the proposed approach uses an ensemble of the U-Net architecture with various convolutional neural network (CNN) architectures as encoder backbones, along with stain normalization and test-time augmentation, to improve segmentation accuracy. Additionally, this paper employs a Structure-Preserving Color Normalization (SPCN) technique as a preprocessing step for stain normalization. The proposed model was trained and tested on both single-organ and multi-organ datasets, yielding an F1 score of 84.11%, mean Intersection over Union (IoU) of 81.67%, Dice score of 84.11%, accuracy of 92.58%, and precision of 83.78% on the multi-organ dataset, and an F1 score of 87.04%, mean IoU of 86.66%, Dice score of 87.04%, accuracy of 96.69%, and precision of 87.57% on the single-organ dataset. These findings demonstrate that the proposed model ensemble, coupled with the right pre-processing and post-processing techniques, enhances nuclei segmentation capabilities.
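Test-time augmentation (TTA), as used above, predicts on several transformed copies of an image, inverts each transform, and averages the results. The following is a minimal sketch of that averaging loop with a single horizontal-flip augmentation and a toy stand-in model; the real pipeline would use the trained U-Net ensemble and more transforms.

```python
def hflip(img):
    """Horizontally flip a 2D image given as a list of rows."""
    return [row[::-1] for row in img]

def tta_predict(model, img):
    """TTA: predict on the original and the flipped image, undo the flip
    on the second prediction, then average pixel-wise."""
    preds = [model(img), hflip(model(hflip(img)))]
    h, w = len(img), len(img[0])
    return [[sum(p[i][j] for p in preds) / len(preds) for j in range(w)]
            for i in range(h)]

# Toy stand-in "model" (hypothetical): smooth each pixel toward its row mean.
def toy_model(img):
    return [[0.5 * v + 0.5 * (sum(row) / len(row)) for v in row] for row in img]
```

Because the toy model is flip-equivariant, TTA here reproduces the direct prediction; for a real network, the two branches differ slightly and the average is typically more stable.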
Neurons can be abstractly represented as skeletons due to the filamentous nature of neurites. With the rapid development of imaging and image analysis techniques, an increasing amount of neuron skeleton data is being produced. In some scientific studies, it is necessary to dissect the axons and dendrites, which is typically done manually and is both tedious and time-consuming. To automate this process, we have developed a method that relies solely on neuronal skeletons using Geometric Deep Learning (GDL). We demonstrate the effectiveness of this method on pyramidal neurons in mammalian brains, and the results are promising for its application in neuroscience studies.
In the shape analysis community, decomposing a 3D shape into meaningful parts has become a topic of interest. 3D model segmentation is widely used in tasks such as shape deformation, partial shape matching, skeleton extraction, shape correspondence, shape annotation, and texture mapping. Numerous approaches have attempted to provide better segmentation solutions; however, the majority of previous techniques used handcrafted features, which are usually focused on a particular attribute of 3D objects and are therefore difficult to generalize. In this paper, we propose a three-stage approach that uses a multi-view recurrent neural network to automatically segment a 3D shape into visually meaningful sub-meshes. The first stage involves normalizing and scaling a 3D model to fit within the unit sphere and rendering the object from different views. Contrasting viewpoints, however, might not be associated with one another, and a 3D region can map to totally distinct outcomes depending on the viewpoint. To address this, we run each view through a shared-weights CNN and a Bolster block to create a probability boundary map. The Bolster block models the relationships between areas across different views, which helps refine the data. In stage two, the feature maps generated in the previous step are correlated using a recurrent neural network to obtain compatible fine-detail responses for each view. Finally, a fully connected layer returns coherent edges, which are then back-projected onto the 3D object to produce the final segmentation. Experiments on the Princeton Segmentation Benchmark dataset show that our proposed method is effective for mesh segmentation tasks.
This research investigates a deep learning-based approach for defect detection in steel production using the Severstal steel dataset. The developed system integrates DenseNet121 for classification and DeepLabV3 for segmentation. DenseNet121 achieved high accuracy in defect classification, reaching 92.34% accuracy during testing. This model outperformed benchmark models such as VGG16 and ResNet50, which achieved 72.59% and 92.01% accuracy, respectively. Similarly, for segmentation, DeepLabV3 showed high performance in localizing and categorizing defects, achieving a Dice coefficient of 84.21% during training and 69.77% during validation. The dataset includes steel with four different types of defects, and the DeepLab model was particularly effective at detecting Defect 4, with a Dice coefficient of 87.69% in testing. The model performs suboptimally in the segmentation of Defect 1, achieving an accuracy of 64.81%. The overall system's integration of classification and segmentation, alongside thresholding techniques, resulted in improved precision (92.31%) and reduced false positives. Overall, the proposed deep learning system achieved superior defect detection accuracy and reliability compared to existing models in the literature.
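The Dice coefficient reported above is the standard overlap metric for segmentation masks. A minimal reference computation on flattened binary masks looks like this (the epsilon guard for empty masks is a common convention, not something the paper specifies):

```python
def dice_coefficient(pred, target, eps=1e-6):
    """Dice = 2|A ∩ B| / (|A| + |B|) on flattened binary masks.
    eps avoids division by zero when both masks are empty."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)
```

For example, a prediction sharing one of two foreground pixels with the target scores 0.5, and a perfect prediction scores ~1.0.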
Image semantic segmentation is an important branch of computer vision with a wide variety of practical applications, such as medical image analysis, autonomous driving, and virtual or augmented reality. In recent years, due to the remarkable performance of the transformer and the multilayer perceptron (MLP) in computer vision, comparable to that of the convolutional neural network (CNN), a substantial amount of image semantic segmentation work has aimed at developing different types of deep learning architecture. This survey aims to provide a comprehensive overview of deep learning methods in the field of general image semantic segmentation. First, the commonly used image segmentation datasets are listed. Next, extensive pioneering works are studied in depth from multiple perspectives (e.g., network structures, feature fusion methods, attention mechanisms) and are divided into four categories according to their network architectures: CNN-based, transformer-based, MLP-based, and others. Furthermore, this paper presents some common evaluation metrics and compares the respective advantages and limitations of popular techniques, both in terms of architectural design and their experimental value on the most widely used datasets. Finally, possible future research directions and challenges are discussed for the reference of other researchers.
In recent times, Internet of Things (IoT) and Deep Learning (DL) models have revolutionized the diagnostic procedures for Diabetic Retinopathy (DR) in its early stages, which can save the patient from vision loss. At the same time, recent advancements in Machine Learning (ML) and DL models help in developing Computer-Aided Diagnosis (CAD) models for DR recognition and grading. Against this background, the current research designs and develops an IoT-enabled effective Neutrosophic-based Segmentation with Optimal Deep Belief Network (NS-ODBN) model for the diagnosis of DR. The presented model uses the Interval Neutrosophic Set (INS) technique to distinguish the diseased areas in the fundus image. In addition, three feature extraction techniques, namely histogram features, texture features, and wavelet features, are used in this study. Besides, an Optimal Deep Belief Network (ODBN) model is utilized as the classification model for DR. The ODBN model uses the Shuffled Shepherd Optimization (SSO) algorithm to tune the hyperparameters of the DBN technique optimally, which significantly increases the detection performance of the model. The presented technique was experimentally evaluated using a benchmark DR dataset, and the results were validated under different evaluation metrics. The resultant values infer that the proposed NS-ODBN technique is a more promising candidate than other existing techniques.
The coronavirus disease 2019 (COVID-19) has severely disrupted both human life and the health care system. Timely diagnosis and treatment have become increasingly important; however, the distribution and size of lesions vary widely among individuals, making it challenging to accurately diagnose the disease. This study proposed a deep learning disease diagnosis model based on weakly supervised learning and clustering visualization (W_CVNet) that fuses classification with segmentation. First, the data were preprocessed: an optimizable weakly supervised segmentation preprocessing method (O-WSSPM) was used to remove redundant data and solve the category imbalance problem. Second, a deep learning fusion method was used for feature extraction and classification recognition. A dual asymmetric complementary bilinear feature extraction method (D-CBM) was used to fully extract complementary features, which solved the problem of insufficient feature extraction by a single deep learning network. Third, an unsupervised learning method based on Fuzzy C-Means (FCM) clustering was used to segment and visualize COVID-19 lesions, enabling physicians to accurately assess lesion distribution and disease severity. In this study, 5-fold cross-validation was used, and the results showed that the network had an average classification accuracy of 85.8%, outperforming six recent advanced classification models. W_CVNet can effectively provide physicians with automated diagnostic aid to determine whether the disease is present and, for COVID-19 patients, to further predict the area of the lesion.
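The FCM step above alternates between fuzzy membership updates and weighted center updates. The following is a minimal, framework-free two-cluster sketch on 1-D intensities; the deterministic min/max initialization and the fuzzifier m=2 are illustrative assumptions, and the paper's actual clustering runs on image features rather than a toy list.

```python
def fcm_1d(data, m=2.0, iters=50, eps=1e-9):
    """Two-cluster Fuzzy C-Means on 1-D intensities (minimal sketch).
    Returns the two cluster centers, sorted ascending."""
    centers = [min(data), max(data)]      # crude but deterministic init
    power = 2.0 / (m - 1.0)
    for _ in range(iters):
        # Membership of x in cluster i: u_i = 1 / sum_j (d_i / d_j)^power
        memberships = []
        for x in data:
            d = [abs(x - ck) + eps for ck in centers]
            memberships.append([1.0 / sum((d[i] / d[j]) ** power
                                          for j in range(2))
                                for i in range(2)])
        # Centers become the u^m-weighted means of the data.
        centers = [sum((u[i] ** m) * x for u, x in zip(memberships, data)) /
                   sum(u[i] ** m for u in memberships)
                   for i in range(2)]
    return sorted(centers)
```

On two well-separated intensity groups, the centers converge near the group means, which is what makes the soft memberships usable as a lesion-versus-background visualization.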
So far, slope collapse detection has mainly depended on manpower, which has the following drawbacks: (1) low reliability, (2) high risk to human safety, and (3) high labor cost. To improve the efficiency and reduce the human investment of slope collapse detection, this paper proposes an intelligent detection method based on deep learning. In this method, we first use deep learning-based image segmentation to find the slope area in the captured scene image. Then a foreground motion detection method is used to detect motion in the slope area. Finally, we design a lightweight convolutional neural network with an attention mechanism to recognize the detected moving objects, thus eliminating interfering motion and increasing the detection accuracy. Experimental results on artificial data and relevant scene data show that the proposed method can effectively identify slope collapse and has practical value and promising prospects.
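A common baseline for the foreground motion detection step is simple frame differencing: pixels whose intensity changes by more than a threshold between consecutive frames are flagged as moving. The sketch below illustrates that baseline; the threshold value is an arbitrary assumption, and the paper may use a more sophisticated detector.

```python
def frame_difference(prev, curr, thresh=25):
    """Binary motion mask by absolute frame differencing on grayscale
    frames given as lists of rows; 1 marks a pixel that moved."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

In the proposed pipeline this mask would be restricted to the segmented slope area, and the flagged regions would then be passed to the lightweight classifier.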
Many existing techniques for acquiring dual-energy X-ray absorptiometry (DXA) images are unable to accurately distinguish between bone and soft tissue. For the most part, this failure stems from bone shape variability, noise and low contrast in DXA images, inconsistent X-ray beam penetration producing shadowing effects, and person-to-person variations. This work explores the feasibility of using state-of-the-art deep learning semantic segmentation models, namely fully convolutional networks (FCNs), SegNet, and U-Net, to distinguish femur bone from soft tissue. We investigated the performance of the deep learning algorithms with reference to some of our previously applied conventional image segmentation techniques (i.e., a decision-tree-based method using a pixel label decision tree [PLDT] and another method using Otsu's thresholding) for femur DXA images, and we measured accuracy based on the average Jaccard index, sensitivity, and specificity. Deep learning models using SegNet, U-Net, and an FCN achieved average segmentation accuracies of 95.8%, 95.1%, and 97.6%, respectively, compared to PLDT (91.4%) and Otsu's thresholding (72.6%). Thus, we conclude that an FCN outperforms the other deep learning and conventional techniques when segmenting femur bone from soft tissue in DXA images. Accurate femur segmentation improves bone mineral density computation, which in turn enhances the diagnosis of osteoporosis.
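The three evaluation metrics named above all derive from the confusion counts of two binary masks. A minimal reference computation is sketched below; the empty-denominator convention of returning 1.0 is an assumption for robustness, not something the paper states.

```python
def segmentation_metrics(pred, target):
    """Jaccard index, sensitivity, and specificity from flattened binary masks."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 0)
    jaccard     = tp / (tp + fp + fn) if tp + fp + fn else 1.0
    sensitivity = tp / (tp + fn) if tp + fn else 1.0
    specificity = tn / (tn + fp) if tn + fp else 1.0
    return jaccard, sensitivity, specificity
```

Note that the Jaccard index penalizes both false positives and false negatives, which is why it is stricter than plain pixel accuracy on imbalanced masks.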
Many plant species have a startling degree of morphological similarity, making it difficult to segment and categorize them reliably. Unknown plant species can be challenging to classify and segment using deep learning. While deep learning architectures have helped improve classification accuracy, the resulting models often lack flexibility and require a large dataset to train. For the sake of taxonomy, this research proposes a hybrid method for categorizing guava, potato, and java plum leaves. Two new approaches are used to form the hybrid model suggested here. The guava, potato, and java plum plant species have been successfully segmented using the first model, built on the MobileNetV2-UNET architecture. As a second model, we use a Plant Species Detection Stacking Ensemble Deep Learning Model (PSD-SE-DLM) to identify potato, java plum, and guava. The proposed models were trained using data collected in Punjab, Pakistan, consisting of images of healthy and diseased leaves from guava, java plum, and potato plants. These datasets are known as PLSD and PLSSD. Accuracy levels of 99.84% and 96.38% were achieved for the suggested PSD-SE-DLM and MobileNetV2-UNET models, respectively.
In an urban city, the daily challenge of managing cleanliness is a primary aspect of routine life that requires a large number of resources, manual labour, and budget. Traditional street cleaning involves street sweepers traveling to different metropolitan areas, manually verifying whether a street requires cleaning, and taking action. This research presents a novel street garbage recognizing robotic navigation technique based on detecting the city's street-level images and multi-level segmentation. For a large volume of data, deep learning-based methods can achieve a higher level of classification, object detection, and accuracy than other learning algorithms. The Histogram of Oriented Gradients (HOG) is used to extract features, while a deep learning technique classifies the images in the ground-level segmentation process. In this paper, we use mobile edge computing to process street images in advance and filter out the pictures that meet our needs, which significantly improves recognition efficiency. To measure the cleanliness of urban streets, our street cleanliness assessment approach provides a multi-level assessment model across different layers. Besides, with ground-level segmentation using a deep neural network, a novel navigation strategy is proposed for robotic classification. The Single Shot MultiBox Detector (SSD) discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. Using deep learning for garbage recognition, the SSD can classify and detect garbage accurately and autonomously. Experimental results show that accurate street garbage detection and navigation can reach approximately the same cleaning effectiveness as traditional methods.
An automated system is proposed for the detection and classification of gastrointestinal (GI) abnormalities. The proposed method operates as a two-stage pipeline: (a) segmentation of the bleeding infection region and (b) classification of GI abnormalities by deep learning. First, the bleeding region is segmented using a hybrid approach: a threshold is applied to each channel extracted from the original RGB image, and all channels are then merged through mutual information and pixel-based techniques, producing the segmented image. Texture and deep learning features are extracted for the classification task. A transfer learning (TL) approach is used to extract the deep features, and the Local Binary Pattern (LBP) method is used for the texture features. Later, an entropy-based feature selection approach is implemented to select the best features from both the deep learning and texture vectors. The selected optimal features are combined with a serial-based technique, and the resulting vector is fed to an ensemble learning classifier. The experimental process is evaluated on two datasets: a private dataset and KVASIR. The accuracy achieved is 99.8% for the private dataset and 86.4% for the KVASIR dataset. This confirms that the proposed method is effective in detecting and classifying GI abnormalities and exceeds comparable methods.
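One simple way to realize entropy-based feature selection, as named above, is to score each feature column by the Shannon entropy of its value histogram and keep the most informative columns. The sketch below is a hypothetical minimal version; the bin count, ranking direction, and toy matrix are assumptions, and the paper's exact selection criterion may differ.

```python
import math

def shannon_entropy(values, bins=4):
    """Histogram-based Shannon entropy (bits) of one feature column."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0     # constant column -> single bin
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def select_top_k(feature_matrix, k):
    """Rank feature columns by entropy and keep the k most informative,
    returning their (sorted) column indices."""
    cols = list(zip(*feature_matrix))
    ranked = sorted(range(len(cols)),
                    key=lambda i: shannon_entropy(list(cols[i])),
                    reverse=True)
    return sorted(ranked[:k])
```

A constant column carries zero entropy and is dropped first, which matches the intuition that uninformative features should not reach the classifier.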
Every day, websites and personal archives create more and more photos, and the size of these archives is immense. The ease of use of these huge digital image collections contributes to their popularity. However, not all of these collections provide relevant indexing information, and as a result it is difficult to discover the data a user is interested in. Therefore, in order to determine the significance of the data, it is important to identify the contents in an informative manner. Image annotation is one of the most challenging problems in multimedia research and computer vision. Hence, in this paper, an Adaptive Convolutional Deep Learning Model (ACDLM) is developed for automatic image annotation. Initially, the databases are collected from open-source systems, consisting of some labelled images (for the training phase) and some unlabeled images {Corel 5K, MSRC v2}. After that, the images are sent to pre-processing steps such as colour space quantization and texture colour class mapping. The pre-processed images are sent to the segmentation approach for an efficient labelling technique using J-image segmentation (JSEG). The final step is automatic annotation using the ACDLM, which is a combination of a Convolutional Neural Network (CNN) and the Honey Badger Algorithm (HBA). Based on the proposed classifier, the unlabeled images are labelled. The proposed methodology is implemented in MATLAB, and performance is evaluated by metrics such as accuracy, precision, recall, and F1-measure.
Automatic segmentation of the liver and hepatic lesions from abdominal 3D computed tomography (CT) images is a fundamental task in computer-assisted liver surgery planning. However, due to complex backgrounds, ambiguous boundaries, heterogeneous appearances, and the highly varied shape of the liver, accurate liver segmentation and tumor detection are still challenging problems. To address these difficulties, we propose an automatic segmentation framework based on a 3D U-Net with dense connections and globally optimized refinement. First, a deep U-Net architecture with dense connections is trained to learn the probability map of the liver. The probability map then goes into the following refinement step as the initial surface and prior shape. The segmentation of the liver tumor is based on a similar network architecture, aided by the liver segmentation results. In order to reduce the influence of surrounding tissues with intensity and texture behavior similar to the tumor region, during training the element-wise product of the image and the liver label (I × liver label) is used as the network input for tumor segmentation, which improves segmentation accuracy. The proposed method is fully automatic without any user interaction. Both qualitative and quantitative results reveal that the proposed approach is efficient and accurate for liver volume estimation in clinical applications. The high correlation between the automatic and manual references shows that the proposed method can be good enough to replace the time-consuming and non-reproducible manual segmentation method.
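The I × liver-label masking described above is just an element-wise product that zeroes out every voxel outside the predicted liver, so the tumor network never sees irrelevant surrounding tissue. A 2D sketch of that operation:

```python
def mask_input(image, liver_label):
    """Element-wise product I × liver_label on 2D arrays (lists of rows):
    intensities outside the liver mask become zero."""
    return [[px * lb for px, lb in zip(irow, lrow)]
            for irow, lrow in zip(image, liver_label)]
```

In the full 3D pipeline the same product would be applied per slice (or on the whole volume) before the tumor-segmentation network is trained.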
Accurate segmentation of CT images of liver tumors is an important adjunct for the diagnosis and treatment of liver diseases. In recent years, thanks to great improvements in hardware, many deep learning-based methods have been proposed for automatic liver segmentation. Among them are the plain neural networks headed by FCN and the residual neural networks headed by ResNet, both of which have many variations and have achieved notable results in medical image segmentation. In this paper, we select five representative structures, i.e., FCN, U-Net, SegNet, ResNet, and DenseNet, to investigate their performance on liver segmentation. Since the original ResNet and DenseNet cannot perform image segmentation directly, we make some adjustments so that they can perform liver segmentation. Our experimental results show that DenseNet performs the best on liver segmentation, followed by ResNet. Both perform much better than SegNet, U-Net, and FCN. Among SegNet, U-Net, and FCN, U-Net performs the best, followed by SegNet; FCN performs the worst.
The accurate segmentation of retinal vessels is a challenging task due to the presence of various pathologies as well as the low contrast of thin vessels and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation at the cost of high computational complexity. To address these challenges and to reduce the computational complexity, we propose a lightweight convolutional neural network (CNN)-based encoder-decoder deep learning model for accurate retinal vessel segmentation. The proposed deep learning model consists of an encoder-decoder architecture with bottleneck layers that perform depth-wise squeezing, followed by full convolution, and finally depth-wise stretching. The inspiration for the proposed model is taken from the recently developed Anam-Net model, which was tested on CT images for COVID-19 identification. For our lightweight model, we used a stack of two 3 × 3 convolution layers (without spatial pooling in between) instead of the single 3 × 3 convolution layer proposed in Anam-Net, to increase the receptive field and to reduce the trainable parameters. The proposed method includes fewer filters in all convolutional layers than the original Anam-Net and does not increase the number of filters as the resolution decreases. These modifications do not compromise segmentation accuracy, but they make the architecture significantly lighter in terms of the number of trainable parameters and computation time. The proposed architecture has comparatively fewer parameters (1.01M) than Anam-Net (4.47M), U-Net (31.05M), SegNet (29.50M), and most other recent works. The proposed model does not require any problem-specific pre- or post-processing, nor does it rely on handcrafted features. In addition, being efficient in terms of segmentation accuracy as well as lightweight makes the proposed method a suitable candidate for use in screening platforms at the point of care. We evaluated our proposed model on the open-access datasets DRIVE, STARE, and CHASE_DB. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, the fully convolutional network (FCN), SegNet, CCNet, ResWNet, the residual connection-based encoder-decoder network (RCED-Net), and the scale-space approximation network (SSANet), in terms of {Dice coefficient, sensitivity (SN), accuracy (ACC), and area under the ROC curve (AUC)}, with scores of {0.8184, 0.8561, 0.9669, and 0.9868} on the DRIVE dataset, {0.8233, 0.8581, 0.9726, and 0.9901} on the STARE dataset, and {0.8138, 0.8604, 0.9752, and 0.9906} on the CHASE_DB dataset. Additionally, we performed cross-training experiments on the DRIVE and STARE datasets. The results of these experiments indicate the generalization ability and robustness of the proposed model.
Abstract: Coronavirus has infected more than 753 million people, with severity ranging from one person to another, and more than six million infected people have died worldwide. Computer-aided diagnosis (CAD) with artificial intelligence (AI) has shown outstanding performance in diagnosing this virus effectively in real time. Computed tomography is a complementary diagnostic tool that can clarify the damage COVID-19 causes in the lungs even before symptoms appear in patients. This paper conducts a systematic literature review of deep learning methods for segmenting COVID-19 infection in the lungs. We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow method. This research aims to systematically analyze the supervised deep learning methods, open-source datasets, data augmentation methods, and loss functions used to segment the various shapes of COVID-19 infection in computed tomography (CT) chest images. We selected 56 primary studies relevant to the topic of the paper and compared different aspects of the algorithms used to segment infected areas in CT images. Deep learning segmentation of infected areas still has limitations: methods need further development to predict smaller regions of infection at the beginning of their appearance.
Funding: Supported by the Shenzhen Science and Technology Program (No. JCYJ20220530153604010).
Abstract: AIM: To develop a deep learning-based model for automatic retinal vascular segmentation, analyzing and comparing parameters under diverse glucose metabolic statuses (normal, prediabetes, diabetes), and to assess the potential of artificial intelligence (AI) in image segmentation and of retinal vascular parameters for predicting prediabetes and diabetes. METHODS: Retinal fundus photos from 200 normal individuals, 200 prediabetic patients, and 200 diabetic patients (600 eyes in total) were used. The U-Net network served as the foundational architecture for retinal artery-vein segmentation. An automatic segmentation and evaluation system for retinal vascular parameters was trained, encompassing 26 parameters. RESULTS: Significant differences were found in retinal vascular parameters across the normal, prediabetes, and diabetes groups, including artery diameter (P=0.008), fractal dimension (P=0.000), vein curvature (P=0.003), C-zone artery branching vessel count (P=0.049), C-zone vein branching vessel count (P=0.041), artery branching angle (P=0.005), vein branching angle (P=0.001), artery angle asymmetry degree (P=0.003), vessel length density (P=0.000), and vessel area density (P=0.000), totaling 10 parameters. CONCLUSION: The deep learning-based model facilitates retinal vascular parameter identification and quantification, revealing significant differences. These parameters show potential as biomarkers for prediabetes and diabetes.
Funding: This work was supported in part by the Key Project of Natural Science Research of the Anhui Provincial Department of Education under Grant KJ2017A416, and in part by the Fund of the National Sensor Network Engineering Technology Research Center (No. NSNC202103).
Abstract: When existing deep learning models are used for road-extraction tasks on high-resolution images, they are easily affected by noise factors such as occlusion by trees and buildings and complex backgrounds, resulting in incomplete road extraction and low accuracy. We propose introducing spatial and channel attention modules into the convolutional neural network ConvNeXt. ConvNeXt is then used as the backbone network, cooperating with the perceptual analysis network UPerNet and retaining the semantic segmentation detection head, to build a new model, ConvNeXt-UPerNet, that suppresses noise interference. Training on the open-source DeepGlobe and CHN6-CUG datasets and adding DiceLoss on top of CrossEntropyLoss solves the problem of positive-negative sample imbalance. Experimental results show that the new network model achieves the following performance on the DeepGlobe dataset: 79.40% precision (Pre), 97.93% accuracy (Acc), 69.28% intersection over union (IoU), and 83.56% mean intersection over union (MIoU). On the CHN6-CUG dataset, the model achieves 78.17% Pre, 97.63% Acc, 65.4% IoU, and 81.46% MIoU. Compared with other network models, the fused ConvNeXt-UPerNet model extracts road information better in the presence of the noise contained in high-resolution remote sensing images. It also unifies the perception of multiscale image feature information, ultimately improving the generalization ability of deep learning technology for extracting complex roads from high-resolution remote sensing images.
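The combination of cross-entropy and Dice loss used above to counter class imbalance can be sketched in NumPy; the equal weighting (`ce_weight=0.5`) and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cross_entropy_loss(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy; pred holds foreground probabilities."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|); insensitive to the large background class."""
    inter = np.sum(pred * target)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps))

def combined_loss(pred, target, ce_weight=0.5):
    """Weighted sum of cross-entropy and Dice, countering road/background imbalance."""
    return ce_weight * cross_entropy_loss(pred, target) + (1 - ce_weight) * dice_loss(pred, target)
```

A perfect probability map drives both terms toward zero, while cross-entropy alone can stay low even when the rare road class is entirely missed.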
Funding: Supported by the Information Technology Industry Development Agency (ITIDA), Egypt (Project No. CFP181).
Abstract: Image segmentation is crucial for various research areas. Many computer vision applications depend on segmenting images to understand the scene, such as autonomous driving, surveillance systems, robotics, and medical imaging. With the recent advances in deep learning (DL) and its outstanding results in image segmentation, more attention has been drawn to its use in medical image segmentation. This article introduces a survey of the state-of-the-art deep convolutional neural network (CNN) models and mechanisms utilized in image segmentation. First, segmentation models are categorized based on their model architecture and primary working principle. Then, CNN categories are described, and various models are discussed within each category. Compared with other existing surveys, several applications with multiple architectural adaptations are discussed within each category. A comparative summary is included to give the reader insights into the architectures used in different applications and datasets. This study focuses on medical image segmentation applications, illustrating the most widely used architectures and suggesting other promising models that have proven their success in different domains. Finally, the present work discusses current limitations and solutions, along with future trends in the field.
Abstract: This paper presents a novel computerized technique for the segmentation of nuclei in hematoxylin and eosin (H&E) stained histopathology images. The purpose of this study is to overcome the challenges faced in automated nuclei segmentation due to the diversity of nuclei structures that arises from differences in tissue types and staining protocols, as well as the segmentation of variable-sized and overlapping nuclei. To this end, the approach proposed in this study uses an ensemble of the UNet architecture with various convolutional neural network (CNN) architectures as encoder backbones, along with stain normalization and test-time augmentation, to improve segmentation accuracy. Additionally, this paper employs a Structure-Preserving Color Normalization (SPCN) technique as a preprocessing step for stain normalization. The proposed model was trained and tested on both single-organ and multi-organ datasets, yielding an F1 score of 84.11%, mean intersection over union (IoU) of 81.67%, Dice score of 84.11%, accuracy of 92.58%, and precision of 83.78% on the multi-organ dataset, and an F1 score of 87.04%, mean IoU of 86.66%, Dice score of 87.04%, accuracy of 96.69%, and precision of 87.57% on the single-organ dataset. These findings demonstrate that the proposed model ensemble, coupled with the right pre-processing and post-processing techniques, enhances nuclei segmentation capabilities.
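Test-time augmentation, mentioned above, is conceptually simple: predict on transformed copies of the input, undo each transform on the output, and average. A minimal sketch (the flip set and the `model` callable are assumptions; the paper's exact augmentations are not specified here):

```python
import numpy as np

def tta_predict(model, image):
    """Test-time augmentation: run the model on flipped copies of the image,
    undo each flip on the prediction, and average the probability maps.
    `model` is any callable mapping an HxW image to an HxW probability map."""
    # (augment, inverse) pairs; flips are their own inverse
    transforms = [
        (lambda x: x,          lambda x: x),          # identity
        (lambda x: x[::-1, :], lambda x: x[::-1, :]), # vertical flip
        (lambda x: x[:, ::-1], lambda x: x[:, ::-1]), # horizontal flip
    ]
    preds = [inv(model(aug(image))) for aug, inv in transforms]
    return np.mean(preds, axis=0)
```

Because each inverse re-aligns the prediction with the original image, an equivariant model's averaged output is smoother and usually slightly more accurate at nuclei boundaries.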
Funding: Supported by the Simons Foundation, the National Natural Science Foundation of China (No. NSFC61405038), and the Fujian provincial fund (No. 2020J01453).
Abstract: Neurons can be abstractly represented as skeletons due to the filament nature of neurites. With the rapid development of imaging and image-analysis techniques, an increasing amount of neuron skeleton data is being produced. Some scientific studies require dissecting the axons and dendrites, which is typically done manually and is both tedious and time-consuming. To automate this process, we have developed a method that relies solely on neuronal skeletons, using Geometric Deep Learning (GDL). We demonstrate the effectiveness of this method on pyramidal neurons in mammalian brains, and the results are promising for its application in neuroscience studies.
Funding: Supported by the National Natural Science Foundation of China (61671397).
Abstract: In the shape analysis community, decomposing a 3D shape into meaningful parts has become a topic of interest. 3D model segmentation is widely used in tasks such as shape deformation, partial shape matching, skeleton extraction, shape correspondence, shape annotation, and texture mapping. Numerous approaches have attempted to provide better segmentation solutions; however, the majority of previous techniques used handcrafted features, which usually focus on a particular attribute of 3D objects and so are difficult to generalize. In this paper, we propose a three-stage approach that uses a multi-view recurrent neural network to automatically segment a 3D shape into visually meaningful sub-meshes. The first stage involves normalizing and scaling a 3D model to fit within the unit sphere and rendering the object from different views. Contrasting viewpoints, however, might not be associated, and a 3D region could map to totally distinct outcomes depending on the viewpoint. To address this, we run each view through a (shared-weights) CNN and a Bolster block to create a probability boundary map. The Bolster block models the area relationships between different views, which helps to improve and refine the data. In stage two, the feature maps generated in the previous step are correlated using a recurrent neural network to obtain compatible fine-detail responses for each view. Finally, a fully connected layer is used to return coherent edges, which are then back-projected onto the 3D object to produce the final segmentation. Experiments on the Princeton Segmentation Benchmark dataset show that our proposed method is effective for mesh segmentation tasks.
Abstract: This research investigates a deep learning-based approach for defect detection in steel production using the Severstal steel dataset. The developed system integrates DenseNet121 for classification and DeepLabV3 for segmentation. DenseNet121 achieved high accuracy in defect classification, reaching 92.34% accuracy during testing. This model significantly outperformed benchmark models such as VGG16 and ResNet50, which achieved 72.59% and 92.01% accuracy, respectively. Similarly, for segmentation, DeepLabV3 performed well in localizing and categorizing defects, achieving a Dice coefficient of 84.21% during training and 69.77% during validation. The dataset includes steel images with four different types of defects, and the DeepLab model was particularly effective at detecting Defect 4, with a Dice coefficient of 87.69% in testing. The model performs suboptimally in the segmentation of Defect 1, achieving an accuracy of 64.81%. The overall system's integration of classification and segmentation, alongside thresholding techniques, resulted in improved precision (92.31%) and reduced false positives. Overall, the proposed deep learning system achieved superior defect-detection accuracy and reliability compared to existing models in the literature.
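The Dice coefficient reported throughout these abstracts measures overlap between a predicted and a ground-truth mask. A minimal reference implementation (the function name and epsilon are our own conventions):

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    Ranges from 0 (no overlap) to 1 (identical masks)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    return (2.0 * inter + eps) / (pred.sum() + true.sum() + eps)
```

Unlike plain pixel accuracy, Dice ignores the (usually dominant) true-negative background, which is why it is the preferred metric for small defect regions.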
基金supported by the Major science and technology project of Hainan Province(Grant No.ZDKJ2020012)National Natural Science Foundation of China(Grant No.62162024 and 62162022)+1 种基金Key Projects in Hainan Province(Grant ZDYF2021GXJS003 and Grant ZDYF2020040)Graduate Innovation Project(Grant No.Qhys2021-187).
Abstract: Image semantic segmentation is an important branch of computer vision with a wide variety of practical applications, such as medical image analysis, autonomous driving, and virtual or augmented reality. In recent years, owing to the remarkable performance of the transformer and the multilayer perceptron (MLP) in computer vision, which is comparable to that of the convolutional neural network (CNN), a substantial amount of image semantic segmentation work has aimed at developing different types of deep learning architectures. This survey provides a comprehensive overview of deep learning methods in the field of general image semantic segmentation. First, the commonly used image segmentation datasets are listed. Next, extensive pioneering works are studied in depth from multiple perspectives (e.g., network structures, feature-fusion methods, attention mechanisms) and are divided into four categories according to network architecture: CNN-based architectures, transformer-based architectures, MLP-based architectures, and others. Furthermore, this paper presents some common evaluation metrics and compares the respective advantages and limitations of popular techniques, both in terms of architectural design and their experimental value on the most widely used datasets. Finally, possible future research directions and challenges are discussed for the reference of other researchers.
Abstract: In recent times, Internet of Things (IoT) and deep learning (DL) models have revolutionized the diagnostic procedures for diabetic retinopathy (DR) in its early stages, which can save patients from vision loss. At the same time, recent advancements in machine learning (ML) and DL models help in developing computer-aided diagnosis (CAD) models for DR recognition and grading. Against this background, the current research designs and develops an IoT-enabled effective neutrosophic-based segmentation with optimal deep belief network (ODBN) model, i.e., the NS-ODBN model, for the diagnosis of DR. The presented model uses an interval neutrosophic set (INS) technique to distinguish the diseased areas in fundus images. In addition, three feature-extraction techniques are used in this study: histogram features, texture features, and wavelet features. Besides, an optimal deep belief network (ODBN) model is utilized as the classification model for DR. The ODBN model uses the shuffled shepherd optimization (SSO) algorithm to tune the hyperparameters of the DBN technique optimally. The utilization of the SSO algorithm in the DBN model significantly increases the detection performance of the model. The presented technique was experimentally evaluated using a benchmark DR dataset, and the results were validated under different evaluation metrics. The resultant values infer that the proposed INS-ODBN technique is a more promising candidate than other existing techniques.
Funding: Supported by the Open Foundation of the Anhui Engineering Research Center of Intelligent Perception and Elderly Care, Chuzhou University (No. 2022OPA03), the Higher Education Natural Science Foundation of Anhui Province (No. KJ2021B01), and the Innovation Team Projects of Universities in Guangdong (No. 2022KCXTD057).
Abstract: The coronavirus disease 2019 (COVID-19) has severely disrupted both human life and the health-care system. Timely diagnosis and treatment have become increasingly important; however, the distribution and size of lesions vary widely among individuals, making it challenging to diagnose the disease accurately. This study proposed a deep learning disease-diagnosis model based on weakly supervised learning and clustering visualization (W_CVNet) that fuses classification with segmentation. First, the data were preprocessed. An optimizable weakly supervised segmentation preprocessing method (O-WSSPM) was used to remove redundant data and solve the category-imbalance problem. Second, a deep learning fusion method was used for feature extraction and classification recognition. A dual asymmetric complementary bilinear feature-extraction method (D-CBM) was used to fully extract complementary features, which solved the problem of insufficient feature extraction by a single deep learning network. Third, an unsupervised learning method based on Fuzzy C-Means (FCM) clustering was used to segment and visualize COVID-19 lesions, enabling physicians to accurately assess lesion distribution and disease severity. In this study, 5-fold cross-validation was used, and the results showed that the network had an average classification accuracy of 85.8%, outperforming six recent advanced classification models. W_CVNet can effectively provide physicians with automated diagnostic aid to determine whether the disease is present and, for COVID-19 patients, to further predict the area of the lesion.
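Fuzzy C-Means, the unsupervised clustering step named above, assigns each pixel a soft membership in every cluster rather than a hard label. A minimal 1-D sketch on raw intensities (the fuzzifier `m=2`, iteration count, and random initialization are common defaults, not the authors' settings):

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Minimal Fuzzy C-Means on a 1-D array of pixel intensities.
    Returns (centers, memberships); memberships[i, k] is the degree to
    which sample i belongs to cluster k (each row sums to 1)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # normalise initial memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)  # fuzzily weighted cluster centres
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        # standard FCM update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
    return centers, u
```

For lesion visualization, thresholding or arg-maxing the membership map of the "lesion-like" cluster yields the segmentation overlay.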
Funding: Supported in part by the National Science Foundation of Guangxi Province under Grant 2021JJA170199, and in part by the Research Project of Yellow River Engineering Consulting under No. 2021ky015.
Abstract: So far, slope collapse detection has mainly depended on manpower, which has the following drawbacks: (1) low reliability, (2) high risk to human safety, and (3) high labor cost. To improve the efficiency and reduce the human investment of slope collapse detection, this paper proposes an intelligent detection method for the task based on deep learning technology. In this method, we first use deep learning-based image segmentation to find the slope area in the captured scene image. Then a foreground motion detection method is used to detect motion in the slope area. Finally, we design a lightweight convolutional neural network with an attention mechanism to recognize the detected moving object, thus eliminating interfering motion and increasing the detection accuracy rate. Experimental results on artificial data and relevant scene data show that the proposed detection method can effectively identify slope collapse, demonstrating its applicative value and bright prospects.
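The abstract does not specify which foreground motion detector is used; the simplest baseline is frame differencing, sketched here under that assumption (the threshold value is illustrative):

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Foreground motion detection by frame differencing: pixels whose
    absolute intensity change exceeds `threshold` are flagged as moving."""
    # promote to signed ints so the subtraction cannot wrap around
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return diff > threshold
```

Restricting this mask to the segmented slope area, as the pipeline above does, suppresses motion outside the region of interest before the recognition network runs.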
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT [NRF-2017R1E1A1A01077717].
Abstract: Many existing techniques for acquiring dual-energy X-ray absorptiometry (DXA) images are unable to accurately distinguish between bone and soft tissue. For the most part, this failure stems from bone shape variability, noise and low contrast in DXA images, inconsistent X-ray beam penetration producing shadowing effects, and person-to-person variation. This work explores the feasibility of using state-of-the-art deep learning semantic segmentation models, fully convolutional networks (FCNs), SegNet, and U-Net, to distinguish femur bone from soft tissue. We investigated the performance of the deep learning algorithms with reference to some of our previously applied conventional image-segmentation techniques (i.e., a decision-tree-based method using a pixel-label decision tree [PLDT] and another method using Otsu's thresholding) for femur DXA images, and we measured accuracy based on the average Jaccard index, sensitivity, and specificity. Deep learning models using SegNet, U-Net, and an FCN achieved average segmentation accuracies of 95.8%, 95.1%, and 97.6%, respectively, compared to PLDT (91.4%) and Otsu's thresholding (72.6%). Thus we conclude that an FCN outperforms the other deep learning and conventional techniques when segmenting femur bone from soft tissue in DXA images. Accurate femur segmentation improves bone mineral density computation, which in turn enhances the diagnosis of osteoporosis.
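The three metrics named above (Jaccard index, sensitivity, specificity) all derive from the confusion counts of two binary masks. A minimal sketch (the function name and dict layout are our own):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Jaccard index, sensitivity, and specificity for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # predicted bone, is bone
    fp = np.logical_and(pred, ~truth).sum()   # predicted bone, is soft tissue
    fn = np.logical_and(~pred, truth).sum()   # missed bone
    tn = np.logical_and(~pred, ~truth).sum()  # correctly rejected background
    return {
        "jaccard": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

Reporting all three together is informative precisely because a model can trade missed bone (low sensitivity) against leaked soft tissue (low specificity) while keeping one number high.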
Funding: funding this work through the Research Group Program under Grant Number (R.G.P.2/382/44).
Abstract: Many plant species have a startling degree of morphological similarity, making it difficult to split and categorize them reliably. Unknown plant species can be challenging to classify and segment using deep learning. While deep learning architectures have helped improve classification accuracy, the resulting models often lack flexibility and require a large dataset to train. For the sake of taxonomy, this research proposes a hybrid method for categorizing guava, potato, and java plum leaves. Two new approaches are used to form the hybrid model suggested here. The guava, potato, and java plum plant species have been successfully segmented using the first model, built on the MobileNetV2-UNET architecture. As a second model, we use a Plant Species Detection Stacking Ensemble Deep Learning Model (PSD-SE-DLM) to identify potato, java plum, and guava. The proposed models were trained using data collected in Punjab, Pakistan, consisting of images of healthy and sick leaves from guava, java plum, and potato plants. These datasets are known as PLSD and PLSSD. Accuracy levels of 99.84% and 96.38% were achieved for the suggested PSD-SE-DLM and MobileNetV2-UNET models, respectively.
Abstract: In an urban city, the daily challenge of managing cleanliness is a primary aspect of routine life, requiring a large number of resources, manual labour, and budget. Street-cleaning techniques include street sweepers travelling to different metropolitan areas, manually verifying whether a street requires cleaning, and taking action. This research presents a novel robotic navigation technique for recognizing street garbage by detecting the city's street-level images and applying multi-level segmentation. For such large-volume processing, deep learning-based methods can achieve a higher level of classification, object detection, and accuracy than other learning algorithms. The proposed approach uses the Histogram of Oriented Gradients (HOG) to extract features, while the deep learning technique classifies the images in the ground-level segmentation process. In this paper, we use mobile edge computing to process street images in advance and filter out pictures that meet our needs, which significantly improves recognition efficiency. To measure the cleanliness of urban streets, our street-cleanliness assessment approach provides a multi-level assessment model across different layers. Besides, with ground-level segmentation using a deep neural network, a novel navigation strategy is proposed for robotic classification. The Single Shot MultiBox Detector (SSD) discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature-map location. The SSD can classify and detect garbage accurately and autonomously by using deep learning for garbage recognition. Experimental results show that accurate street garbage detection and navigation can reach approximately the same cleaning effectiveness as traditional methods.
Funding: This research was financially supported in part by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D Program (Project No. P0016038), and in part by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2016-0-00312), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Abstract: An automated system is proposed for the detection and classification of gastrointestinal (GI) abnormalities. The proposed method operates as a two-stage pipeline: (a) segmentation of the bleeding infection region and (b) classification of GI abnormalities by deep learning. First, the bleeding region is segmented using a hybrid approach: a threshold is applied to each channel extracted from the original RGB image, and all channels are then merged through mutual-information and pixel-based techniques, yielding the segmented image. Texture and deep learning features are extracted for the classification task. A transfer learning (TL) approach is used to extract the deep features, and the Local Binary Pattern (LBP) method is used for the texture features. An entropy-based feature-selection approach is then implemented to select the best features from both the deep learning and texture vectors. The selected optimal features are combined with a serial-based technique, and the resulting vector is fed to an ensemble learning classifier. The experimental evaluation uses two datasets: a private dataset and KVASIR. The accuracy achieved is 99.8 percent for the private dataset and 86.4 percent for the KVASIR dataset. This confirms that the proposed method is effective in detecting and classifying GI abnormalities and exceeds the compared methods.
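The LBP texture descriptor named above encodes each pixel by comparing it with its neighbours. A minimal 8-neighbour sketch in NumPy (the bit ordering is one common convention; the paper's exact LBP variant, radius, and histogram step are not specified here):

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour Local Binary Pattern for a grayscale image.
    Each interior pixel gets a byte whose bits mark neighbours >= centre."""
    img = image.astype(np.int32)
    c = img[1:-1, 1:-1]                       # interior (centre) pixels
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.int32) << bit)
    return codes
```

In a pipeline like the one above, the histogram of these codes over a region serves as the texture feature vector that is later fused with the deep features.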
Abstract: Every day, websites and personal archives create more and more photos; the size of these archives is immeasurable. The ease of use of these huge digital image collections contributes to their popularity. However, not all of these collections provide relevant indexing information, and as a result it is difficult to discover the data a user may be interested in. Therefore, in order to determine the significance of the data, it is important to identify the contents in an informative manner. Image annotation is one of the most problematic domains in multimedia research and computer vision. Hence, in this paper, an Adaptive Convolutional Deep Learning Model (ACDLM) is developed for automatic image annotation. Initially, the databases are collected from open-source systems, consisting of some labelled images (for the training phase) and some unlabelled images {Corel 5K, MSRC v2}. After that, the images are sent to pre-processing steps such as colour-space quantization and texture colour class mapping. The pre-processed images are sent to the segmentation approach for an efficient labelling technique using J-image segmentation (JSEG). The final step is automatic annotation using the ACDLM, which is a combination of a Convolutional Neural Network (CNN) and the Honey Badger Algorithm (HBA). Based on the proposed classifier, the unlabelled images are labelled. The proposed methodology is implemented in MATLAB, and its performance is evaluated by metrics such as accuracy, precision, recall, and F1-measure.
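Colour-space quantization, the first pre-processing step above, maps each channel onto a small set of representative values. A uniform-quantization sketch for illustration (the actual quantizer used with JSEG is not specified in the abstract):

```python
import numpy as np

def quantize_colors(image, levels=4):
    """Uniform colour-space quantisation: map each 8-bit channel onto
    `levels` evenly spaced values, shrinking the palette before segmentation."""
    step = 256 // levels
    bins = np.clip(image // step, 0, levels - 1)       # bin index per channel
    return (bins * step + step // 2).astype(np.uint8)  # bin-centre value
```

Reducing 256 intensities per channel to a handful of bins gives the downstream segmenter far fewer distinct colours to group into regions.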
Funding: Supported by the National Natural Science Foundation of China (12090020, 12090025) and the Zhejiang Provincial Natural Science Foundation of China (LSD19H180005).
Abstract: Automatic segmentation of the liver and hepatic lesions from abdominal 3D computed tomography (CT) images is a fundamental task in computer-assisted liver surgery planning. However, due to complex backgrounds, ambiguous boundaries, heterogeneous appearances, and the highly varied shape of the liver, accurate liver segmentation and tumor detection are still challenging problems. To address these difficulties, we propose an automatic segmentation framework based on a 3D U-net with dense connections and globally optimized refinement. First, a deep U-net architecture with dense connections is trained to learn the probability map of the liver. The probability map then goes into the following refinement step as the initial surface and prior shape. The segmentation of liver tumors is based on a similar network architecture, aided by the liver segmentation results. To reduce the influence of surrounding tissues whose intensity and texture behavior is similar to the tumor region, the product of the image and the liver label (I × liver label) is used as the network input for liver tumor segmentation during training. By doing this, the accuracy of segmentation can be improved. The proposed method is fully automatic, without any user interaction. Both qualitative and quantitative results reveal that the proposed approach is efficient and accurate for liver volume estimation in clinical applications. The high correlation between the automatic and manual references shows that the proposed method can be good enough to replace the time-consuming and non-reproducible manual segmentation method.
Funding: This research has been partially supported by the National Science Foundation under Grant IIS-1115417, the National Natural Science Foundation of China under Grants 61728205 and 61876217, the "double first-class" international cooperation and development scientific research project of Changsha University of Science and Technology (No. 2018IC25), and the Science and Technology Development Project of Suzhou under Grants SZS201609 and SYG201707.
Abstract: Accurate segmentation of CT images of liver tumors is an important adjunct for liver diagnosis and the treatment of liver diseases. In recent years, thanks to great improvements in hardware, many deep learning-based methods have been proposed for automatic liver segmentation. Among them are the plain neural networks headed by FCN and the residual neural networks headed by ResNet, both of which have many variations and have achieved certain successes in medical image segmentation. In this paper, we first select five representative structures, i.e., FCN, U-Net, SegNet, ResNet, and DenseNet, to investigate their performance on liver segmentation. Since the original ResNet and DenseNet cannot perform image segmentation directly, we make some adjustments to them so that they can perform liver segmentation. Our experimental results show that DenseNet performs the best on liver segmentation, followed by ResNet; both perform much better than SegNet, U-Net, and FCN. Among SegNet, U-Net, and FCN, U-Net performs the best, followed by SegNet; FCN performs the worst.
Funding: The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia, for funding this research work through project number DRI-KSU-415.
Abstract: The accurate segmentation of retinal vessels is a challenging task due to the presence of various pathologies as well as the low contrast of thin vessels and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation at the cost of high computational complexity. To address the aforementioned challenges and to reduce the computational complexity, we propose a lightweight convolutional neural network (CNN)-based encoder-decoder deep learning model for accurate retinal vessel segmentation. The proposed deep learning model consists of an encoder-decoder architecture along with bottleneck layers that consist of depth-wise squeezing, followed by full convolution, and finally depth-wise stretching. The inspiration for the proposed model is taken from the recently developed Anam-Net model, which was tested on CT images for COVID-19 identification. For our lightweight model, we used a stack of two 3 × 3 convolution layers (without spatial pooling in between) instead of a single 3 × 3 convolution layer, as proposed in Anam-Net, to increase the receptive field and to reduce the trainable parameters. The proposed method includes fewer filters in all convolutional layers than the original Anam-Net and does not increase the number of filters as resolution decreases. These modifications do not compromise segmentation accuracy, but they do make the architecture significantly lighter in terms of the number of trainable parameters and computation time. The proposed architecture has comparatively fewer parameters (1.01M) than Anam-Net (4.47M), U-Net (31.05M), SegNet (29.50M), and most other recent works. The proposed model does not require any problem-specific pre- or post-processing, nor does it rely on handcrafted features. In addition, being efficient in terms of segmentation accuracy as well as lightweight makes the proposed method a suitable candidate for use in screening platforms at the point of care. We evaluated our proposed model on the open-access datasets DRIVE, STARE, and CHASE_DB. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, the fully convolutional network (FCN), SegNet, CCNet, ResWNet, the residual connection-based encoder-decoder network (RCED-Net), and the scale-space approximation network (SSANet), in terms of {Dice coefficient, sensitivity (SN), accuracy (ACC), and area under the ROC curve (AUC)}, with scores of {0.8184, 0.8561, 0.9669, and 0.9868} on the DRIVE dataset, {0.8233, 0.8581, 0.9726, and 0.9901} on the STARE dataset, and {0.8138, 0.8604, 0.9752, and 0.9906} on the CHASE_DB dataset. Additionally, we performed cross-training experiments on the DRIVE and STARE datasets. The results of this experiment indicate the generalization ability and robustness of the proposed model.
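The parameter arithmetic behind stacking two 3 × 3 convolutions can be checked directly: the stack sees a 5 × 5 region (the classic VGG-style argument) while costing fewer weights than one 5 × 5 layer. The channel count `C = 64` below is an assumed example, not a figure from the paper:

```python
def conv_params(k, c_in, c_out, bias=True):
    """Trainable parameters of a single k x k convolution layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

# Two stacked 3x3 convolutions (C channels throughout) have a 5x5
# effective receptive field, matching a single 5x5 convolution while
# using fewer trainable parameters.
C = 64
stacked_3x3 = 2 * conv_params(3, C, C)  # two 3x3 layers back to back
single_5x5 = conv_params(5, C, C)       # one 5x5 layer
assert stacked_3x3 < single_5x5
```

The extra nonlinearity between the two 3 × 3 layers is a further benefit of the stacked form over a single larger kernel.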