Artificial Intelligence (AI) is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy (VTDR), which is a leading cause of visual impairment and blindness worldwide. However, previous automated VTDR detection methods have mainly relied on manual feature extraction and classification, leading to errors. This paper proposes a novel VTDR detection and classification model that combines different models through majority voting. Our proposed methodology involves preprocessing, data augmentation, feature extraction, and classification stages. We use a hybrid convolutional neural network-singular value decomposition (CNN-SVD) model for feature extraction and selection and an improved SVM-RBF with a Decision Tree (DT) and K-Nearest Neighbor (KNN) for classification. We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%, a sensitivity of 83.67%, and a specificity of 100% for DR detection and evaluation tests, respectively. Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
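As a minimal illustration of the pipeline this abstract describes (CNN features reduced by SVD, then majority voting over SVM-RBF, DT, and KNN), the sketch below uses scikit-learn; it is not the authors' implementation. The `features` array stands in for CNN-extracted feature vectors, and `TruncatedSVD` stands in for the SVD-based selection step.

```python
# Hedged sketch: majority-vote ensemble over SVD-reduced CNN features (illustrative only).
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))      # placeholder CNN feature vectors
labels = rng.integers(0, 2, size=200)       # 0 = no VTDR, 1 = VTDR

model = make_pipeline(
    TruncatedSVD(n_components=64, random_state=0),   # SVD-based feature reduction/selection
    VotingClassifier(
        estimators=[
            ("svm_rbf", SVC(kernel="rbf", gamma="scale")),
            ("dt", DecisionTreeClassifier(max_depth=8)),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
        ],
        voting="hard",                               # majority voting
    ),
)
model.fit(features, labels)
print(model.score(features, labels))
```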
Diabetic retinopathy (DR) is one of the most important causes of visual impairment. Automatic recognition of DR lesions, such as hard exudates (EXs), in retinal images can contribute to the diagnosis and screening of the disease. To achieve this goal, an automatic detection approach based on improved FCM (IFCM) and support vector machines (SVM) was established and studied. Firstly, color fundus images were segmented by IFCM to obtain candidate EX regions. Then, an SVM classifier trained on an optimal feature subset judged these candidate regions, so that hard exudates were detected from the fundus images. Our database was composed of 126 images with variable color, brightness, and quality; 70 of them were used to train the SVM and the remaining 56 to assess the performance of the method. Using a lesion-based criterion, we achieved a mean sensitivity of 94.65% and a mean positive predictive value of 97.25%. With an image-based criterion, our approach reached a 100% mean sensitivity, 96.43% mean specificity, and 98.21% mean accuracy. Furthermore, the average time cost for processing an image is 4.56 s. The results suggest that the proposed method can efficiently detect EXs from color fundus images and that it could be a diagnostic aid for ophthalmologists in DR screening.
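The sketch below illustrates the two-stage idea (fuzzy clustering to obtain candidate bright regions, then an SVM to confirm them); it is a simplified stand-in, not the paper's IFCM. A plain fuzzy c-means on pixel intensities is implemented from scratch, and the region features and labels are random placeholders.

```python
# Minimal sketch: fuzzy c-means candidate segmentation followed by an SVM check (illustrative).
import numpy as np
from sklearn.svm import SVC

def fuzzy_cmeans(x, c=3, m=2.0, iters=50, seed=0):
    """Basic fuzzy c-means on a 1-D feature vector (e.g., green-channel intensities)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

intensities = np.random.default_rng(1).random(10_000)   # stand-in for fundus pixel intensities
centers, memberships = fuzzy_cmeans(intensities)
bright_cluster = int(np.argmax(centers))                 # exudates appear as bright pixels
candidates = memberships[:, bright_cluster] > 0.8        # candidate exudate pixels

# Second stage: SVM on hand-crafted region features (size, mean intensity, ...); random here.
region_features = np.random.default_rng(2).random((60, 5))
region_labels = np.random.default_rng(3).integers(0, 2, 60)
clf = SVC(kernel="rbf").fit(region_features, region_labels)
print(candidates.sum(), clf.predict(region_features[:5]))
```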
In recent years, there has been a significant increase in the number of people suffering from eye illnesses, which should be treated as soon as possible in order to avoid blindness. Retinal fundus images are employed for this purpose, as well as for analysing eye abnormalities and diagnosing eye illnesses. Exudates can be recognised as bright lesions in fundus images and can be the first indicator of diabetic retinopathy. With that in mind, the purpose of this work is to create an Integrated Model for Exudate and Diabetic Retinopathy Diagnosis (IM-EDRD) with multi-level classifications. The model uses Support Vector Machine (SVM)-based classification to separate normal and abnormal fundus images at the first level. The input images for the SVM are pre-processed with Green Channel Extraction, and the retrieved features are based on the Gray Level Co-occurrence Matrix (GLCM). The presence of Exudate and Diabetic Retinopathy (DR) in fundus images is then detected using the Adaptive Neuro Fuzzy Inference System (ANFIS) classifier at the second level of classification. Exudate detection, blood vessel extraction, and Optic Disc (OD) detection are all processed to achieve suitable results. The second-level processing also comprises Morphological Component Analysis (MCA)-based image enhancement and object segmentation, as well as feature extraction for training the ANFIS classifier, to reliably diagnose DR. The findings reveal that the proposed model surpasses existing models in terms of accuracy, time efficiency, and precision rate with the lowest possible error rate.
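The first-level classifier described above (GLCM texture features from the green channel fed to an SVM) can be sketched as follows; the distances, angles, and properties chosen here are assumptions, and the images and labels are random stand-ins.

```python
# Hedged sketch: green-channel GLCM features + SVM for normal vs. abnormal screening.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(rgb_image):
    green = rgb_image[:, :, 1]                      # green channel extraction
    glcm = graycomatrix(green, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(20, 64, 64, 3), dtype=np.uint8)   # fake fundus crops
labels = rng.integers(0, 2, size=20)                                  # 0 = normal, 1 = abnormal
X = np.array([glcm_features(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```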
Use of deep learning algorithms for the investigation and analysis of medical images has emerged as a powerful technique. The increase in retinal diseases is alarming, as they may lead to permanent blindness if left untreated. Automation of the diagnosis process for retinal diseases not only assists ophthalmologists in correct decision-making but also saves time. Several researchers have worked on automated retinal disease classification but were restricted either to hand-crafted feature selection or to binary classification. This paper presents a deep learning-based approach for the automated classification of multiple retinal diseases using fundus images. For this research, the data has been collected and combined from three distinct sources. The images are preprocessed to enhance the details. Six layers of a convolutional neural network (CNN) are used for the automated feature extraction and classification of 20 retinal diseases. It is observed that the results depend on the number of classes. For binary classification (healthy vs. unhealthy), up to 100% accuracy has been achieved. When 16 classes are used (treating stages of a disease as a single class), 93.3% accuracy, 92% sensitivity, and 93% specificity have been obtained. For 20 classes (treating stages of the disease as separate classes), the accuracy, sensitivity, and specificity drop to 92.4%, 92%, and 92%, respectively.
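A six-convolutional-layer classifier of the kind mentioned above might look like the PyTorch sketch below; the layer widths, pooling scheme, and 224x224 input size are assumptions, not the paper's exact design.

```python
# Minimal sketch of a six-conv-layer CNN for N retinal disease classes (illustrative).
import torch
import torch.nn as nn

class SixLayerCNN(nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        widths = [3, 16, 32, 64, 64, 128, 128]
        blocks = []
        for c_in, c_out in zip(widths[:-1], widths[1:]):
            blocks += [nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
        self.features = nn.Sequential(*blocks)                   # six conv blocks
        self.classifier = nn.Linear(128 * 3 * 3, num_classes)    # matches 224x224 inputs

    def forward(self, x):
        return self.classifier(torch.flatten(self.features(x), 1))

logits = SixLayerCNN(num_classes=20)(torch.randn(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 20])
```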
The objective of this paper is to provide a general view of automatic cup-to-disc ratio (CDR) assessment in fundus images. Glaucoma ranks second among ocular diseases as a cause of blindness. Vision loss caused by glaucoma cannot be reversed, but it may be avoided if the disease is screened in its early stage. Thus, early screening of glaucoma is essential to preserve vision and maintain quality of life. Optic nerve head (ONH) assessment is a useful and practical technique among current glaucoma screening methods. Vertical CDR, as one of the clinical indicators for ONH assessment, has been widely used by clinicians and professionals for the analysis and diagnosis of glaucoma. The key to automatic calculation of vertical CDR in fundus images is the segmentation of the optic cup (OC) and optic disc (OD). We briefly describe methodologies for OC and OD segmentation and comprehensively present them from two aspects: hand-crafted features and deep learning features. Sliding-window regression, super-pixel level methods, image reconstruction, super-pixel level low-rank representation (LRR), and deep learning methodologies for segmentation of the OD and OC are presented. It is hoped that this paper can provide guidance and bring inspiration to other researchers. Every mentioned method has its advantages and limitations, and an appropriate method should be selected or explored according to the actual situation. For automatic glaucoma screening, CDR reflects only a small part of the disc; utilizing comprehensive factors or multimodal images is a promising future direction to further enhance performance.
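Once OD and OC masks are available, by whatever segmentation method, the vertical CDR itself is a simple ratio of vertical extents. The sketch below works through that calculation on synthetic circular masks.

```python
# Worked sketch: vertical cup-to-disc ratio from binary OD/OC masks (synthetic masks).
import numpy as np

def vertical_extent(mask):
    rows = np.where(mask.any(axis=1))[0]
    return 0 if rows.size == 0 else rows[-1] - rows[0] + 1

def vertical_cdr(disc_mask, cup_mask):
    return vertical_extent(cup_mask) / max(vertical_extent(disc_mask), 1)

yy, xx = np.mgrid[:200, :200]
disc = (yy - 100) ** 2 + (xx - 100) ** 2 < 80 ** 2   # synthetic optic disc
cup = (yy - 100) ** 2 + (xx - 100) ** 2 < 40 ** 2    # synthetic optic cup
print(round(vertical_cdr(disc, cup), 2))             # ~0.5; a large CDR suggests glaucoma risk
```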
Cataract is the leading cause of visual impairment globally. The scarcity and uneven distribution of ophthalmologists seriously hinder early visual impairment grading for cataract patients in the clinic. In this study, a deep learning-based automated grading system for visual impairment in cataract patients is proposed using a multi-scale efficient channel attention convolutional neural network (MECA_CNN). First, the efficient channel attention mechanism is applied in the MECA_CNN to extract multi-scale features of fundus images, which can effectively focus on lesion-related regions. Then, asymmetric convolutional modules are embedded in the residual unit to reduce the information loss of fine-grained features in fundus images. In addition, an asymmetric loss function is applied to address the higher false-negative rate and weak generalization ability caused by the imbalanced dataset. A total of 7299 fundus images derived from two clinical centers are employed to develop and evaluate the MECA_CNN for identifying mild visual impairment caused by cataract (MVICC), moderate to severe visual impairment caused by cataract (MSVICC), and normal samples. The experimental results demonstrate that the MECA_CNN provides clinically meaningful performance for visual impairment grading on the internal test dataset: MVICC (accuracy, sensitivity, and specificity of 91.3%, 89.9%, and 92%), MSVICC (93.2%, 78.5%, and 96.7%), and normal samples (98.1%, 98.0%, and 98.1%). Comparable performance is achieved on the external test dataset, further verifying the effectiveness and generalizability of the MECA_CNN model. This study provides a deep learning-based practical system for the automated grading of visual impairment in cataract patients, facilitating the timely formulation of treatment strategies and improving patients' vision prognosis.
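An efficient channel attention (ECA) block of the general kind the MECA_CNN builds on can be sketched in PyTorch as below; the kernel size and placement in the network are assumptions, and this is not the authors' multi-scale module.

```python
# Hedged sketch of an efficient channel attention (ECA) block (illustrative).
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                     # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                # global average pooling -> (B, C)
        y = self.conv(y.unsqueeze(1))         # 1-D conv across channels -> (B, 1, C)
        w = self.sigmoid(y).squeeze(1)        # channel weights in [0, 1]
        return x * w[:, :, None, None]        # re-weight feature maps channel-wise

feat = torch.randn(2, 64, 32, 32)
print(ECABlock()(feat).shape)                 # torch.Size([2, 64, 32, 32])
```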
This research focuses on the automatic detection and grading of microaneurysms in fundus images of diabetic retinopathy using artificial intelligence deep learning algorithms. By integrating multi-source fundus image data and applying a rigorous preprocessing workflow, a hybrid deep learning model architecture combining a modified U-Net and a residual neural network was adopted for the study. The experimental results show that the model achieved an accuracy of [X]% in microaneurysm detection, with a recall rate of [Y]% and a precision rate of [Z]%. In terms of grading diabetic retinopathy, the Cohen's kappa coefficient for agreement with clinical grading was [K], and there were specific sensitivities and specificities for each grade. Compared with traditional methods, this model has significant advantages in processing speed and result consistency. However, it also has limitations such as insufficient data diversity, difficulties for the algorithm in detecting microaneurysms in severely hemorrhagic images, and high computational costs. The results of this research are of great significance for the early screening of diabetic retinopathy and for clinical diagnosis decision support. In the future, it is necessary to further optimize the data and algorithms and promote clinical integration and telemedicine applications.
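The agreement metric mentioned above, Cohen's kappa between model grades and clinical grades, can be computed directly with scikit-learn; the grade vectors below are toy values, not the study's data.

```python
# Sketch: Cohen's kappa between model-assigned and clinician-assigned DR grades (toy data).
from sklearn.metrics import cohen_kappa_score

model_grades     = [0, 1, 2, 2, 3, 4, 1, 0, 2, 3]
clinician_grades = [0, 1, 2, 3, 3, 4, 1, 0, 1, 3]
print(round(cohen_kappa_score(model_grades, clinician_grades), 3))
# For ordinal grades, a quadratically weighted kappa is also commonly reported:
print(round(cohen_kappa_score(model_grades, clinician_grades, weights="quadratic"), 3))
```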
Glaucoma is a prevalent cause of blindness worldwide. If not treated promptly, it can cause vision and quality of life to deteriorate. According to statistics, glaucoma affects approximately 65 million individuals globally. Fundus image segmentation depends on the optic disc (OD) and optic cup (OC). This paper proposes a computational model to segment and classify retinal fundus images for glaucoma detection. Different data augmentation techniques were applied to prevent overfitting, and several data pre-processing approaches were employed to improve the image quality and achieve high accuracy. The segmentation models are based on an attention U-Net with three separate convolutional neural network (CNN) backbones: Inception-v3, Visual Geometry Group 19 (VGG19), and residual neural network 50 (ResNet50). The classification models also employ modified versions of the above three CNN architectures. Using the RIM-ONE dataset, the attention U-Net with the ResNet50 model as the encoder backbone achieved the best accuracy of 99.58% in segmenting the OD. Among the evaluated models, Inception-v3 had the highest glaucoma-classification accuracy of 98.79%, followed by the other modified classification architectures.
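At the core of an attention U-Net decoder is an attention gate that re-weights skip-connection features using decoder features. The PyTorch sketch below shows that gating idea in isolation; channel counts are illustrative, and equal spatial sizes are assumed for brevity.

```python
# Hedged sketch of an attention-gate module as used in attention U-Net decoders.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, 1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, skip, gate):            # same spatial size assumed here
        att = self.psi(torch.relu(self.w_skip(skip) + self.w_gate(gate)))
        return skip * att                     # suppress irrelevant skip-connection features

skip = torch.randn(1, 64, 56, 56)             # encoder (e.g., ResNet50 stage) features
gate = torch.randn(1, 128, 56, 56)            # decoder features at the same resolution
print(AttentionGate(64, 128, 32)(skip, gate).shape)   # torch.Size([1, 64, 56, 56])
```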
AIM: To summarize the application of deep learning in detecting ophthalmic disease with ultra-widefield fundus images and to analyze the advantages, limitations, and possible solutions common to all tasks. METHODS: We searched three academic databases, PubMed, Web of Science, and Ovid, up to August 2022. We matched and screened according to the target keywords and publication year, retrieving a total of 4358 research papers, of which 23 studies applied deep learning to diagnosing ophthalmic disease with ultra-widefield images. RESULTS: Deep learning on ultra-widefield images can detect various ophthalmic diseases with strong performance, including diabetic retinopathy, glaucoma, age-related macular degeneration, retinal vein occlusions, retinal detachment, and other peripheral retinal diseases. Compared to conventional fundus images, ultra-widefield fundus scanning laser ophthalmoscopy captures up to 200° of the ocular fundus in a single exposure, allowing more of the retina to be observed. CONCLUSION: The combination of ultra-widefield fundus images and artificial intelligence will achieve great performance in diagnosing multiple ophthalmic diseases in the future.
The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish three prediction tasks using a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our proposal. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
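The three image-quality metrics used for validation above can be computed with scikit-image as in the sketch below; the arrays are random stand-ins for a predicted FFA frame and its ground truth.

```python
# Sketch: PSNR, SSIM, and MSE between a predicted FFA frame and its ground truth (toy arrays).
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

rng = np.random.default_rng(0)
truth = rng.random((256, 256)).astype(np.float32)
pred = np.clip(truth + 0.05 * rng.normal(size=truth.shape).astype(np.float32), 0.0, 1.0)

print("PSNR:", peak_signal_noise_ratio(truth, pred, data_range=1.0))
print("SSIM:", structural_similarity(truth, pred, data_range=1.0))
print("MSE :", mean_squared_error(truth, pred))
```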
A cataract is one of the most significant eye problems worldwide; it does not immediately impair vision but progressively worsens over time. Automatic cataract prediction based on various imaging technologies has been addressed recently, for example in smartphone apps used for remote health monitoring and eye treatment. In recent years, advances in diagnosis, prediction, and clinical decision support using Artificial Intelligence (AI) in medicine and ophthalmology have been exponential. Due to privacy concerns, a lack of data makes applying artificial intelligence models in the medical field challenging. To address this issue, a federated learning framework named CDFL, based on a VGG16 deep neural network model, is proposed in this research. The study collects data from the Ocular Disease Intelligent Recognition (ODIR) database containing 5,000 patient records. The significant features are extracted and normalized using the min-max normalization technique. In the federated learning-based technique, the VGG16 model is trained individually on each client's dataset, and the global model is updated after receiving model updates from the two clients. The suggested method trains the local model before transferring its attributes to the global model, and the global model subsequently improves after integrating the new parameters. Every client analyses the results over three rounds to decrease the over-fitting problem. The experimental results show the effectiveness of the federated learning-based technique on a Deep Neural Network (DNN), reaching 95.28% accuracy while also preserving the privacy of patients' data. The experiments demonstrate that the suggested federated learning model outperforms other traditional methods, achieving a client 1 accuracy of 95.0% and a client 2 accuracy of 96.0%.
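The communication pattern described above (clients train locally, the server aggregates their parameters into a global model) is sketched below in PyTorch with FedAvg-style averaging; a tiny linear model stands in for VGG16, and the data, round count, and hyperparameters are placeholders.

```python
# Minimal FedAvg-style sketch: two clients, local training, server-side parameter averaging.
import copy
import torch
import torch.nn as nn

def local_update(model, data, target, epochs=1, lr=0.01):
    model = copy.deepcopy(model)                       # client copy of the global model
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), target).backward()
        opt.step()
    return model.state_dict()

def fed_avg(states):
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(10, 2)                        # stand-in for the shared VGG16
clients = [(torch.randn(32, 10), torch.randint(0, 2, (32,))) for _ in range(2)]
for _ in range(3):                                     # three communication rounds
    states = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fed_avg(states))
print("federated training rounds complete")
```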
AIM: To report the surgical results of pars plana vitrectomy (PPV) with air tamponade for rhegmatogenous retinal detachment (RRD) assessed by an ultra-widefield fundus imaging system. METHODS: Twenty-five consecutive patients (25 eyes) with fresh primary RRD, a causative retinal break, and vitreous traction were included. All patients underwent PPV with air tamponade. Visual acuity (VA) was examined postoperatively and images were captured by an ultra-widefield scanning laser ophthalmoscope system (Optos). RESULTS: Initial reattachment was achieved in 25 cases (100%). The air volume was >60% on postoperative day (POD) 1. The ultra-widefield images showed that the retina was reattached in all air-filled eyes postoperatively. The retinal break and surrounding laser burns in the superior retina were detected in 22 of 25 eyes (88%). A missed retinal hole was found under the intravitreal air bubble in 1 case (4%). The air volume ranged from 40% to 60% on POD 3. A double-layered image was seen in all 25 eyes with intravitreal gas, and retinal breaks with surrounding laser burns were seen within the intravitreal air. On POD 7, a small bubble without tamponade effect was seen in 6 cases (24%), the bubble had completely disappeared in 4 cases (16%), and a small oval bubble in the superior area was observed in 15 cases (60%). There were no missed or new retinal breaks and no retinal detachment in any case on POD 14, at 1 month, and at the last follow-up. Air disappeared completely at a mean of 9.84 days postoperatively. The mean final postoperative best-corrected visual acuity (BCVA) was 0.35 logMAR, a significant improvement relative to the mean preoperative value (P<0.05). A final VA of 0.3 logMAR or better was seen in 13 eyes. CONCLUSION: PPV with air tamponade is an effective management for fresh RRD with superior retinal breaks. Ultra-widefield fundus imaging can detect postoperative retinal breaks in air-filled eyes and is a useful facility for follow-up after PPV with air tamponade. The duration of face-down positioning may be shortened and visual rehabilitation achieved sooner.
Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has been a promising technique in fundus imaging with growing popularity. This review first gives a brief history of adaptive optics (AO) and AO-SLO. It then compares AO-SLO with conventional imaging methods (fundus fluorescein angiography, fundus autofluorescence, indocyanine green angiography, and optical coherence tomography) and with other AO techniques (adaptive optics flood-illumination ophthalmoscopy and adaptive optics optical coherence tomography). Furthermore, an update on the current state of AO-SLO research is given for different fundus structures: photoreceptors (cones and rods), fundus vessels, the retinal pigment epithelium layer, the retinal nerve fiber layer, the ganglion cell layer, and the lamina cribrosa. Finally, this review indicates possible future research directions for AO-SLO.
Today, many eye diseases jeopardize our everyday lives, such as Diabetic Retinopathy (DR), Age-related Macular Degeneration (AMD), and Glaucoma. Glaucoma is an incurable and unavoidable eye disease that damages the optic nerve and degrades vision and quality of life. Classification of Glaucoma has been an active field of research for the past ten years. Several approaches for Glaucoma classification have been established, ranging from conventional segmentation and feature-extraction methods to deep-learning techniques such as Convolutional Neural Networks (CNN). In contrast to hand-crafted pipelines, a CNN classifies the input images directly, extracting features with tuned convolution and pooling layers. However, the volume of the training dataset determines the performance of a CNN; when the model is trained with small datasets, overfitting issues arise. CNNs have therefore been developed with transfer learning. The primary aim of this study is to explore the potential of EfficientNet with transfer learning for the classification of Glaucoma. The performance of the current work is compared with other models, namely VGG16, InceptionV3, and Xception, using public datasets such as RIM-ONE V2 & V3, ORIGA, DRISHTI-GS1, HRF, and ACRIMA. The dataset has been split into training, validation, and testing sets with a ratio of 70:15:15. The assessment on the test dataset shows that the pre-trained EfficientNetB4 achieved the highest performance compared to the other models listed above. The proposed method achieved 99.38% accuracy and also better results on other metrics, such as sensitivity, specificity, precision, F1_score, Kappa score, and Area Under Curve (AUC), compared to the other models.
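A transfer-learning setup of this kind, a pre-trained EfficientNet-B4 backbone with a new glaucoma/normal head, can be sketched with torchvision as below; weights=None keeps the snippet offline (in practice ImageNet weights would be loaded), and freezing the whole backbone is an assumption.

```python
# Hedged sketch: EfficientNet-B4 transfer learning with a new two-class head.
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b4(weights=None)      # ImageNet weights would be used in practice
for p in model.parameters():                      # freeze the backbone
    p.requires_grad = False
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, 2)   # new trainable head: glaucoma vs. normal

logits = model(torch.randn(1, 3, 380, 380))       # B4's nominal input resolution
print(logits.shape)                               # torch.Size([1, 2])
```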
Diabetes is a serious health condition that can cause several issues in human body organs such as the heart and kidneys, as well as a serious eye disease called diabetic retinopathy (DR). Early detection and treatment are crucial to prevent complete blindness or partial vision loss. Traditional detection methods, which involve ophthalmologists examining retinal fundus images, are subjective, expensive, and time-consuming. Therefore, this study employs artificial intelligence (AI) technology to perform faster and more accurate binary classification and determine the presence of DR. In this regard, we employed three promising machine learning models, namely support vector machine (SVM), k-nearest neighbors (KNN), and Histogram Gradient Boosting (HGB), after carefully selecting features using transfer learning on the fundus images of the Asia Pacific Tele-Ophthalmology Society (APTOS) dataset (a standard benchmark), which includes 3662 images and originally categorizes DR into five levels, here simplified to a binary format: No DR and DR (Classes 1-4). The results demonstrate that the SVM model outperformed the other approaches in the literature on the same dataset, achieving an excellent accuracy of 96.9%, compared to 95.6% for both the KNN and HGB models. This approach has been evaluated by medical health professionals, offers a valuable pathway for the early detection of DR, and can be successfully employed as a clinical decision support system.
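The three classifiers compared in this study are all available in scikit-learn; the sketch below trains them on stand-in feature vectors of the kind produced by transfer learning, purely to show the comparison loop.

```python
# Sketch: comparing SVM, KNN, and Histogram Gradient Boosting on transfer-learned features.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 256))        # CNN-derived features per fundus image (placeholder)
y = rng.integers(0, 2, size=400)       # 0 = No DR, 1 = DR (classes 1-4 merged)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("HGB", HistGradientBoostingClassifier())]:
    clf.fit(X_tr, y_tr)
    print(name, round(clf.score(X_te, y_te), 3))
```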
Glaucoma causes irreversible damage to the optic nerve and has the potential to cause permanent loss of vision. Glaucoma ranks as the second most prevalent cause of permanent blindness. Traditional glaucoma diagnosis requires a highly experienced specialist, costly equipment, and a lengthy wait time. For automatic glaucoma detection, state-of-the-art methods include segmentation-based methods that calculate the cup-to-disc ratio; other methods include multi-label segmentation networks and learning-based methods that rely on hand-crafted features. Localizing the optic disc (OD) is one of the key steps in retinal image analysis for detecting retinal diseases, especially glaucoma. The approach presented in this study is based on deep classifiers for OD segmentation and glaucoma detection. First, the optic disc detection process is based on object detection using a Mask Region-Based Convolutional Neural Network (Mask-RCNN). The OD detection task was validated using the Dice score, intersection over union, and accuracy metrics. The OD region is then fed into the second stage for glaucoma detection. Considering only the OD area for glaucoma detection reduces the number of classification artifacts by limiting the assessment to the optic disc area. For this task, VGG-16 (Visual Geometry Group), ResNet-18 (Residual Network), and Inception-v3 were pre-trained and fine-tuned, and a Support Vector Machine classifier was also used. The feature-based method uses region content features obtained by Histogram of Oriented Gradients (HOG) and Gabor filters. The final decision is based on weighted fusion. A comparison of the results obtained from all classification approaches is provided, with classification metrics including accuracy and the ROC curve compared for each method. The novelty of this research project is the integration of automatic OD detection and glaucoma diagnosis into a single global method. Moreover, the fusion-based decision system combines the glaucoma detection results obtained from several convolutional deep neural networks and the support vector machine classifier; together these classifiers produce robust classification results. The method was evaluated using well-known retinal image datasets available for research, including a combined dataset with retinal images with and without pathology. The performance of the models was tested on two public datasets and a combined dataset and was compared to similar research. The research findings show the potential of this methodology for the early detection of glaucoma, which would reduce diagnosis time and increase detection efficiency. The glaucoma assessment achieves about 98% classification accuracy, which is close to and even higher than that of state-of-the-art methods. The designed detection model may be used in telemedicine, healthcare, and computer-aided diagnosis systems.
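The final weighted-fusion step can be sketched as a weighted average of per-classifier glaucoma probabilities; the weights and probabilities below are illustrative, not the study's values.

```python
# Hedged sketch: weighted fusion of per-classifier glaucoma probabilities (toy numbers).
import numpy as np

def weighted_fusion(prob_by_model, weights):
    fused = np.average(np.asarray(prob_by_model, dtype=float), axis=0, weights=weights)
    return fused, (fused >= 0.5).astype(int)

# Probability of "glaucoma" for 4 test eyes from 4 classifiers.
probs = [[0.91, 0.12, 0.55, 0.80],   # VGG-16
         [0.88, 0.20, 0.49, 0.75],   # ResNet-18
         [0.93, 0.15, 0.62, 0.70],   # Inception-v3
         [0.85, 0.30, 0.40, 0.90]]   # SVM on HOG/Gabor features
fused, decision = weighted_fusion(probs, weights=[0.3, 0.25, 0.3, 0.15])
print(fused.round(2), decision)
```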
The accurate segmentation of retinal vessels is a challenging task due to the presence of various pathologies as well as the low contrast of thin vessels and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation at the cost of high computational complexity. To address the aforementioned challenges and to reduce the computational complexity, we propose a lightweight convolutional neural network (CNN)-based encoder-decoder deep learning model for accurate retinal vessel segmentation. The proposed deep learning model consists of an encoder-decoder architecture along with bottleneck layers that consist of depth-wise squeezing, followed by full convolution, and finally depth-wise stretching. The inspiration for the proposed model is taken from the recently developed Anam-Net model, which was tested on CT images for COVID-19 identification. For our lightweight model, we used a stack of two 3 × 3 convolution layers (without spatial pooling in between) instead of a single 3 × 3 convolution layer as proposed in Anam-Net to increase the receptive field and to reduce the trainable parameters. The proposed method includes fewer filters in all convolutional layers than the original Anam-Net and does not have an increasing number of filters for decreasing resolution. These modifications do not compromise segmentation accuracy, but they do make the architecture significantly lighter in terms of the number of trainable parameters and computation time. The proposed architecture has comparatively fewer parameters (1.01M) than Anam-Net (4.47M), U-Net (31.05M), SegNet (29.50M), and most other recent works. The proposed model does not require any problem-specific pre- or post-processing, nor does it rely on handcrafted features. In addition, being efficient in terms of segmentation accuracy as well as lightweight makes the proposed method a suitable candidate for use in screening platforms at the point of care. We evaluated our proposed model on the open-access datasets DRIVE, STARE, and CHASE_DB. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, the fully convolutional network (FCN), SegNet, CCNet, ResWNet, the residual connection-based encoder-decoder network (RCED-Net), and the scale-space approx. network (SSANet), in terms of {dice coefficient, sensitivity (SN), accuracy (ACC), and the area under the ROC curve (AUC)}, with scores of {0.8184, 0.8561, 0.9669, and 0.9868} on the DRIVE dataset, {0.8233, 0.8581, 0.9726, and 0.9901} on the STARE dataset, and {0.8138, 0.8604, 0.9752, and 0.9906} on the CHASE_DB dataset. Additionally, we performed cross-training experiments on the DRIVE and STARE datasets, the results of which indicate the generalization ability and robustness of the proposed model.
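The bottleneck modification described above, two stacked 3 × 3 convolutions (no pooling in between) inside a depth-wise squeeze/stretch block, is sketched below in PyTorch; the channel counts are illustrative and this is not the exact proposed architecture.

```python
# Hedged sketch of a squeeze -> two 3x3 convs -> stretch bottleneck block (illustrative).
import torch
import torch.nn as nn

class SqueezedDoubleConv(nn.Module):
    def __init__(self, channels=64, squeeze=16):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, squeeze, 1),            # depth-wise squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(squeeze, squeeze, 3, padding=1),  # stacked 3x3 convs give an
            nn.ReLU(inplace=True),
            nn.Conv2d(squeeze, squeeze, 3, padding=1),  # effective 5x5 receptive field
            nn.ReLU(inplace=True),
            nn.Conv2d(squeeze, channels, 1),            # depth-wise stretch
        )

    def forward(self, x):
        return self.block(x)

print(SqueezedDoubleConv()(torch.randn(1, 64, 48, 48)).shape)   # torch.Size([1, 64, 48, 48])
```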
In past decades, retinal diseases have become more common and now affect people of all age groups across the globe. Examining retinal eye disease calls for an artificial intelligence (AI)-based multi-label classification model for automated diagnosis. To analyze retinal disease, the system proposes a multi-class and multi-label classification method. Feature-based classification frameworks must be explicitly designed by ophthalmologists using domain knowledge, which tends to be time-consuming, to generalize poorly, and to be infeasible on massive datasets. Therefore, automated diagnosis of multiple retinal diseases becomes essential, and this can be achieved with deep learning (DL) models. With this motivation, this paper presents an intelligent deep learning-based multi-retinal disease diagnosis (IDL-MRDD) framework using fundus images. The proposed model aims to classify color fundus images into different classes, namely AMD, DR, Glaucoma, Hypertensive Retinopathy, Normal, Others, and Pathological Myopia. Besides, the artificial flora algorithm with Shannon's function (AFA-SF)-based multi-level thresholding technique is employed for image segmentation, so that the infected regions can be properly detected. In addition, a SqueezeNet-based feature extractor is employed to generate a collection of feature vectors. Finally, a stacked sparse autoencoder (SSAE) model is applied as a classifier to distinguish the input images into distinct retinal diseases. The efficacy of the IDL-MRDD technique is assessed on a benchmark multi-retinal disease dataset comprising data instances from different classes. The experimental values point out its superior outcome over existing techniques, with a maximum accuracy of 0.963.
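Two stages of such a pipeline, multi-level thresholding for segmentation and a SqueezeNet backbone for feature extraction, are sketched below. Multi-Otsu thresholding stands in for the AFA-SF optimiser, and the image/tensor inputs are random placeholders.

```python
# Sketch: multi-level thresholding (multi-Otsu stand-in) + SqueezeNet feature extraction.
import numpy as np
import torch
from skimage.filters import threshold_multiotsu
from torchvision import models

image = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(np.uint8)
thresholds = threshold_multiotsu(image, classes=3)        # two thresholds, three regions
regions = np.digitize(image, bins=thresholds)             # segmented label map

backbone = models.squeezenet1_1(weights=None).features    # SqueezeNet feature extractor
with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))
print(thresholds, feats.shape)                            # e.g. (1, 512, 13, 13)
```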
Diabetic retinopathy (DR) diagnosis through digital fundus images requires clinical experts to recognize the presence and importance of many intricate features. This task is very difficult for ophthalmologists and time-consuming. Therefore, many computer-aided diagnosis (CAD) systems have been developed to automate this DR screening process. In this paper, a CAD-DR system is proposed based on preprocessing and a pre-trained transfer learning-based convolutional neural network (PCNN) to recognize the five stages of DR from retinal fundus images. To develop this CAD-DR system, a preprocessing step is performed in a perceptual-oriented color space to enhance the DR-related lesions, and a standard pre-trained PCNN model is then improved to obtain high classification results. The architecture of the PCNN model is based on four main phases. Firstly, the training process of the proposed PCNN uses the expected gradient length (EGL) to decrease the image-labeling effort during the training of the CNN model. Secondly, the most informative patches and images are automatically selected using a few labeled training samples. Thirdly, the PCNN method generates useful masks for prognostication and identifies regions of interest. Fourthly, the DR-related lesions involved in the classification task, such as micro-aneurysms, hemorrhages, and exudates, are detected and then used for recognition of DR. The PCNN model is pre-trained using a high-end graphical processing unit (GPU) on the publicly available Kaggle benchmark. The obtained results demonstrate that the CAD-DR system outperforms other state-of-the-art methods in terms of sensitivity (SE), specificity (SP), and accuracy (ACC). On the test set of 30,000 images, the CAD-DR system achieved an average SE of 93.20%, SP of 96.10%, and ACC of 98%. These results indicate that the proposed CAD-DR system is appropriate for screening the severity level of DR.
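The sample-selection idea in the first two phases (query only the most informative images and patches for labeling) is sketched below with predictive entropy as a simple stand-in for the expected gradient length (EGL) criterion; the probabilities and labeling budget are placeholders.

```python
# Hedged sketch: uncertainty-based selection of images to label next (entropy stand-in for EGL).
import numpy as np

def predictive_entropy(probs):
    p = np.clip(probs, 1e-9, 1.0)
    return -(p * np.log(p)).sum(axis=1)

rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=np.ones(5), size=1000)            # softmax outputs over 5 DR stages
budget = 50
query_idx = np.argsort(-predictive_entropy(probs))[:budget]   # most uncertain images first
print(query_idx[:10])
```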
基金This research was funded by the National Natural Science Foundation of China(Nos.71762010,62262019,62162025,61966013,12162012)the Hainan Provincial Natural Science Foundation of China(Nos.823RC488,623RC481,620RC603,621QN241,620RC602,121RC536)+1 种基金the Haikou Science and Technology Plan Project of China(No.2022-016)the Project supported by the Education Department of Hainan Province,No.Hnky2021-23.
文摘Artificial Intelligence(AI)is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy(VTDR),which is a leading cause of visual impairment and blindness worldwide.However,previous automated VTDR detection methods have mainly relied on manual feature extraction and classification,leading to errors.This paper proposes a novel VTDR detection and classification model that combines different models through majority voting.Our proposed methodology involves preprocessing,data augmentation,feature extraction,and classification stages.We use a hybrid convolutional neural network-singular value decomposition(CNN-SVD)model for feature extraction and selection and an improved SVM-RBF with a Decision Tree(DT)and K-Nearest Neighbor(KNN)for classification.We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%,a sensitivity of 83.67%,and a specificity of 100%for DR detection and evaluation tests,respectively.Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
基金Supported by the National High Technology Research and Development Program of China(863 Program)(No.2006AA020804)Fundamental Research Funds for the Central Universities(No.NJ20120007)+2 种基金Jiangsu Province Science and Technology Support Plan(No.BE2010652)Program Sponsored for Scientific Innovation Research of College Graduate in Jangsu Province(No.CXLX11_0218)Shanghai University Scientific Selection and Cultivation for Outstanding Young Teachers in Special Fund(No.ZZGCD15081)
文摘Diabetic retinopathy(DR) is one of the most important causes of visual impairment. Automatic recognition of DR lesions, like hard exudates(EXs), in retinal images can contribute to the diagnosis and screening of the disease. To achieve this goal, an automatically detecting approach based on improved FCM(IFCM) as well as support vector machines(SVM) was established and studied. Firstly, color fundus images were segmented by IFCM, and candidate regions of EXs were obtained. Then, the SVM classifier is confirmed with the optimal subset of features and judgments of these candidate regions, as a result hard exudates are detected from fundus images. Our database was composed of 126 images with variable color, brightness, and quality. 70 of them were used to train the SVM and the remaining 56 to assess the performance of the method. Using a lesion based criterion, we achieved a mean sensitivity of 94.65% and a mean positive predictive value of 97.25%. With an image-based criterion, our approach reached a 100% mean sensitivity, 96.43% mean specificity and 98.21% mean accuracy. Furthermore, the average time cost in processing an image is 4.56 s. The results suggest that the proposed method can efficiently detect EXs from color fundus images and it could be a diagnostic aid for ophthalmologists in the screening for DR.
文摘In recent years,there has been a significant increase in the number of people suffering from eye illnesses,which should be treated as soon as possible in order to avoid blindness.Retinal Fundus images are employed for this purpose,as well as for analysing eye abnormalities and diagnosing eye illnesses.Exudates can be recognised as bright lesions in fundus pictures,which can be thefirst indicator of diabetic retinopathy.With that in mind,the purpose of this work is to create an Integrated Model for Exudate and Diabetic Retinopathy Diagnosis(IM-EDRD)with multi-level classifications.The model uses Support Vector Machine(SVM)-based classification to separate normal and abnormal fundus images at thefirst level.The input pictures for SVM are pre-processed with Green Channel Extraction and the retrieved features are based on Gray Level Co-occurrence Matrix(GLCM).Furthermore,the presence of Exudate and Diabetic Retinopathy(DR)in fundus images is detected using the Adaptive Neuro Fuzzy Inference System(ANFIS)classifier at the second level of classification.Exudate detection,blood vessel extraction,and Optic Disc(OD)detection are all processed to achieve suitable results.Furthermore,the second level processing comprises Morphological Component Analysis(MCA)based image enhancement and object segmentation processes,as well as feature extraction for training the ANFIS classifier,to reliably diagnose DR.Furthermore,thefindings reveal that the proposed model surpasses existing models in terms of accuracy,time efficiency,and precision rate with the lowest possible error rate.
文摘Use of deep learning algorithms for the investigation and analysis of medical images has emerged as a powerful technique.The increase in retinal dis-eases is alarming as it may lead to permanent blindness if left untreated.Automa-tion of the diagnosis process of retinal diseases not only assists ophthalmologists in correct decision-making but saves time also.Several researchers have worked on automated retinal disease classification but restricted either to hand-crafted fea-ture selection or binary classification.This paper presents a deep learning-based approach for the automated classification of multiple retinal diseases using fundus images.For this research,the data has been collected and combined from three distinct sources.The images are preprocessed for enhancing the details.Six layers of the convolutional neural network(CNN)are used for the automated feature extraction and classification of 20 retinal diseases.It is observed that the results are reliant on the number of classes.For binary classification(healthy vs.unhealthy),up to 100%accuracy has been achieved.When 16 classes are used(treating stages of a disease as a single class),93.3%accuracy,92%sensitivity and 93%specificity have been obtained respectively.For 20 classes(treating stages of the disease as separate classes),the accuracy,sensitivity and specificity have dropped to 92.4%,92%and 92%respectively.
基金supported by the National Natural Science Foundation of China under Grant No.61772118.
文摘The objective of the paper is to provide a general view for automatic cup to disc ratio(CDR)assessment in fundus images.As for the cause of blindness,glaucoma ranks as the second in ocular diseases.Vision loss caused by glaucoma cannot be reversed,but the loss may be avoided if screened in the early stage of glaucoma.Thus,early screening of glaucoma is very requisite to preserve vision and maintain quality of life.Optic nerve head(ONH)assessment is a useful and practical technique among current glaucoma screening methods.Vertical CDR as one of the clinical indicators for ONH assessment,has been well-used by clinicians and professionals for the analysis and diagnosis of glaucoma.The key for automatic calculation of vertical CDR in fundus images is the segmentation of optic cup(OC)and optic disc(OD).We take a brief description of methodologies about the OC and disc optic segmentation and comprehensively presented these methods as two aspects:hand-craft feature and deep learning feature.Sliding window regression,super-pixel level,image reconstruction,super-pixel level low-rank representation(LRR),deep learning methodologies for segmentation of OD and OC have been shown.It is hoped that this paper can provide guidance and bring inspiration to other researchers.Every mentioned method has its advantages and limitations.Appropriate method should be selected or explored according to the actual situation.For automatic glaucoma screening,CDR is just the reflection for a small part of the disc,while utilizing comprehensive factors or multimodal images is the promising future direction to furthermore enhance the performance.
基金the National Natural Science Foundation of China(No.62276210,82201148,61775180)the Natural Science Basic Research Program of Shaanxi Province(No.2022JM-380)+3 种基金the Shaanxi Province College Students'Innovation and Entrepreneurship Training Program(No.S202311664128X)the Natural Science Foundation of Zhejiang Province(No.LQ22H120002)the Medical Health Science and Technology Project of Zhejiang Province(No.2022RC069,2023KY1140)the Natural Science Foundation of Ningbo(No.2023J390)。
文摘Cataract is the leading cause of visual impairment globally.The scarcity and uneven distribution of ophthalmologists seriously hinder early visual impairment grading for cataract patients in the clin-ic.In this study,a deep learning-based automated grading system of visual impairment in cataract patients is proposed using a multi-scale efficient channel attention convolutional neural network(MECA_CNN).First,the efficient channel attention mechanism is applied in the MECA_CNN to extract multi-scale features of fundus images,which can effectively focus on lesion-related regions.Then,the asymmetric convolutional modules are embedded in the residual unit to reduce the infor-mation loss of fine-grained features in fundus images.In addition,the asymmetric loss function is applied to address the problem of a higher false-negative rate and weak generalization ability caused by the imbalanced dataset.A total of 7299 fundus images derived from two clinical centers are em-ployed to develop and evaluate the MECA_CNN for identifying mild visual impairment caused by cataract(MVICC),moderate to severe visual impairment caused by cataract(MSVICC),and nor-mal sample.The experimental results demonstrate that the MECA_CNN provides clinically meaning-ful performance for visual impairment grading in the internal test dataset:MVICC(accuracy,sensi-tivity,and specificity;91.3%,89.9%,and 92%),MSVICC(93.2%,78.5%,and 96.7%),and normal sample(98.1%,98.0%,and 98.1%).The comparable performance in the external test dataset is achieved,further verifying the effectiveness and generalizability of the MECA_CNN model.This study provides a deep learning-based practical system for the automated grading of visu-al impairment in cataract patients,facilitating the formulation of treatment strategies in a timely man-ner and improving patients’vision prognosis.
文摘This research focuses on the automatic detection and grading of microaneurysms in fundus images of diabetic retinopathy using artificial intelligence deep learning algorithms.By integrating multi-source fundus image data and undergoing a rigorous preprocessing workflow,a hybrid deep learning model architecture combining a modified U-Net and a residual neural network was adopted for the study.The experimental results show that the model achieved an accuracy of[X]%in microaneurysm detection,with a recall rate of[Y]%and a precision rate of[Z]%.In terms of grading diabetic retinopathy,the Cohen’s kappa coefficient for agreement with clinical grading was[K],and there were specific sensitivities and specificities for each grade.Compared with traditional methods,this model has significant advantages in processing speed and result consistency.However,it also has limitations such as insufficient data diversity,difficulties for the algorithm in detecting microaneurysms in severely hemorrhagic images,and high computational costs.The results of this research are of great significance for the early screening and clinical diagnosis decision support of diabetic retinopathy.In the future,it is necessary to further optimize the data and algorithms and promote clinical integration and telemedicine applications.
文摘Glaucoma is a prevalent cause of blindness worldwide.If not treated promptly,it can cause vision and quality of life to deteriorate.According to statistics,glaucoma affects approximately 65 million individuals globally.Fundus image segmentation depends on the optic disc(OD)and optic cup(OC).This paper proposes a computational model to segment and classify retinal fundus images for glaucoma detection.Different data augmentation techniques were applied to prevent overfitting while employing several data pre-processing approaches to improve the image quality and achieve high accuracy.The segmentation models are based on an attention U-Net with three separate convolutional neural networks(CNNs)backbones:Inception-v3,visual geometry group 19(VGG19),and residual neural network 50(ResNet50).The classification models also employ a modified version of the above three CNN architectures.Using the RIM-ONE dataset,the attention U-Net with the ResNet50 model as the encoder backbone,achieved the best accuracy of 99.58%in segmenting OD.The Inception-v3 model had the highest accuracy of 98.79%for glaucoma classification among the evaluated segmentation,followed by the modified classification architectures.
基金Supported by 1.3.5 Project for Disciplines of Excellence,West China Hospital,Sichuan University(No.ZYJC21025).
文摘AIM:To summarize the application of deep learning in detecting ophthalmic disease with ultrawide-field fundus images and analyze the advantages,limitations,and possible solutions common to all tasks.METHODS:We searched three academic databases,including PubMed,Web of Science,and Ovid,with the date of August 2022.We matched and screened according to the target keywords and publication year and retrieved a total of 4358 research papers according to the keywords,of which 23 studies were retrieved on applying deep learning in diagnosing ophthalmic disease with ultrawide-field images.RESULTS:Deep learning in ultrawide-field images can detect various ophthalmic diseases and achieve great performance,including diabetic retinopathy,glaucoma,age-related macular degeneration,retinal vein occlusions,retinal detachment,and other peripheral retinal diseases.Compared to fundus images,the ultrawide-field fundus scanning laser ophthalmoscopy enables the capture of the ocular fundus up to 200°in a single exposure,which can observe more areas of the retina.CONCLUSION:The combination of ultrawide-field fundus images and artificial intelligence will achieve great performance in diagnosing multiple ophthalmic diseases in the future.
基金supported in part by the Gusu Innovation and Entrepreneurship Leading Talents in Suzhou City,grant numbers ZXL2021425 and ZXL2022476Doctor of Innovation and Entrepreneurship Program in Jiangsu Province,grant number JSSCBS20211440+6 种基金Jiangsu Province Key R&D Program,grant number BE2019682Natural Science Foundation of Jiangsu Province,grant number BK20200214National Key R&D Program of China,grant number 2017YFB0403701National Natural Science Foundation of China,grant numbers 61605210,61675226,and 62075235Youth Innovation Promotion Association of Chinese Academy of Sciences,grant number 2019320Frontier Science Research Project of the Chinese Academy of Sciences,grant number QYZDB-SSW-JSC03Strategic Priority Research Program of the Chinese Academy of Sciences,grant number XDB02060000.
文摘The prediction of fundus fluorescein angiography(FFA)images from fundus structural images is a cutting-edge research topic in ophthalmological image processing.Prediction comprises estimating FFA from fundus camera imaging,single-phase FFA from scanning laser ophthalmoscopy(SLO),and three-phase FFA also from SLO.Although many deep learning models are available,a single model can only perform one or two of these prediction tasks.To accomplish three prediction tasks using a unified method,we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network.The three prediction tasks are processed as follows:data preparation,network training under FFA supervision,and FFA image prediction from fundus structure images on a test set.By comparing the FFA images predicted by our model,pix2pix,and CycleGAN,we demonstrate the remarkable progress achieved by our proposal.The high performance of our model is validated in terms of the peak signal-to-noise ratio,structural similarity index,and mean squared error.
基金Deputyship for Research&Innovation,Ministry of Education in Saudi Arabia,for funding this research work through Project Number 959.
文摘A cataract is one of the most significant eye problems worldwide that does not immediately impair vision and progressively worsens over time.Automatic cataract prediction based on various imaging technologies has been addressed recently,such as smartphone apps used for remote health monitoring and eye treatment.In recent years,advances in diagnosis,prediction,and clinical decision support using Artificial Intelligence(AI)in medicine and ophthalmology have been exponential.Due to privacy concerns,a lack of data makes applying artificial intelligence models in the medical field challenging.To address this issue,a federated learning framework named CDFL based on a VGG16 deep neural network model is proposed in this research.The study collects data from the Ocular Disease Intelligent Recognition(ODIR)database containing 5,000 patient records.The significant features are extracted and normalized using the min-max normalization technique.In the federated learning-based technique,the VGG16 model is trained on the dataset individually after receiving model updates from two clients.Before transferring the attributes to the global model,the suggested method trains the local model.The global model subsequently improves the technique after integrating the new parameters.Every client analyses the results in three rounds to decrease the over-fitting problem.The experimental result shows the effectiveness of the federated learning-based technique on a Deep Neural Network(DNN),reaching a 95.28%accuracy while also providing privacy to the patient’s data.The experiment demonstrated that the suggested federated learning model outperforms other traditional methods,achieving client 1 accuracy of 95.0%and client 2 accuracy of 96.0%.
文摘AIM: To report the surgical result of pars plana vitrectomy(PPV) with air tamponade for rhegmatogenous retinal detachment(RRD) by ultra-widefield fundus imaging system. METHODS: Of 25 consecutive patients(25 eyes) with fresh primary RRD and causative retinal break and vitreous traction were presented. All the patients underwent PPV with air tamponade. Visual acuity(VA) was examined postoperatively and images were captured by ultrawidefield scanning laser ophthalmoscope system(Optos). RESULTS: Initial reattachment was achieved in 25 cases(100%). The air volume was 〉60% on the postoperative day(POD) 1. The ultra-widefield images showed that the retina was reattached in all air-filled eyes postoperatively. The retinal break and laser burns in the superior were detected in 22 of 25 eyes(88%). A missed retinal hole was found under intravitreal air bubble in 1 case(4%). The air volume was range from 40% to 60% on POD 3. A doublelayered image was seen in 25 of 25 eyes with intravitreal gas. Retinal breaks and laser burns around were seen in the intravitreal air. On POD 7, small bubble without effect was seen in 6 cases(24%) and bubble was completely disappeared in 4 cases(16%). Small oval bubble in the superior area was observed in 15 cases(60%). There were no missed and new retinal breaks and no retinal detachment in all cases on the POD 14 and 1 mo and last follow-up. Air disappeared completely on a mean of 9.84 d postoperatively. The mean final postoperative bestcorrected visual acuity(BCVA) was 0.35 log MAR. Mean final postoperative BCVA improved significantly relative to mean preoperative(P〈0.05). Final VA of 0.3 log MAR or better was seen in 13 eyes. CONCLUSION: PPV with air tamponade is an effective management for fresh RRD with superior retinal breaks. The ultra-widefield fundus imaging can detect postoperative retinal breaks in air-filled eyes. It would be a useful facility for follow-up after PPV with air tamponade. Facedown position and acquired visual rehabilitation may be shorten.
基金Supported by National Key Scientific Instrument and Equipment Development Project of China (No.2012YQ12008005)
文摘Adaptive optics scanning laser ophthalmoscopy(AOSLO) has been a promising technique in funds imaging with growing popularity. This review firstly gives a brief history of adaptive optics(AO) and AO-SLO. Then it compares AO-SLO with conventional imaging methods(fundus fluorescein angiography, fundus autofluorescence, indocyanine green angiography and optical coherence tomography) and other AO techniques(adaptive optics flood-illumination ophthalmoscopy and adaptive optics optical coherence tomography). Furthermore, an update of current research situation in AO-SLO is made based on different fundus structures as photoreceptors(cones and rods), fundus vessels, retinal pigment epithelium layer, retinal nerve fiber layer, ganglion cell layer and lamina cribrosa. Finally, this review indicates possible research directions of AO-SLO in future.
文摘Today, many eye diseases jeopardize our everyday lives, such as Diabetic Retinopathy (DR), Age-related Macular Degeneration (AMD), and Glaucoma.Glaucoma is an incurable and unavoidable eye disease that damages the vision ofoptic nerves and quality of life. Classification of Glaucoma has been an active fieldof research for the past ten years. Several approaches for Glaucoma classification areestablished, beginning with conventional segmentation methods and feature-extraction to deep-learning techniques such as Convolution Neural Networks (CNN). Incontrast, CNN classifies the input images directly using tuned parameters of convolution and pooling layers by extracting features. But, the volume of training datasetsdetermines the performance of the CNN;the model trained with small datasets,overfit issues arise. CNN has therefore developed with transfer learning. The primary aim of this study is to explore the potential of EfficientNet with transfer learning for the classification of Glaucoma. The performance of the current workcompares with other models, namely VGG16, InceptionV3, and Xception usingpublic datasets such as RIM-ONEV2 & V3, ORIGA, DRISHTI-GS1, HRF, andACRIMA. The dataset has split into training, validation, and testing with the ratioof 70:15:15. The assessment of the test dataset shows that the pre-trained EfficientNetB4 has achieved the highest performance value compared to other models listedabove. The proposed method achieved 99.38% accuracy and also better results forother metrics, such as sensitivity, specificity, precision, F1_score, Kappa score, andArea Under Curve (AUC) compared to other models.
Abstract: Diabetes is a serious health condition that can damage human organs such as the heart and kidneys and can also cause a serious eye disease called diabetic retinopathy(DR). Early detection and treatment are crucial to prevent complete blindness or partial vision loss. Traditional detection methods, which involve ophthalmologists examining retinal fundus images, are subjective, expensive, and time-consuming. Therefore, this study employs artificial intelligence(AI) technology to perform faster and more accurate binary classification and determine the presence of DR. In this regard, we employed three promising machine learning models, namely support vector machine(SVM), k-nearest neighbors(KNN), and Histogram Gradient Boosting(HGB), after carefully selecting features using transfer learning on the fundus images of the Asia Pacific Tele-Ophthalmology Society(APTOS) standard dataset, which includes 3662 images and originally categorizes DR into five levels, here simplified to a binary format: No DR and DR(Classes 1-4). The results demonstrate that the SVM model outperformed the other approaches in the literature on the same dataset, achieving an excellent accuracy of 96.9%, compared with 95.6% for both the KNN and HGB models. The approach was evaluated by medical health professionals, offers a valuable pathway for the early detection of DR, and can be successfully employed as a clinical decision support system.
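A minimal sketch of the classification stage described above, assuming the deep features have already been extracted via transfer learning; the feature matrix, labels, and feature dimensionality below are placeholders, not the paper's data.

# Sketch of binary DR classification with SVM, KNN, and Histogram Gradient
# Boosting on pre-extracted deep features (placeholder data for illustration).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(3662, 512))     # placeholder deep-feature matrix (512-D assumed)
y = rng.integers(0, 2, size=3662)    # placeholder binary labels: 0 = No DR, 1 = DR

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "HGB": HistGradientBoostingClassifier(),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))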
Funding: Deanship of Scientific Research, Princess Nourah bint Abdulrahman University, through the Program of Research Project Funding after Publication, Grant No. (43-PRFA-P-31).
Abstract: Glaucoma causes irreversible damage to the optic nerve and can lead to permanent loss of vision; it ranks as the second most prevalent cause of permanent blindness. Traditional glaucoma diagnosis requires a highly experienced specialist, costly equipment, and a lengthy wait time. State-of-the-art automatic glaucoma detection methods include segmentation-based methods that calculate the cup-to-disc ratio, multi-label segmentation networks, and learning-based methods that rely on hand-crafted features. Localizing the optic disc(OD) is one of the key steps in analyzing retinal images for retinal diseases, especially for glaucoma detection. The approach presented in this study is based on deep classifiers for OD segmentation and glaucoma detection. First, the optic disc is detected by object detection with a Mask Region-Based Convolutional Neural Network(Mask-RCNN); the OD detection task was validated using the Dice score, intersection over union, and accuracy metrics. The OD region is then fed into the second stage for glaucoma detection, so that considering only the OD area reduces classification artifacts by limiting the assessment to the optic disc. For this task, VGG-16(Visual Geometry Group), ResNet-18(Residual Network), and Inception-v3 were pre-trained and fine-tuned. We also used a Support Vector Machine classifier; this feature-based method uses region content features obtained with the Histogram of Oriented Gradients(HOG) and Gabor filters. The final decision is based on weighted fusion. A comparison of the results from all classification approaches is provided, and classification metrics including accuracy and the ROC curve are compared for each method. The novelty of this research is the integration of automatic OD detection and glaucoma diagnosis into a single unified method; moreover, the fusion-based decision system combines the glaucoma detection results of several deep convolutional neural networks and the SVM classifier, which contributes to robust classification results. The method was evaluated on well-known publicly available retinal image datasets and on a combined dataset including retinal images with and without pathology, and was compared with similar research. The findings show the potential of this methodology for the early detection of glaucoma, which will reduce diagnosis time and increase detection efficiency. The glaucoma assessment achieves about 98% classification accuracy, which is comparable to, and in some cases higher than, state-of-the-art methods. The designed detection model may be used in telemedicine, healthcare, and computer-aided diagnosis systems.
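The following is a minimal sketch of the weighted decision fusion idea mentioned above: several per-model glaucoma probabilities are combined into one binary decision. The model names, weights, and threshold are hypothetical illustrations, not values from the paper.

# Weighted decision fusion over per-model glaucoma probabilities (illustrative).
import numpy as np

def weighted_fusion(probs: dict, weights: dict, threshold: float = 0.5) -> int:
    """Fuse per-model probabilities of 'glaucoma' into one binary decision."""
    total = sum(weights.values())
    score = sum(weights[name] * probs[name] for name in probs) / total
    return int(score >= threshold)  # 1 = glaucoma, 0 = normal

# Example: hypothetical probabilities produced by the classifiers for one OD crop.
probs = {"vgg16": 0.91, "resnet18": 0.87, "inception_v3": 0.78, "hog_gabor_svm": 0.64}
weights = {"vgg16": 0.3, "resnet18": 0.3, "inception_v3": 0.25, "hog_gabor_svm": 0.15}
print(weighted_fusion(probs, weights))  # fused score 0.825 -> prints 1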
Funding: The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia, for funding this research work through project number (DRI-KSU-415).
Abstract: The accurate segmentation of retinal vessels is a challenging task due to the presence of various pathologies as well as the low contrast of thin vessels and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation at the cost of high computational complexity. To address these challenges and to reduce the computational complexity, we propose a lightweight convolutional neural network (CNN)-based encoder-decoder deep learning model for accurate retinal vessel segmentation. The proposed model consists of an encoder-decoder architecture with bottleneck layers that perform depth-wise squeezing, followed by full convolution, and finally depth-wise stretching. The inspiration for the proposed model is taken from the recently developed Anam-Net model, which was tested on CT images for COVID-19 identification. For our lightweight model, we used a stack of two 3 × 3 convolution layers (without spatial pooling in between) instead of the single 3 × 3 convolution layer proposed in Anam-Net, to increase the receptive field and to reduce the trainable parameters. The proposed method also includes fewer filters in all convolutional layers than the original Anam-Net and does not increase the number of filters as the resolution decreases. These modifications do not compromise segmentation accuracy, but they make the architecture significantly lighter in terms of the number of trainable parameters and computation time. The proposed architecture has comparatively fewer parameters (1.01M) than Anam-Net (4.47M), U-Net (31.05M), SegNet (29.50M), and most other recent works. The proposed model does not require any problem-specific pre- or post-processing, nor does it rely on handcrafted features. In addition, being efficient in terms of segmentation accuracy as well as lightweight makes the proposed method a suitable candidate for screening platforms at the point of care. We evaluated our proposed model on the open-access DRIVE, STARE, and CHASE_DB datasets. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, the fully convolutional network (FCN), SegNet, CCNet, ResWNet, the residual connection-based encoder-decoder network (RCED-Net), and the scale-space approximation network (SSANet), in terms of {dice coefficient, sensitivity (SN), accuracy (ACC), area under the ROC curve (AUC)}, with scores of {0.8184, 0.8561, 0.9669, 0.9868} on the DRIVE dataset, {0.8233, 0.8581, 0.9726, 0.9901} on the STARE dataset, and {0.8138, 0.8604, 0.9752, 0.9906} on the CHASE_DB dataset. Additionally, we performed cross-training experiments on the DRIVE and STARE datasets; the results indicate the generalization ability and robustness of the proposed model.
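A minimal PyTorch sketch of the kind of bottleneck block described above: a 1 × 1 convolution that squeezes the channel depth, a full 3 × 3 convolution, and a 1 × 1 convolution that stretches the depth back. The squeeze ratio and normalization/activation choices are assumptions for illustration, not the paper's exact design.

# Illustrative depth-wise squeeze -> full convolution -> depth-wise stretch block.
import torch
import torch.nn as nn

class SqueezeStretchBottleneck(nn.Module):
    def __init__(self, channels: int, squeeze_ratio: int = 4):
        super().__init__()
        squeezed = max(channels // squeeze_ratio, 1)
        self.block = nn.Sequential(
            nn.Conv2d(channels, squeezed, kernel_size=1),             # depth-wise squeeze
            nn.BatchNorm2d(squeezed), nn.ReLU(inplace=True),
            nn.Conv2d(squeezed, squeezed, kernel_size=3, padding=1),  # full convolution
            nn.BatchNorm2d(squeezed), nn.ReLU(inplace=True),
            nn.Conv2d(squeezed, channels, kernel_size=1),             # depth-wise stretch
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

# Quick shape check on a dummy feature map (batch, channels, height, width).
x = torch.randn(1, 32, 64, 64)
print(SqueezeStretchBottleneck(32)(x).shape)  # torch.Size([1, 32, 64, 64])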
Funding: This work was supported by a National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No.NRF-2021R1A2C1010362) and by the Soonchunhyang University Research Fund.
Abstract: In past decades, retinal diseases have become more common and affect people of all age groups across the globe. For examining retinal disease, an artificial intelligence(AI)-based multi-label classification model is needed for automated diagnosis, and to analyze retinal maladies the system proposes a multi-class, multi-label classification method. Classification frameworks based on features explicitly described by ophthalmologists using domain knowledge tend to be time-consuming, generalize poorly, and are unfeasible for massive datasets. The automated diagnosis of multiple retinal diseases therefore becomes essential, and it can be addressed with deep learning(DL) models. With this motivation, this paper presents an intelligent deep learning-based multi-retinal disease diagnosis(IDL-MRDD) framework using fundus images. The proposed model aims to classify color fundus images into different classes, namely AMD, DR, Glaucoma, Hypertensive Retinopathy, Normal, Others, and Pathological Myopia. Besides, an artificial flora algorithm with Shannon's function(AFA-SF)-based multi-level thresholding technique is employed for image segmentation, so that infected regions can be properly detected. In addition, a SqueezeNet-based feature extractor is employed to generate a collection of feature vectors. Finally, a stacked sparse autoencoder(SSAE) model is applied as a classifier to distinguish the input images into distinct retinal diseases. The efficacy of the IDL-MRDD technique is assessed on a benchmark multi-retinal disease dataset comprising data instances from the different classes. The experimental values point to a superior outcome over existing techniques, with a maximum accuracy of 0.963.
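As a rough illustration of the SqueezeNet-based feature extraction step, the sketch below uses an ImageNet-pretrained SqueezeNet from torchvision as a frozen feature extractor; the preprocessing, pooling, and feature dimensionality are assumptions, not the paper's exact pipeline, and the resulting vectors would then feed a downstream classifier such as a stacked sparse autoencoder.

# SqueezeNet-based feature extraction for fundus images (illustrative sketch).
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

squeezenet = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
squeezenet.eval()

def extract_features(image_path: str) -> torch.Tensor:
    """Return a 512-D vector by global-average-pooling SqueezeNet's feature maps."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)               # (1, 3, 224, 224)
    with torch.no_grad():
        fmap = squeezenet.features(x)              # (1, 512, 13, 13)
        return fmap.mean(dim=(2, 3)).squeeze(0)    # (512,)

# Example (placeholder file name): vec = extract_features("fundus_001.png")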
Funding: Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University for funding this work through Research Group No. RG-21-07-01.
Abstract: Diabetic retinopathy(DR) diagnosis from digital fundus images requires clinical experts to recognize the presence and importance of many intricate features, a task that is difficult and time-consuming for ophthalmologists. Therefore, many computer-aided diagnosis(CAD) systems have been developed to automate this DR screening process. In this paper, a CAD-DR system is proposed based on preprocessing and a pre-trained transfer learning-based convolutional neural network(PCNN) to recognize the five stages of DR from retinal fundus images. To develop this CAD-DR system, a preprocessing step is performed in a perceptually oriented color space to enhance the DR-related lesions, and a standard pre-trained PCNN model is then improved to obtain high classification results. The architecture of the PCNN model is based on four main phases. Firstly, the training process of the proposed PCNN uses the expected gradient length(EGL) to decrease the image-labeling effort during CNN training. Secondly, the most informative patches and images are automatically selected using a few labeled training samples. Thirdly, the PCNN method generates useful masks for prognostication and identifies regions of interest. Fourthly, the DR-related lesions involved in the classification task, such as microaneurysms, hemorrhages, and exudates, are detected and then used for recognition of DR. The PCNN model is pre-trained using a high-end graphics processing unit(GPU) on the publicly available Kaggle benchmark. The obtained results demonstrate that the CAD-DR system outperforms other state-of-the-art approaches in terms of sensitivity(SE), specificity(SP), and accuracy(ACC). On the test set of 30,000 images, the CAD-DR system achieved an average SE of 93.20%, SP of 96.10%, and ACC of 98%. These results indicate that the proposed CAD-DR system is appropriate for screening the severity level of DR.
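As a loose illustration of the kind of lesion-enhancing preprocessing described above, the sketch below converts a fundus image to a perceptual color space and boosts local contrast on the lightness channel. The choice of CIELAB and CLAHE, and the file names, are assumptions; the abstract does not specify the exact color space or enhancement used by the authors.

# Lesion-enhancing preprocessing sketch: CLAHE on the L channel in CIELAB space.
# Requires OpenCV (cv2).
import cv2

def enhance_fundus(path_in: str, path_out: str) -> None:
    bgr = cv2.imread(path_in)                          # OpenCV loads images as BGR
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)                              # boost local contrast of lesions
    enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
    cv2.imwrite(path_out, enhanced)

# Example (placeholder file names):
# enhance_fundus("fundus_raw.png", "fundus_enhanced.png")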