Artificial Intelligence (AI) is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy (VTDR), a leading cause of visual impairment and blindness worldwide. However, previous automated VTDR detection methods have relied mainly on manual feature extraction and classification, leading to errors. This paper proposes a novel VTDR detection and classification model that combines different models through majority voting. Our methodology involves preprocessing, data augmentation, feature extraction, and classification stages. We use a hybrid convolutional neural network-singular value decomposition (CNN-SVD) model for feature extraction and selection, and an improved SVM-RBF combined with a Decision Tree (DT) and K-Nearest Neighbor (KNN) for classification. We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%, a sensitivity of 83.67%, and a specificity of 100% on the DR detection and evaluation tests. Our approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection. Funding: the National Natural Science Foundation of China (Nos. 71762010, 62262019, 62162025, 61966013, 12162012); the Hainan Provincial Natural Science Foundation of China (Nos. 823RC488, 623RC481, 620RC603, 621QN241, 620RC602, 121RC536); the Haikou Science and Technology Plan Project of China (No. 2022-016); and a project supported by the Education Department of Hainan Province (No. Hnky2021-23).
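The majority-voting stage described in this abstract can be illustrated with a short, hedged sketch. The synthetic feature matrix below stands in for the CNN-SVD features, and the classifier hyperparameters are illustrative rather than the authors' settings.

```python
# A minimal sketch of hard majority voting over SVM-RBF, Decision Tree, and
# KNN, assuming CNN-SVD features have already been extracted (synthetic data
# is used as a placeholder here).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder for CNN-SVD feature vectors and VTDR labels.
X, y = make_classification(n_samples=500, n_features=64, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm_rbf", SVC(kernel="rbf", gamma="scale", C=1.0)),
        ("dt", DecisionTreeClassifier(max_depth=8, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="hard",  # each model casts one vote; the majority label wins
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```

Soft voting (averaging class probabilities) is an alternative aggregation rule; the abstract only specifies majority voting, so hard voting is shown.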
AIM: To summarize the application of deep learning in detecting ophthalmic diseases with ultra-widefield fundus images and to analyze the advantages, limitations, and possible solutions common to all tasks. METHODS: We searched three academic databases, PubMed, Web of Science, and Ovid, with a search date of August 2022. We matched and screened records according to the target keywords and publication year, retrieving a total of 4358 research papers, of which 23 studies applied deep learning to diagnosing ophthalmic disease with ultra-widefield images. RESULTS: Deep learning on ultra-widefield images can detect various ophthalmic diseases with high performance, including diabetic retinopathy, glaucoma, age-related macular degeneration, retinal vein occlusions, retinal detachment, and other peripheral retinal diseases. Compared with conventional fundus photography, ultra-widefield scanning laser ophthalmoscopy captures up to 200° of the ocular fundus in a single exposure, allowing more of the retina to be observed. CONCLUSION: The combination of ultra-widefield fundus images and artificial intelligence is expected to achieve strong performance in diagnosing multiple ophthalmic diseases in the future. Funding: the 1.3.5 Project for Disciplines of Excellence, West China Hospital, Sichuan University (No. ZYJC21025).
The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish the three prediction tasks with a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our proposal. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error. Funding: the Gusu Innovation and Entrepreneurship Leading Talents program of Suzhou City (Nos. ZXL2021425, ZXL2022476); the Jiangsu Province Doctor of Innovation and Entrepreneurship Program (No. JSSCBS20211440); the Jiangsu Province Key R&D Program (No. BE2019682); the Natural Science Foundation of Jiangsu Province (No. BK20200214); the National Key R&D Program of China (No. 2017YFB0403701); the National Natural Science Foundation of China (Nos. 61605210, 61675226, 62075235); the Youth Innovation Promotion Association of the Chinese Academy of Sciences (No. 2019320); the Frontier Science Research Project of the Chinese Academy of Sciences (No. QYZDB-SSW-JSC03); and the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDB02060000).
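The three validation metrics named at the end of this abstract (PSNR, SSIM, and MSE) can be computed with scikit-image. The sketch below uses random grayscale arrays as placeholders for a predicted and a ground-truth FFA image.

```python
# A small sketch of the validation metrics (PSNR, SSIM, MSE) computed with
# scikit-image on a pair of grayscale images standing in for a predicted and
# a ground-truth FFA frame.
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

rng = np.random.default_rng(0)
true_ffa = rng.random((256, 256))                       # ground-truth FFA (placeholder)
pred_ffa = np.clip(true_ffa + 0.05 * rng.standard_normal((256, 256)), 0, 1)

print("MSE :", mean_squared_error(true_ffa, pred_ffa))
print("PSNR:", peak_signal_noise_ratio(true_ffa, pred_ffa, data_range=1.0))
print("SSIM:", structural_similarity(true_ffa, pred_ffa, data_range=1.0))
```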
Use of deep learning algorithms for the investigation and analysis of medical images has emerged as a powerful technique. The increase in retinal diseases is alarming, as they may lead to permanent blindness if left untreated. Automation of the diagnostic process for retinal diseases not only assists ophthalmologists in correct decision-making but also saves time. Several researchers have worked on automated retinal disease classification, but their work is restricted either to hand-crafted feature selection or to binary classification. This paper presents a deep learning-based approach for the automated classification of multiple retinal diseases using fundus images. For this research, the data were collected and combined from three distinct sources. The images are preprocessed to enhance details, and a six-layer convolutional neural network (CNN) is used for automated feature extraction and classification of 20 retinal diseases. The results depend on the number of classes. For binary classification (healthy vs. unhealthy), up to 100% accuracy has been achieved. With 16 classes (treating the stages of a disease as a single class), 93.3% accuracy, 92% sensitivity, and 93% specificity were obtained. With 20 classes (treating the stages of a disease as separate classes), accuracy, sensitivity, and specificity dropped to 92.4%, 92%, and 92%, respectively.
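As a rough illustration of the kind of six-convolutional-layer network the abstract describes, the hedged Keras sketch below builds a 20-class classifier. The input resolution, filter counts, and pooling arrangement are assumptions; the paper's exact architecture is not given here.

```python
# A hedged Keras sketch of a small CNN with six convolutional layers for
# 20-class fundus classification; layer widths and input size are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 20
model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(256, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```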
In recent years, there has been a significant increase in the number of people suffering from eye illnesses, which should be treated as soon as possible in order to avoid blindness. Retinal fundus images are employed for this purpose, as well as for analysing eye abnormalities and diagnosing eye illnesses. Exudates can be recognised as bright lesions in fundus pictures and can be the first indicator of diabetic retinopathy. With that in mind, the purpose of this work is to create an Integrated Model for Exudate and Diabetic Retinopathy Diagnosis (IM-EDRD) with multi-level classification. The model uses Support Vector Machine (SVM)-based classification to separate normal and abnormal fundus images at the first level. The input pictures for the SVM are pre-processed with green channel extraction, and the retrieved features are based on the Gray Level Co-occurrence Matrix (GLCM). The presence of exudates and diabetic retinopathy (DR) in fundus images is then detected using an Adaptive Neuro-Fuzzy Inference System (ANFIS) classifier at the second level of classification. Exudate detection, blood vessel extraction, and optic disc (OD) detection are all processed to achieve suitable results. The second-level processing also comprises Morphological Component Analysis (MCA)-based image enhancement and object segmentation, as well as feature extraction for training the ANFIS classifier, to reliably diagnose DR. The findings reveal that the proposed model surpasses existing models in terms of accuracy, time efficiency, and precision rate with the lowest possible error rate.
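The first-level pipeline (green-channel extraction, GLCM texture features, SVM) can be sketched as follows. The feature set and image sizes are illustrative, and scikit-image ≥ 0.19 is assumed for the graycomatrix/graycoprops names.

```python
# A minimal sketch of the first-level stage: green-channel extraction, GLCM
# texture features, and an SVM decision. Synthetic images stand in for real
# fundus photographs, and the ANFIS second level is not reproduced.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(rgb_image: np.ndarray) -> np.ndarray:
    """Return a small GLCM feature vector computed on the green channel."""
    green = rgb_image[:, :, 1].astype(np.uint8)
    glcm = graycomatrix(green, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Synthetic stand-ins for "normal" vs "abnormal" fundus images.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64, 3), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)

X = np.array([glcm_features(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```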
Cataract is the leading cause of visual impairment globally. The scarcity and uneven distribution of ophthalmologists seriously hinder early visual impairment grading for cataract patients in the clinic. In this study, a deep learning-based automated grading system of visual impairment in cataract patients is proposed using a multi-scale efficient channel attention convolutional neural network (MECA_CNN). First, the efficient channel attention mechanism is applied in the MECA_CNN to extract multi-scale features of fundus images, which can effectively focus on lesion-related regions. Then, asymmetric convolutional modules are embedded in the residual unit to reduce the information loss of fine-grained features in fundus images. In addition, an asymmetric loss function is applied to address the higher false-negative rate and weak generalization ability caused by the imbalanced dataset. A total of 7299 fundus images derived from two clinical centers are employed to develop and evaluate the MECA_CNN for identifying mild visual impairment caused by cataract (MVICC), moderate to severe visual impairment caused by cataract (MSVICC), and normal samples. The experimental results demonstrate that the MECA_CNN provides clinically meaningful performance for visual impairment grading on the internal test dataset: MVICC (accuracy, sensitivity, and specificity: 91.3%, 89.9%, and 92%), MSVICC (93.2%, 78.5%, and 96.7%), and normal samples (98.1%, 98.0%, and 98.1%). Comparable performance on the external test dataset further verifies the effectiveness and generalizability of the MECA_CNN model. This study provides a deep learning-based practical system for the automated grading of visual impairment in cataract patients, facilitating the timely formulation of treatment strategies and improving patients' vision prognosis. Funding: the National Natural Science Foundation of China (Nos. 62276210, 82201148, 61775180); the Natural Science Basic Research Program of Shaanxi Province (No. 2022JM-380); the Shaanxi Province College Students' Innovation and Entrepreneurship Training Program (No. S202311664128X); the Natural Science Foundation of Zhejiang Province (No. LQ22H120002); the Medical Health Science and Technology Project of Zhejiang Province (Nos. 2022RC069, 2023KY1140); and the Natural Science Foundation of Ningbo (No. 2023J390).
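The efficient channel attention idea that the MECA_CNN builds on can be illustrated with a small PyTorch module. The kernel size and placement below are generic ECA-style defaults, not the authors' exact multi-scale configuration.

```python
# A hedged PyTorch sketch of an efficient channel attention (ECA) block:
# global pooling produces a channel descriptor, a 1-D convolution mixes
# neighbouring channels, and a sigmoid re-weights the feature maps.
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    """Channel attention via a 1-D convolution over channel descriptors."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.pool(x)                                    # (B, C, 1, 1)
        y = y.squeeze(-1).transpose(1, 2)                   # (B, 1, C)
        y = self.conv(y)                                    # local cross-channel mixing
        y = self.sigmoid(y).transpose(1, 2).unsqueeze(-1)   # (B, C, 1, 1)
        return x * y                                        # re-weight feature maps

feat = torch.randn(2, 64, 56, 56)
print(ECABlock()(feat).shape)   # torch.Size([2, 64, 56, 56])
```

The 1-D convolution is what makes the attention "efficient": it avoids the fully connected bottleneck of squeeze-and-excitation blocks while still letting nearby channels interact.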
Diabetic retinopathy (DR) is one of the most important causes of visual impairment. Automatic recognition of DR lesions, such as hard exudates (EXs), in retinal images can contribute to the diagnosis and screening of the disease. To achieve this goal, an automatic detection approach based on improved fuzzy C-means (IFCM) and support vector machines (SVM) was established and studied. First, color fundus images were segmented by IFCM to obtain candidate EX regions. Then, an SVM classifier, trained with the optimal subset of features, judged these candidate regions, so that hard exudates were detected in the fundus images. Our database comprised 126 images with variable color, brightness, and quality; 70 of them were used to train the SVM and the remaining 56 to assess the performance of the method. Using a lesion-based criterion, we achieved a mean sensitivity of 94.65% and a mean positive predictive value of 97.25%. With an image-based criterion, the approach reached 100% mean sensitivity, 96.43% mean specificity, and 98.21% mean accuracy. Furthermore, the average time to process an image is 4.56 s. The results suggest that the proposed method can efficiently detect EXs in color fundus images and could serve as a diagnostic aid for ophthalmologists in DR screening. Funding: the National High Technology Research and Development Program of China (863 Program) (No. 2006AA020804); the Fundamental Research Funds for the Central Universities (No. NJ20120007); the Jiangsu Province Science and Technology Support Plan (No. BE2010652); the Program Sponsored for Scientific Innovation Research of College Graduates in Jiangsu Province (No. CXLX11_0218); and the Shanghai University Scientific Selection and Cultivation for Outstanding Young Teachers Special Fund (No. ZZGCD15081).
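For orientation, the sketch below implements textbook fuzzy C-means on pixel intensities, the baseline that the paper's improved FCM (IFCM) refines; the authors' specific improvements and the SVM stage are omitted.

```python
# A sketch of standard fuzzy C-means (FCM) clustering, not the paper's IFCM.
# Pixels get soft memberships in each cluster; the brightest cluster serves
# here as a crude stand-in for candidate exudate regions.
import numpy as np

def fcm(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """X: (N, D) samples. Returns (memberships (N, C), centers (C, D))."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1)))      # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)     # normalize across clusters
    return u, centers

# Cluster synthetic green-channel intensities into three groups.
pixels = np.random.default_rng(1).random((5000, 1))
u, centers = fcm(pixels, n_clusters=3)
bright = int(np.argmax(centers[:, 0]))
print("fraction of candidate-exudate pixels:",
      float((u.argmax(1) == bright).mean()))
```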
Optic disc (OD) detection is a main step in developing automated screening systems for diabetic retinopathy. We present a method to automatically locate and extract the OD in digital retinal fundus images. Based on the property that the main blood vessels converge at the OD, the method starts with Otsu thresholding segmentation to obtain candidate OD regions. The main blood vessels are then segmented in the H channel of the color fundus image in hue-saturation-value (HSV) space. Finally, a weighted vessel-direction matched filter is proposed to roughly match the direction of the main blood vessels and obtain the OD center, which is used to pick the true OD out of the candidate regions. The proposed method was evaluated on a dataset containing 100 fundus images of both normal and diseased retinas, and the accuracy reaches 98%. Furthermore, the average time to process an image is 1.3 s. The results suggest that the approach is reliable and can efficiently detect the OD in fundus images. Funding: the National High Technology Research and Development Program of China (863 Program) (No. 2006AA020804); the Fundamental Research Funds for the Central Universities (No. NJ20120007); the Jiangsu Province Science and Technology Support Plan (No. BE2010652); the Program Sponsored for Scientific Innovation Research of College Graduates in Jiangsu Province (No. CXLX11_0218); and the Shanghai University Scientific Selection and Cultivation for Outstanding Young Teachers Special Fund (No. ZZGCD15081).
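The first two steps, Otsu thresholding for candidate OD regions and H-channel processing in HSV space, can be sketched with OpenCV as below. The random image is a placeholder, and the weighted matched filter is not reproduced.

```python
# A minimal OpenCV sketch: Otsu thresholding for bright candidate OD regions
# and a crude vessel map from the H channel in HSV space.
import cv2
import numpy as np

# Synthetic BGR image standing in for a real fundus photograph.
img = np.random.default_rng(0).integers(0, 256, (480, 640, 3), dtype=np.uint8)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, od_candidates = cv2.threshold(gray, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h_channel, _, _ = cv2.split(hsv)
_, vessel_mask = cv2.threshold(h_channel, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)

print("candidate OD pixels:", int(np.count_nonzero(od_candidates)))
print("vessel-mask pixels :", int(np.count_nonzero(vessel_mask)))
```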
The objective of this paper is to provide an overview of automatic cup-to-disc ratio (CDR) assessment in fundus images. Glaucoma ranks second among ocular causes of blindness. Vision loss caused by glaucoma cannot be reversed, but it may be avoided if the disease is screened at an early stage. Thus, early screening for glaucoma is essential to preserve vision and maintain quality of life. Optic nerve head (ONH) assessment is a useful and practical technique among current glaucoma screening methods. Vertical CDR, one of the clinical indicators for ONH assessment, has been widely used by clinicians and professionals for the analysis and diagnosis of glaucoma. The key to automatic calculation of vertical CDR in fundus images is the segmentation of the optic cup (OC) and optic disc (OD). We briefly describe OC and OD segmentation methodologies and comprehensively present them from two perspectives: hand-crafted features and deep learning features. Sliding window regression, super-pixel level classification, image reconstruction, super-pixel level low-rank representation (LRR), and deep learning methodologies for segmentation of the OD and OC are reviewed. It is hoped that this paper can provide guidance and inspiration to other researchers. Every mentioned method has its advantages and limitations, and an appropriate method should be selected or explored according to the actual situation. For automatic glaucoma screening, CDR reflects only a small part of the disc; utilizing comprehensive factors or multimodal images is a promising future direction to further enhance performance. Funding: the National Natural Science Foundation of China (Grant No. 61772118).
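Given cup and disc segmentation masks, the vertical CDR itself is a simple ratio of vertical diameters, as the hedged sketch below shows on synthetic circular masks.

```python
# A small sketch of the vertical cup-to-disc ratio (CDR) computation that
# motivates the cup/disc segmentation methods surveyed above.
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Number of rows spanned by the foreground of a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return int(rows.max() - rows.min() + 1) if rows.size else 0

def make_disc(shape, center, radius):
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

disc_mask = make_disc((400, 400), (200, 200), 120)   # synthetic optic disc
cup_mask = make_disc((400, 400), (200, 200), 55)     # synthetic optic cup

vcdr = vertical_diameter(cup_mask) / vertical_diameter(disc_mask)
print(f"vertical CDR = {vcdr:.2f}")   # larger values raise glaucoma suspicion
```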
A cataract is one of the most significant eye problems worldwide; it does not immediately impair vision but progressively worsens over time. Automatic cataract prediction based on various imaging technologies has recently been addressed, such as smartphone apps used for remote health monitoring and eye treatment. In recent years, advances in diagnosis, prediction, and clinical decision support using Artificial Intelligence (AI) in medicine and ophthalmology have been exponential. Due to privacy concerns, a lack of data makes applying artificial intelligence models in the medical field challenging. To address this issue, a federated learning framework named CDFL, based on a VGG16 deep neural network model, is proposed in this research. The study collects data from the Ocular Disease Intelligent Recognition (ODIR) database containing 5,000 patient records. The significant features are extracted and normalized using the min-max normalization technique. In the federated learning-based technique, the VGG16 model is trained on the dataset individually after receiving model updates from two clients. Before transferring the attributes to the global model, the suggested method trains the local model; the global model then improves after integrating the new parameters. Every client analyses the results over three rounds to reduce the over-fitting problem. The experimental results show the effectiveness of the federated learning-based technique on a Deep Neural Network (DNN), reaching 95.28% accuracy while also preserving the privacy of patient data. The experiments demonstrated that the suggested federated learning model outperforms other traditional methods, achieving an accuracy of 95.0% for client 1 and 96.0% for client 2. Funding: the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia (Project Number 959).
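The client/server exchange described above resembles federated averaging. The hedged sketch below averages the weights of two locally trained clients into a global model over three rounds; a tiny dense network stands in for VGG16, and plain FedAvg is assumed as the aggregation rule rather than the paper's exact scheme.

```python
# A hedged federated-averaging sketch: two clients train locally, and only
# their weights are averaged into the global model each round.
import numpy as np
import tensorflow as tf

def build_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

rng = np.random.default_rng(0)
clients = [(rng.random((200, 32)), rng.integers(0, 2, 200)) for _ in range(2)]

global_model = build_model()
for round_idx in range(3):                              # three communication rounds
    client_weights = []
    for X, y in clients:
        local = build_model()
        local.set_weights(global_model.get_weights())   # start from the global model
        local.fit(X, y, epochs=1, verbose=0)            # raw data never leaves the client
        client_weights.append(local.get_weights())
    # FedAvg: element-wise mean of the clients' weight tensors.
    averaged = [np.mean(layer, axis=0) for layer in zip(*client_weights)]
    global_model.set_weights(averaged)

print("global model updated after", round_idx + 1, "rounds")
```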
AIM: To report the surgical results of pars plana vitrectomy (PPV) with air tamponade for rhegmatogenous retinal detachment (RRD), evaluated with an ultra-widefield fundus imaging system. METHODS: Twenty-five consecutive patients (25 eyes) with fresh primary RRD, a causative retinal break, and vitreous traction were included. All patients underwent PPV with air tamponade. Visual acuity (VA) was examined postoperatively, and images were captured with an ultra-widefield scanning laser ophthalmoscope system (Optos). RESULTS: Initial reattachment was achieved in all 25 cases (100%). The air volume was >60% on postoperative day (POD) 1, and the ultra-widefield images showed that the retina was reattached in all air-filled eyes. The retinal break and surrounding laser burns in the superior retina were detected in 22 of 25 eyes (88%); a missed retinal hole was found under the intravitreal air bubble in 1 case (4%). The air volume ranged from 40% to 60% on POD 3, when a double-layered image was seen in all 25 eyes with intravitreal gas, with retinal breaks and surrounding laser burns visible within the intravitreal air. On POD 7, a small bubble without tamponade effect was seen in 6 cases (24%), the bubble had completely disappeared in 4 cases (16%), and a small oval bubble in the superior area was observed in 15 cases (60%). There were no missed or new retinal breaks and no retinal detachment in any case on POD 14, at 1 month, or at the last follow-up. The air disappeared completely at a mean of 9.84 days postoperatively. The mean final postoperative best-corrected visual acuity (BCVA) was 0.35 logMAR, improved significantly relative to the mean preoperative value (P<0.05), and a final VA of 0.3 logMAR or better was seen in 13 eyes. CONCLUSION: PPV with air tamponade is an effective management for fresh RRD with superior retinal breaks. Ultra-widefield fundus imaging can detect postoperative retinal breaks in air-filled eyes and is a useful tool for follow-up after PPV with air tamponade; it may allow a shorter face-down positioning period and earlier visual rehabilitation.
Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has been a promising technique in fundus imaging with growing popularity. This review first gives a brief history of adaptive optics (AO) and AO-SLO. It then compares AO-SLO with conventional imaging methods (fundus fluorescein angiography, fundus autofluorescence, indocyanine green angiography, and optical coherence tomography) and with other AO techniques (adaptive optics flood-illumination ophthalmoscopy and adaptive optics optical coherence tomography). Furthermore, it updates the current research situation in AO-SLO for different fundus structures: photoreceptors (cones and rods), fundus vessels, the retinal pigment epithelium layer, the retinal nerve fiber layer, the ganglion cell layer, and the lamina cribrosa. Finally, the review indicates possible future research directions for AO-SLO. Funding: the National Key Scientific Instrument and Equipment Development Project of China (No. 2012YQ12008005).
Blindness, a degrading and disabling condition, is the final stage reached when a certain threshold of visual acuity is crossed. It results from vision deficiencies, pathologic states caused by many ocular diseases. Among them, diabetic retinopathy is nowadays a chronic disease that affects most diabetic patients. Early detection through automatic screening programs considerably reduces the expansion of the disease, and exudates are one of its earliest signs. This paper presents an automated method for exudate detection in digital retinal fundus images. The first step consists of image enhancement based on histogram expansion and median filtering; the difference between the filtered image and its inverse reduces noise and removes the background while preserving features and patterns related to exudates. The second step removes blood vessels using morphological operators. In the last step, we compute the result image with an algorithm based on entropy maximization thresholding to obtain two segmented regions (optic disc and exudates) which were highlighted in the second step. Finally, according to size criteria, we eliminate the other regions to obtain the regions of interest related to exudates. Evaluations were done with the DIARETDB1 retinal fundus image database. DIARETDB1 gathers high-quality medical images that have been verified by experts; it consists of 89 colour fundus images, of which 84 contain at least mild non-proliferative signs of diabetic retinopathy. This tool provides a unified framework for benchmarking methods, but also points out clear deficiencies in current practice in method development. Compared with other recent methods available in the literature, the proposed algorithm achieves better results in terms of sensitivity (94.27%) and specificity (97.63%).
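The enhancement and thresholding steps can be approximated with scikit-image as below. Yen's entropy-based threshold is used as a stand-in for the paper's entropy maximization thresholding, and the vessel-removal step is reduced to a simple morphological opening.

```python
# A hedged sketch of the exudate-detection preprocessing: contrast stretching,
# median filtering, difference with the inverse, morphological opening, and an
# entropy-based threshold (Yen's method as a stand-in).
import numpy as np
from scipy import ndimage
from skimage import exposure, filters, morphology

rng = np.random.default_rng(0)
fundus_gray = rng.random((256, 256))            # placeholder retinal image

# Step 1: histogram expansion (contrast stretching) and median filtering.
stretched = exposure.rescale_intensity(fundus_gray)
smoothed = ndimage.median_filter(stretched, size=3)

# Difference with the inverse emphasises bright, exudate-like structures.
enhanced = np.clip(smoothed - (1.0 - smoothed), 0.0, 1.0)

# Step 2: crude vessel suppression with a morphological opening.
opened = morphology.opening(enhanced, morphology.disk(2))

# Step 3: entropy-based threshold to isolate bright candidate regions.
thresh = filters.threshold_yen(opened)
candidates = opened > thresh
print("candidate bright pixels:", int(candidates.sum()))
```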
I am Dr. Ke Yao, from the Eye Center, the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China. I write to present three cases with metastatic choroidal tumor using an ultra-wide-field scanning laser ophthalmoscope. Funding: the Zhejiang Natural Science Foundation Project of China (No. LY18H120001).
We propose and implement wide-field vibrational phase contrast detection to image the imaginary component of the third-order nonlinear susceptibility in a coherent anti-Stokes Raman scattering (CARS) microscope with full suppression of the non-resonant background. This technique is based on the unique ability to recover the phase of the generated CARS signal through holographic recording. By capturing the phase distributions of the CARS field generated from the sample and from the environment under resonant illumination, we demonstrate the retrieval of the imaginary component in the CARS microscope and achieve background-free coherent Raman imaging. Funding: the National Natural Science Foundation of China (Grant Nos. 11174019, 61322509, and 11121091) and the National Basic Research Program of China (Grant No. 2013CB921904).
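For context, the textbook decomposition of the CARS intensity shows why isolating the imaginary part of the resonant susceptibility removes the non-resonant background (assuming the non-resonant term is real); this is standard CARS theory, not a result specific to this paper.

```latex
I_{\mathrm{CARS}}(\omega) \propto
  \left|\chi^{(3)}_{\mathrm{NR}} + \chi^{(3)}_{\mathrm{R}}(\omega)\right|^{2}
  = \left|\chi^{(3)}_{\mathrm{NR}}\right|^{2}
  + 2\,\chi^{(3)}_{\mathrm{NR}}\,\operatorname{Re}\chi^{(3)}_{\mathrm{R}}(\omega)
  + \left|\chi^{(3)}_{\mathrm{R}}(\omega)\right|^{2}
```

Because the spontaneous-Raman-like resonance information is carried by Im χ(3)_R(ω), a measurement that recovers the complex CARS field, such as the holographic phase recording described here, can report the imaginary part directly, free of the non-resonant contribution.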
Today, many eye diseases jeopardize our everyday lives, such as diabetic retinopathy (DR), age-related macular degeneration (AMD), and glaucoma. Glaucoma is an incurable and unavoidable eye disease that damages the optic nerve, degrading vision and quality of life. Classification of glaucoma has been an active field of research for the past ten years. Several approaches for glaucoma classification have been established, ranging from conventional segmentation and feature-extraction methods to deep-learning techniques such as Convolutional Neural Networks (CNN). A CNN classifies input images directly, extracting features through tuned convolution and pooling layers. However, the volume of the training dataset determines the performance of a CNN, and models trained on small datasets suffer from overfitting; CNNs are therefore often developed with transfer learning. The primary aim of this study is to explore the potential of EfficientNet with transfer learning for the classification of glaucoma. The performance of the current work is compared with other models, namely VGG16, InceptionV3, and Xception, using public datasets such as RIM-ONE V2 & V3, ORIGA, DRISHTI-GS1, HRF, and ACRIMA. The dataset was split into training, validation, and testing sets with a ratio of 70:15:15. Assessment on the test dataset shows that the pre-trained EfficientNetB4 achieved the highest performance compared with the other models listed above. The proposed method achieved 99.38% accuracy and also better results for other metrics, such as sensitivity, specificity, precision, F1_score, Kappa score, and Area Under the Curve (AUC), compared with the other models.
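A hedged Keras sketch of the transfer-learning setup the study explores is shown below. The 380x380 input, frozen ImageNet backbone, and sigmoid head are common defaults rather than the authors' exact training recipe, and the 70:15:15 split is not reproduced.

```python
# A hedged sketch of EfficientNetB4 transfer learning for binary glaucoma
# classification: reuse ImageNet features, train only a small new head.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.EfficientNetB4(
    include_top=False, weights="imagenet", input_shape=(380, 380, 3))
base.trainable = False                      # freeze the pre-trained backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # glaucoma vs. normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
```

Fine-tuning the upper backbone blocks at a lower learning rate after the head converges is a common follow-up step when the dataset is large enough.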
Blood vessels in ophthalmoscope images play an important role in the diagnosis of serious pathologies in retinal images; hence, accurate extraction of vessels has become a main topic of this research area. In this paper, a new hybrid approach combining a genetic algorithm and vertex chain code is proposed for blood vessel detection. The method uses geometrical parameters of the retinal vascular tree for diagnosing hypertension and automatically identifies retinal exudates from color retinal images. The skeletons of the segmented trees are produced by thinning. Three types of landmarks in the skeleton must be detected: terminal points, bifurcation points, and crossing points; these points are labeled and stored as a chain code. The proposed system achieves a diagnostic performance of 96.0% sensitivity and 98.4% specificity for the identification of images containing any evidence of retinopathy.
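The skeleton landmark step can be sketched by counting 8-connected neighbours on a thinned mask, as below. The genetic algorithm and the vertex-chain-code encoding themselves are not reproduced.

```python
# A minimal sketch of skeleton landmark detection: thin a binary vessel mask
# and label terminal points (one neighbour) and bifurcation/crossing points
# (three or more neighbours).
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

# Toy binary "vessel" mask: a cross shape.
mask = np.zeros((64, 64), dtype=bool)
mask[32, 8:56] = True
mask[8:56, 32] = True

skeleton = skeletonize(mask)

# Count 8-connected skeleton neighbours of every skeleton pixel.
kernel = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])
neighbours = ndimage.convolve(skeleton.astype(int), kernel, mode="constant")

terminals = skeleton & (neighbours == 1)
branchings = skeleton & (neighbours >= 3)      # bifurcations and crossings
print("terminal points        :", int(terminals.sum()))
print("branch/crossing points :", int(branchings.sum()))
```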