In ophthalmology, the quality of fundus images is critical for accurate diagnosis, both in clinical practice and in artificial intelligence (AI)-assisted diagnostics. Despite the broad view provided by ultrawide-field (UWF) imaging, pseudocolor images may conceal critical lesions necessary for precise diagnosis. To address this, we introduce UWF-Net, a sophisticated image enhancement algorithm that takes disease characteristics into consideration. Using the Fudan University ultra-wide-field image (FDUWI) dataset, which includes 11294 Optos pseudocolor and 2415 Zeiss true-color UWF images, each of which is rigorously annotated, UWF-Net combines global style modeling with feature-level lesion enhancement. A pathological consistency loss is also applied to maintain fundus feature integrity, significantly improving image quality. Quantitative and qualitative evaluations demonstrated that UWF-Net outperforms existing methods such as contrast-limited adaptive histogram equalization (CLAHE) and the structure and illumination constrained generative adversarial network (StillGAN), delivering superior retinal image quality, higher quality scores, and preserved feature details after enhancement. In disease classification tasks, images enhanced by UWF-Net showed notable improvements when processed with existing classification systems over those enhanced by StillGAN, demonstrating a 4.62% increase in sensitivity (SEN) and a 3.97% increase in accuracy (ACC). In a multicenter clinical setting, UWF-Net-enhanced images were preferred by ophthalmologic technicians and doctors, and yielded a significant reduction in diagnostic time ((13.17±8.40) s for UWF-Net-enhanced images vs (19.54±12.40) s for original images) and an increase in diagnostic accuracy (87.71% for UWF-Net-enhanced images vs 80.40% for original images). Our research verifies that UWF-Net markedly improves the quality of UWF imaging, facilitating better clinical outcomes and more reliable AI-assisted disease classification. The clinical integration of UWF-Net holds great promise for enhancing diagnostic processes and patient care in ophthalmology.
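For reference, the CLAHE baseline mentioned above can be applied to the luminance channel of a color fundus image in a few lines; the sketch below uses OpenCV and is an illustration of the conventional baseline only, not of UWF-Net itself (the file name, clip limit, and tile size are our assumptions).

```python
import cv2

def clahe_enhance(bgr_image, clip_limit=2.0, tile_grid=(8, 8)):
    """Contrast-limited adaptive histogram equalization on the L channel (illustrative baseline)."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

enhanced = clahe_enhance(cv2.imread("uwf_pseudocolor.png"))  # hypothetical file path
```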
Artificial Intelligence (AI) is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy (VTDR), which is a leading cause of visual impairment and blindness worldwide. However, previous automated VTDR detection methods have mainly relied on manual feature extraction and classification, leading to errors. This paper proposes a novel VTDR detection and classification model that combines different models through majority voting. Our proposed methodology involves preprocessing, data augmentation, feature extraction, and classification stages. We use a hybrid convolutional neural network-singular value decomposition (CNN-SVD) model for feature extraction and selection, and an improved SVM-RBF with a Decision Tree (DT) and K-Nearest Neighbor (KNN) for classification. We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%, a sensitivity of 83.67%, and a specificity of 100% for the DR detection and evaluation tests, respectively. Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
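A majority vote over an RBF-kernel SVM, a decision tree, and a KNN classifier can be expressed directly with scikit-learn; the sketch below is a minimal illustration of that ensemble idea, with hyperparameters and placeholder variable names (X_train, y_train) chosen by us rather than taken from the paper.

```python
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Hard majority vote over an RBF-kernel SVM, a decision tree, and a KNN classifier.
ensemble = VotingClassifier(
    estimators=[
        ("svm_rbf", SVC(kernel="rbf", C=1.0, gamma="scale")),
        ("dt", DecisionTreeClassifier(max_depth=8)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="hard",
)
# X_train would hold feature vectors (e.g. CNN-SVD features) and y_train the VTDR labels:
# ensemble.fit(X_train, y_train)
# predictions = ensemble.predict(X_test)
```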
The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA, also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish all three prediction tasks with a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our proposal. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
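The three image-quality metrics used for validation can be computed with standard tooling; below is a minimal scikit-image sketch (the file names are placeholders, and reading as floating-point grayscale in [0, 1] is our assumption).

```python
from skimage import io
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

predicted = io.imread("predicted_ffa.png", as_gray=True)     # float image in [0, 1]
reference = io.imread("ground_truth_ffa.png", as_gray=True)

mse = mean_squared_error(reference, predicted)
psnr = peak_signal_noise_ratio(reference, predicted, data_range=1.0)
ssim = structural_similarity(reference, predicted, data_range=1.0)
print(f"MSE={mse:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}")
```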
AIM: To summarize the application of deep learning in detecting ophthalmic disease with ultrawide-field fundus images and to analyze the advantages, limitations, and possible solutions common to all tasks. METHODS: We searched three academic databases, PubMed, Web of Science, and Ovid, in August 2022. We matched and screened records according to the target keywords and publication year, retrieving a total of 4358 research papers, of which 23 studies applied deep learning to the diagnosis of ophthalmic disease with ultrawide-field images. RESULTS: Deep learning on ultrawide-field images can detect various ophthalmic diseases with strong performance, including diabetic retinopathy, glaucoma, age-related macular degeneration, retinal vein occlusions, retinal detachment, and other peripheral retinal diseases. Compared with conventional fundus photography, ultrawide-field fundus scanning laser ophthalmoscopy captures up to 200° of the ocular fundus in a single exposure, allowing more of the retina to be observed. CONCLUSION: The combination of ultrawide-field fundus images and artificial intelligence is expected to achieve strong performance in diagnosing multiple ophthalmic diseases in the future.
The use of deep learning algorithms for the investigation and analysis of medical images has emerged as a powerful technique. The increase in retinal diseases is alarming, as they may lead to permanent blindness if left untreated. Automation of the diagnostic process for retinal diseases not only assists ophthalmologists in correct decision-making but also saves time. Several researchers have worked on automated retinal disease classification but have been restricted either to hand-crafted feature selection or to binary classification. This paper presents a deep learning-based approach for the automated classification of multiple retinal diseases using fundus images. For this research, the data were collected and combined from three distinct sources. The images are preprocessed to enhance their details. Six layers of a convolutional neural network (CNN) are used for automated feature extraction and the classification of 20 retinal diseases. It is observed that the results depend on the number of classes. For binary classification (healthy vs. unhealthy), up to 100% accuracy has been achieved. When 16 classes are used (treating the stages of a disease as a single class), 93.3% accuracy, 92% sensitivity, and 93% specificity have been obtained. For 20 classes (treating the stages of a disease as separate classes), the accuracy, sensitivity, and specificity drop to 92.4%, 92%, and 92%, respectively.
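As a rough illustration of a small multi-class fundus CNN of this kind, a Keras sketch might look like the following; the exact architecture, input size, and hyperparameters are our assumptions and not the authors' six-layer design.

```python
from tensorflow.keras import layers, models

def build_fundus_cnn(num_classes=20, input_shape=(224, 224, 3)):
    """A small convolutional classifier for fundus images (illustrative only)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```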
In recent years, there has been a significant increase in the number of people suffering from eye illnesses, which should be treated as soon as possible in order to avoid blindness. Retinal fundus images are employed for this purpose, as well as for analysing eye abnormalities and diagnosing eye illnesses. Exudates can be recognised as bright lesions in fundus images and can be the first indicator of diabetic retinopathy. With that in mind, the purpose of this work is to create an Integrated Model for Exudate and Diabetic Retinopathy Diagnosis (IM-EDRD) with multi-level classification. The model uses Support Vector Machine (SVM)-based classification to separate normal and abnormal fundus images at the first level. The input images for the SVM are pre-processed with green channel extraction, and the retrieved features are based on the Gray Level Co-occurrence Matrix (GLCM). The presence of exudates and Diabetic Retinopathy (DR) in fundus images is then detected using an Adaptive Neuro Fuzzy Inference System (ANFIS) classifier at the second level of classification. Exudate detection, blood vessel extraction, and Optic Disc (OD) detection are all processed to achieve suitable results. The second-level processing also comprises Morphological Component Analysis (MCA)-based image enhancement and object segmentation, as well as feature extraction for training the ANFIS classifier, to reliably diagnose DR. The findings reveal that the proposed model surpasses existing models in terms of accuracy, time efficiency, and precision rate, with the lowest possible error rate.
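To make the first-level pipeline concrete, here is a minimal sketch of green-channel extraction, GLCM texture features, and an SVM decision; it assumes an 8-bit RGB input and scikit-image 0.19+ naming, and the feature set and SVM settings are illustrative rather than the paper's.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(rgb_uint8):
    """Texture features from the green channel of an 8-bit RGB fundus image."""
    green = rgb_uint8[:, :, 1]  # the green channel usually carries the most lesion contrast
    glcm = graycomatrix(green, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# features = np.array([glcm_features(img) for img in training_images])
# svm = SVC(kernel="rbf").fit(features, labels)   # first level: normal vs. abnormal
```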
Cataract is the leading cause of visual impairment globally. The scarcity and uneven distribution of ophthalmologists seriously hinder early visual impairment grading for cataract patients in the clinic. In this study, a deep learning-based automated grading system for visual impairment in cataract patients is proposed using a multi-scale efficient channel attention convolutional neural network (MECA_CNN). First, the efficient channel attention mechanism is applied in the MECA_CNN to extract multi-scale features of fundus images, which can effectively focus on lesion-related regions. Then, asymmetric convolutional modules are embedded in the residual unit to reduce the information loss of fine-grained features in fundus images. In addition, an asymmetric loss function is applied to address the higher false-negative rate and weak generalization ability caused by the imbalanced dataset. A total of 7299 fundus images derived from two clinical centers are employed to develop and evaluate the MECA_CNN for identifying mild visual impairment caused by cataract (MVICC), moderate to severe visual impairment caused by cataract (MSVICC), and normal samples. The experimental results demonstrate that the MECA_CNN provides clinically meaningful performance for visual impairment grading on the internal test dataset: MVICC (accuracy, sensitivity, and specificity: 91.3%, 89.9%, and 92%), MSVICC (93.2%, 78.5%, and 96.7%), and normal samples (98.1%, 98.0%, and 98.1%). Comparable performance is achieved on the external test dataset, further verifying the effectiveness and generalizability of the MECA_CNN model. This study provides a deep learning-based practical system for the automated grading of visual impairment in cataract patients, facilitating the timely formulation of treatment strategies and improving patients' vision prognosis.
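For readers unfamiliar with efficient channel attention (ECA), a minimal PyTorch sketch of the generic module is shown below; the kernel size and placement are our assumptions, and this illustrates the published ECA idea rather than the specific MECA_CNN implementation.

```python
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    """Efficient channel attention: per-channel weights from a 1D conv over pooled descriptors."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                                   # x: (N, C, H, W)
        y = self.pool(x)                                    # (N, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))        # (N, 1, C): conv across channels
        y = self.sigmoid(y.transpose(1, 2).unsqueeze(-1))   # back to (N, C, 1, 1)
        return x * y                                        # reweight feature maps channel-wise

attn_out = ECABlock()(torch.randn(2, 64, 56, 56))  # sanity check on a dummy feature map
```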
AIM: To report the surgical results of pars plana vitrectomy (PPV) with air tamponade for rhegmatogenous retinal detachment (RRD), assessed with an ultra-widefield fundus imaging system. METHODS: Twenty-five consecutive patients (25 eyes) with fresh primary RRD, a causative retinal break, and vitreous traction were included. All patients underwent PPV with air tamponade. Visual acuity (VA) was examined postoperatively, and images were captured with an ultra-widefield scanning laser ophthalmoscope system (Optos). RESULTS: Initial reattachment was achieved in all 25 cases (100%). The air volume was >60% on postoperative day (POD) 1. The ultra-widefield images showed that the retina was reattached in all air-filled eyes postoperatively. The retinal break and surrounding laser burns in the superior retina were detected in 22 of 25 eyes (88%). A missed retinal hole was found under the intravitreal air bubble in 1 case (4%). The air volume ranged from 40% to 60% on POD 3. A double-layered image was seen in all 25 eyes with intravitreal gas, and the retinal breaks and surrounding laser burns were visible through the intravitreal air. On POD 7, a small bubble with no remaining tamponade effect was seen in 6 cases (24%), the bubble had completely disappeared in 4 cases (16%), and a small oval bubble in the superior area was observed in 15 cases (60%). There were no missed or new retinal breaks and no retinal detachment in any case on POD 14, at 1 month, or at the last follow-up. The air disappeared completely at a mean of 9.84 days postoperatively. The mean final postoperative best-corrected visual acuity (BCVA) was 0.35 logMAR and improved significantly relative to the mean preoperative value (P<0.05). A final VA of 0.3 logMAR or better was achieved in 13 eyes. CONCLUSION: PPV with air tamponade is an effective management for fresh RRD with superior retinal breaks. Ultra-widefield fundus imaging can detect postoperative retinal breaks in air-filled eyes and is a useful tool for follow-up after PPV with air tamponade; it may also allow the face-down positioning period to be shortened and visual rehabilitation to be achieved sooner.
Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has been a promising technique in fundus imaging with growing popularity. This review first gives a brief history of adaptive optics (AO) and AO-SLO. It then compares AO-SLO with conventional imaging methods (fundus fluorescein angiography, fundus autofluorescence, indocyanine green angiography, and optical coherence tomography) and with other AO techniques (adaptive optics flood-illumination ophthalmoscopy and adaptive optics optical coherence tomography). Furthermore, an update on the current state of AO-SLO research is given for different fundus structures: photoreceptors (cones and rods), fundus vessels, the retinal pigment epithelium layer, the retinal nerve fiber layer, the ganglion cell layer, and the lamina cribrosa. Finally, this review indicates possible future research directions for AO-SLO.
Diabetic retinopathy (DR) is one of the most important causes of visual impairment. Automatic recognition of DR lesions, such as hard exudates (EXs), in retinal images can contribute to the diagnosis and screening of the disease. To achieve this goal, an automatic detection approach based on improved fuzzy c-means (IFCM) clustering and support vector machines (SVM) was established and studied. First, color fundus images were segmented by IFCM to obtain candidate EX regions. Then, an SVM classifier trained on the optimal feature subset judged these candidate regions, so that hard exudates were detected in the fundus images. Our database was composed of 126 images with variable color, brightness, and quality; 70 of them were used to train the SVM and the remaining 56 to assess the performance of the method. Using a lesion-based criterion, we achieved a mean sensitivity of 94.65% and a mean positive predictive value of 97.25%. With an image-based criterion, our approach reached a 100% mean sensitivity, 96.43% mean specificity, and 98.21% mean accuracy. Furthermore, the average time to process an image is 4.56 s. The results suggest that the proposed method can efficiently detect EXs in color fundus images and could serve as a diagnostic aid for ophthalmologists in DR screening.
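For orientation, a minimal NumPy sketch of standard fuzzy c-means clustering on pixel intensities is given below; it illustrates the plain FCM step only, under our own initialization and stopping choices, and not the improved IFCM variant or the subsequent SVM stage.

```python
import numpy as np

def fuzzy_cmeans(x, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard FCM on a 1-D feature vector x (e.g. flattened pixel intensities)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                               # fuzzy memberships, columns sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1)            # membership-weighted cluster centers
        dist = np.abs(x[None, :] - centers[:, None]) + 1e-12
        new_u = 1.0 / (dist ** (2 / (m - 1)) * np.sum(dist ** (-2 / (m - 1)), axis=0))
        if np.max(np.abs(new_u - u)) < tol:
            u = new_u
            break
        u = new_u
    return centers, u

# centers, memberships = fuzzy_cmeans(green_channel.ravel().astype(float), n_clusters=4)
```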
Blindness, considered a degrading and disabling condition, is the final stage that occurs when a certain threshold of visual acuity is crossed. It arises from vision deficiencies, that is, pathologic states caused by many ocular diseases. Among them, diabetic retinopathy is nowadays a chronic disease that affects most diabetic patients. Early detection through automatic screening programs considerably reduces the progression of the disease, and exudates are one of its earliest signs. This paper presents an automated method for exudate detection in digital retinal fundus images. The first step consists of image enhancement, focusing on histogram expansion and median filtering. The difference between the filtered image and its inverse reduces noise and removes the background while preserving the features and patterns related to the exudates. The second step is blood vessel removal using morphological operators. In the last step, we compute the result image with an algorithm based on entropy maximization thresholding to obtain two segmented regions (the optic disc and the exudates), which were highlighted in the second step. Finally, according to a size criterion, we eliminate the other regions to obtain the regions of interest related to exudates. Evaluations were done with the DIARETDB1 retinal fundus image database. DIARETDB1 gathers high-quality medical images that have been verified by experts; it consists of 89 colour fundus images, of which 84 contain at least mild non-proliferative signs of diabetic retinopathy. This tool provides a unified framework for benchmarking methods, but also points out clear deficiencies in current method-development practice. Compared with other recent methods available in the literature, we found that the proposed algorithm achieves better results in terms of sensitivity (94.27%) and specificity (97.63%).
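Entropy maximization thresholding in the Kapur sense can be sketched in a few lines of NumPy; the version below is a generic single-threshold illustration under our own assumptions, not the exact algorithm used in the paper.

```python
import numpy as np

def kapur_threshold(gray_uint8):
    """Pick the threshold that maximizes the summed entropy of background and foreground."""
    hist = np.bincount(gray_uint8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_entropy = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))   # background entropy
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))   # foreground entropy
        if h0 + h1 > best_entropy:
            best_entropy, best_t = h0 + h1, t
    return best_t

# binary = gray_image >= kapur_threshold(gray_image)   # bright regions: optic disc and exudates
```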
Optic disc (OD) detection is a main step in developing automated screening systems for diabetic retinopathy. We present a method to automatically locate and extract the OD in digital retinal fundus images. Based on the property that the main blood vessels converge at the OD, the method starts with Otsu thresholding to obtain candidate OD regions. Next, the main blood vessels are segmented in the H channel of the color fundus image in hue-saturation-value (HSV) space. Finally, a weighted vessel-direction matched filter is proposed to roughly match the direction of the main blood vessels and obtain the OD center, which is used to pick the true OD out of the candidate regions. The proposed method was evaluated on a dataset containing 100 fundus images of both normal and diseased retinas, and the accuracy reaches 98%. Furthermore, the average time to process an image is 1.3 s. The results suggest that the approach is reliable and can efficiently detect the OD in fundus images.
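A minimal sketch of the first two stages, Otsu thresholding of the brightest regions as OD candidates and extraction of the hue channel for vessel segmentation, is shown below using scikit-image; the parameter choices and file name are ours, and the weighted vessel-direction matched filter itself is not reproduced.

```python
from skimage import color, filters, io, morphology

rgb = io.imread("fundus.png")[:, :, :3]            # hypothetical input image
gray = color.rgb2gray(rgb)

# Stage 1: bright candidate regions for the optic disc via Otsu thresholding.
candidates = gray > filters.threshold_otsu(gray)
candidates = morphology.remove_small_objects(candidates, min_size=500)

# Stage 2: hue channel of the HSV representation, later used for main-vessel segmentation.
hue = color.rgb2hsv(rgb)[:, :, 0]
```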
The objective of this paper is to provide an overview of automatic cup-to-disc ratio (CDR) assessment in fundus images. Glaucoma ranks second among ocular causes of blindness. Vision loss caused by glaucoma cannot be reversed, but it may be avoided if the disease is detected in its early stage; early screening for glaucoma is therefore essential to preserve vision and maintain quality of life. Optic nerve head (ONH) assessment is a useful and practical technique among current glaucoma screening methods, and the vertical CDR, one of the clinical indicators for ONH assessment, is widely used by clinicians and professionals for the analysis and diagnosis of glaucoma. The key to automatic calculation of the vertical CDR in fundus images is the segmentation of the optic cup (OC) and optic disc (OD). We briefly describe OC and OD segmentation methodologies and present them from two perspectives: hand-crafted features and deep learning features. Sliding-window regression, superpixel-level methods, image reconstruction, superpixel-level low-rank representation (LRR), and deep learning methodologies for OD and OC segmentation are covered. It is hoped that this paper can provide guidance and inspiration for other researchers. Every mentioned method has its advantages and limitations, and the appropriate method should be selected or explored according to the actual situation. For automatic glaucoma screening, the CDR reflects only a small part of the disc, so utilizing comprehensive factors or multimodal images is a promising future direction for further enhancing performance.
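Once OC and OD masks are available, the vertical CDR reduces to a ratio of vertical extents; a minimal NumPy sketch, assuming binary masks as inputs, is:

```python
import numpy as np

def vertical_extent(mask):
    """Height in pixels of the region marked True in a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return 0 if rows.size == 0 else rows.max() - rows.min() + 1

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from segmented optic cup and optic disc masks."""
    disc_height = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc_height if disc_height else float("nan")

# cdr = vertical_cdr(cup_mask, disc_mask)   # larger values generally indicate higher glaucoma suspicion
```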
AIM: To compare choroidal neovascularization (CNV) lesion measurements obtained by in vivo imaging modalities with whole-mount histological preparations stained with isolectin GS-IB4, using a murine laser-induced CNV model. METHODS: B6N.Cg-Tg(Csf1r-EGFP)1Hume/J heterozygous adult mice were subjected to laser-induced CNV and were monitored by fluorescein angiography (FA), multicolor (MC) fundus imaging, and optical coherence tomography angiography (OCTA) at day 14 after CNV induction. Choroidal-retinal pigment epithelium (RPE) whole mounts were prepared at the end of the experiment and stained with isolectin GS-IB4. CNV areas were measured in all imaging modalities at day 14 after CNV by three independent raters and compared with the choroidal-RPE whole mounts. The intraclass correlation coefficient (ICC) type 2 (two-way random model) and its 95% confidence intervals (CI) were calculated to measure the agreement between the raters' measurements. Spearman's rank correlation coefficient (Spearman's r) was calculated for the comparison between the FA, MC, and OCTA data and the histology data. RESULTS: FA (early and late) and MC correlated well with the ex vivo CNV measurements, with FA showing a slightly better correlation than MC (FA early Spearman's r=0.7642, FA late Spearman's r=0.7097, and MC Spearman's r=0.7418), while interobserver reliability was good for both techniques (FA early ICC=0.976, FA late ICC=0.964, and MC ICC=0.846). In contrast, OCTA showed a poor correlation with the ex vivo measurements (Spearman's r=0.05716) and high variability between raters (ICC=0.603). CONCLUSION: This study suggests that FA and MC imaging can be used to evaluate CNV areas in vivo, whereas caution must be taken and comparison studies should be performed when OCTA is employed as a CNV monitoring tool in small rodents.
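Both statistics can be computed directly from a lesion-by-rater measurement matrix; the following is a minimal sketch using SciPy for Spearman's r and a hand-rolled two-way random, single-measure ICC(2,1), where the data layout and the single-measure choice are our assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    ratings: array of shape (n_subjects, k_raters)."""
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()
    msr = k * np.sum((y.mean(axis=1) - grand) ** 2) / (n - 1)      # between-subject mean square
    msc = n * np.sum((y.mean(axis=0) - grand) ** 2) / (k - 1)      # between-rater mean square
    sse = np.sum((y - y.mean(axis=1, keepdims=True)
                    - y.mean(axis=0, keepdims=True) + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                                 # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# rho, p_value = spearmanr(in_vivo_areas, histology_areas)   # modality vs. ex vivo agreement
# icc = icc_2_1(rater_matrix)                                 # interobserver reliability
```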
Purpose: To achieve clear retinal imaging and output and to support the development of retinopathy of prematurity (ROP) screening that is safe and effective for premature infants. Methods: A computer-assisted binocular indirect ophthalmoscope imaging and output system was equipped with a camera and image-processing hardware and connected to computers. The fundus examination process was videotaped (photographed) and output. Simulated eyes were used to adjust the video head and acquire stable, clear fundus images with the binocular indirect ophthalmoscope in premature infants. Results: The fundus imaging output technique was successfully established, and the common reasons for unclear imaging and the corresponding solutions were summarized. The technique can capture and output stable, clear fundus images of premature infants. Conclusion: Assisted by hardware and software processing, a computer-assisted binocular indirect ophthalmoscope imaging and output system was established, which can be used for the screening, research, treatment, and follow-up of ROP in premature babies, resolving the difficulty of obtaining clear fundus photographs.
AIM: To evaluate the value of ultra-wide-field (UWF) imaging in the management of traumatic retinopathy in eyes with corneal scar or a fixed small pupil after complicated ocular trauma. METHODS: Twenty-eight patients (28 eyes) with complicated ocular trauma were enrolled in the study from June 2016 to May 2017, including 19 males and 9 females aged from 11 to 64 (43.42±12.62) years. All patients were treated with secondary vitrectomy after an emergency operation for wound repair of open ocular trauma. Direct ophthalmoscopy and 45-degree fundus photography were performed at each follow-up time point for comparison of their findings with UWF images. Routine eye examinations, including visual acuity, intraocular pressure, and slit-lamp examination, were performed and analyzed as well. RESULTS: Among the 28 traumatized eyes, the positive rate for identification of traumatic retinopathy was 32.1% (9 cases), 14.9% (5 cases), and 85.7% (24 cases) with direct ophthalmoscopy, 45-degree fundus photography, and UWF imaging, respectively. The detection rate of UWF imaging in eyes with corneal scar or a fixed small pupil was statistically greater than that of 45-degree fundus photography and direct ophthalmoscopy (Bonferroni correction, P<0.001). UWF images were obtained in 19 eyes with opaque corneal scar whose fundus could not otherwise be seen with conventional methods. Additional traumatic retinopathy findings detected by UWF imaging included periretinal membranes or pre-retinal proliferative strips, retinal holes, and hemorrhage in the vitreous or sub-retinal space. CONCLUSION: UWF imaging is superior to traditional fundus photography for evaluating traumatic retinopathy in eyes with corneal scar or a fixed small pupil after complicated ocular trauma.
A cataract is one of the most significant eye problems worldwide; it does not immediately impair vision but progressively worsens over time. Automatic cataract prediction based on various imaging technologies has been addressed recently, for example through smartphone apps used for remote health monitoring and eye treatment. In recent years, advances in diagnosis, prediction, and clinical decision support using Artificial Intelligence (AI) in medicine and ophthalmology have been exponential. Owing to privacy concerns, a lack of data makes applying artificial intelligence models in the medical field challenging. To address this issue, a federated learning framework named CDFL, based on a VGG16 deep neural network model, is proposed in this research. The study collects data from the Ocular Disease Intelligent Recognition (ODIR) database containing 5,000 patient records. The significant features are extracted and normalized using the min-max normalization technique. In the federated learning-based technique, the VGG16 model is trained on the dataset individually after receiving model updates from two clients. Before transferring the attributes to the global model, the suggested method trains the local model; the global model then improves after integrating the new parameters. Every client analyses the results over three rounds to reduce the over-fitting problem. The experimental results show the effectiveness of the federated learning-based technique on a Deep Neural Network (DNN), reaching 95.28% accuracy while also preserving the privacy of patients' data. The experiment demonstrated that the suggested federated learning model outperforms other traditional methods, achieving a client 1 accuracy of 95.0% and a client 2 accuracy of 96.0%.
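The aggregation step of such a two-client federated scheme is essentially federated averaging; a minimal PyTorch sketch is shown below, where uniform client weighting and the use of torchvision's VGG16 are our assumptions rather than details taken from the paper.

```python
import torch
from torchvision.models import vgg16

def federated_average(client_state_dicts):
    """Average the parameters of several client models into one global state dict."""
    global_state = {}
    for name in client_state_dicts[0]:
        stacked = torch.stack([sd[name].float() for sd in client_state_dicts])
        global_state[name] = stacked.mean(dim=0)
    return global_state

# One communication round with two clients (local training loops omitted for brevity):
global_model = vgg16(num_classes=2)
client_a, client_b = vgg16(num_classes=2), vgg16(num_classes=2)
for client in (client_a, client_b):
    client.load_state_dict(global_model.state_dict())   # broadcast the global weights
    # ... local training on the client's private cataract data would happen here ...
global_model.load_state_dict(
    federated_average([client_a.state_dict(), client_b.state_dict()])
)
```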
I am Dr. Ke Yao, from the Eye Center, the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China. I write to present three cases of metastatic choroidal tumor imaged with an ultra-wide-field scanning laser ophthalmoscope.