Abstract
To address the large variability in retinal image quality and the insufficient generalization performance of quality grading models, a multi-color-space bi-level fusion algorithm based on sharpness-aware minimization is proposed for predicting retinal image quality grades. First, a ResNeSt network extracts features from three color spaces: RGB (red, green, blue), HSV (hue, saturation, value), and LAB (L for pixel lightness, A for the red-to-green axis, B for the yellow-to-blue axis). Second, the feature outputs and prediction outputs of the network are fused at two levels, enriching the feature representation of retinal images. Then, the retinal image quality grading model is optimized with sharpness-aware minimization to improve its generalization performance. Finally, experiments on the EyeQ dataset achieve an accuracy of 87.35%, a precision of 85.87%, a sensitivity of 85.07%, and an F-score of 85.44%, showing that the proposed algorithm effectively distinguishes retinal image quality grades while improving the generalization performance of the model.
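The multi-color-space, bi-level fusion pipeline described above can be sketched as follows. This is a minimal illustration only: the timm "resnest50d" backbone, concatenation for the feature-level fusion, and averaging for the prediction-level fusion are assumptions for the sketch, not details taken from the paper.

```python
# Minimal sketch of the abstract's pipeline: three ResNeSt branches over
# RGB / HSV / LAB inputs, fused at the feature level and the prediction level.
import cv2
import numpy as np
import timm
import torch
import torch.nn as nn


class MultiGamutBiLevelNet(nn.Module):
    """Three ResNeSt branches (RGB / HSV / LAB) fused at two levels."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        # num_classes=0 makes timm return pooled backbone features.
        self.branches = nn.ModuleList(
            [timm.create_model("resnest50d", num_classes=0) for _ in range(3)]
        )
        feat_dim = self.branches[0].num_features
        # Per-branch classifiers provide the prediction-level outputs.
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(3)]
        )
        # Classifier on the concatenated features: the feature-level output.
        self.fused_head = nn.Linear(3 * feat_dim, num_classes)

    def forward(self, rgb, hsv, lab):
        feats = [b(x) for b, x in zip(self.branches, (rgb, hsv, lab))]
        branch_logits = [h(f) for h, f in zip(self.heads, feats)]
        fused_logits = self.fused_head(torch.cat(feats, dim=1))
        # Bi-level fusion: average prediction-level and feature-level outputs.
        return (fused_logits + sum(branch_logits) / 3) / 2


def to_three_gamuts(rgb_uint8: np.ndarray):
    """Convert one RGB uint8 image (H, W, 3) into RGB / HSV / LAB tensors."""
    def _to_tensor(a: np.ndarray) -> torch.Tensor:
        # Crude 0-255 scaling for illustration only.
        return torch.from_numpy(a).permute(2, 0, 1).float() / 255.0

    hsv = cv2.cvtColor(rgb_uint8, cv2.COLOR_RGB2HSV)
    lab = cv2.cvtColor(rgb_uint8, cv2.COLOR_RGB2LAB)
    return _to_tensor(rgb_uint8), _to_tensor(hsv), _to_tensor(lab)
```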
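The sharpness-aware minimization (SAM) step mentioned above follows the standard two-pass SAM update: perturb the weights along the gradient direction, then step with the gradient computed at the perturbed weights. The sketch below assumes a plain SGD-style base optimizer and a neighborhood radius rho of 0.05; neither value comes from the paper.

```python
import torch


def sam_step(model, loss_fn, inputs, targets, base_optimizer, rho=0.05):
    """One sharpness-aware minimization update (minimal sketch).

    Step 1: ascend to the nearby worst-case weights w + e(w).
    Step 2: apply the base-optimizer step with the gradient at w + e(w).
    The radius rho=0.05 is an assumed value, not taken from the paper.
    """
    # First forward/backward: gradient at the current weights w.
    loss = loss_fn(model(*inputs), targets)
    loss.backward()

    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(
        torch.stack([p.grad.norm(p=2) for p in params]), p=2
    )
    scale = rho / (grad_norm + 1e-12)

    # Perturb the weights along the gradient direction: w -> w + e(w).
    with torch.no_grad():
        e_ws = [p.grad * scale for p in params]
        for p, e_w in zip(params, e_ws):
            p.add_(e_w)
    base_optimizer.zero_grad()

    # Second forward/backward: gradient at the perturbed weights.
    loss_fn(model(*inputs), targets).backward()

    # Restore the original weights, then apply the base-optimizer update.
    with torch.no_grad():
        for p, e_w in zip(params, e_ws):
            p.sub_(e_w)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.detach()
```

In training, the grading model from the previous sketch would be passed as `model`, with the three color-space tensors supplied as `inputs`.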
Authors
梁礼明
雷坤
詹涛
彭仁杰
谭卢敏
LIANG Li-ming; LEI Kun; ZHAN Tao; PENG Ren-jie; TAN Lu-min (School of Electrical Engineering and Automation, Jiangxi University of Science and Technology, Ganzhou 341000, China; School of Applied Science, Jiangxi University of Science and Technology, Ganzhou 341000, China)
Source
Science Technology and Engineering (《科学技术与工程》)
Peking University Core Journal (北大核心)
2022, No. 32, pp. 14289-14297 (9 pages)
Funding
National Natural Science Foundation of China (51365017, 6146301)
Natural Science Foundation of Jiangxi Province (20192BAB205084)
Key Science and Technology Research Project of the Education Department of Jiangxi Province (GJJ170491)