Abstract: Diabetic retinopathy is a critical eye condition that, if not treated, can lead to vision loss. Traditional methods of diagnosing and treating the disease are time-consuming and expensive. However, machine learning and deep transfer learning (DTL) techniques have shown promise in medical applications, including the detection, classification, and segmentation of diabetic retinopathy, and offer higher accuracy and performance. Computer-Aided Diagnosis (CAD) is crucial for speeding up classification and providing accurate disease diagnoses. Overall, these technological advances hold great potential for improving the management of diabetic retinopathy. The study's objective was to differentiate between the classes of diabetic retinopathy and to verify the model's ability to distinguish between them. The robustness of the model was evaluated using metrics such as accuracy (ACC), precision (PRE), recall (REC), and area under the curve (AUC). In this study, the researchers used data cleansing, transfer learning (TL), and convolutional neural network (CNN) methods to identify and categorize the disease classes associated with diabetic retinopathy (DR). They employed the VGG-16 CNN model, incorporating intelligent parameter settings that enhanced its robustness. The outcomes surpassed the results obtained with the auto-enhancement (AE) filter, which had an ACC of over 98%. The manuscript provides graphs, tables, and descriptions of the techniques and frameworks used to aid understanding. The study highlights the significance of optimized deep TL in improving the classification metrics for the four separate DR classes and emphasizes the value of the VGG-16 CNN classification technique in this context.
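The transfer-learning recipe described above can be sketched briefly in code. The Keras snippet below is only a minimal illustration, assuming a four-class DR label set, a frozen ImageNet-pretrained VGG-16 base, and an arbitrary small classification head; it tracks the ACC, PRE, REC, and AUC metrics named in the abstract but does not reproduce the authors' exact parameter choices.

```python
# Minimal VGG-16 transfer-learning sketch for four-class DR grading.
# Image size, head layers, and optimizer settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 4          # four DR classes, as in the abstract
IMG_SIZE = (224, 224)    # VGG-16's standard input resolution

# ImageNet-pretrained convolutional base, kept frozen for feature extraction.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

# Small classification head on top of the frozen base.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Track the metrics reported in the study: ACC, PRE, REC, and AUC.
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.Precision(name="precision"),
             tf.keras.metrics.Recall(name="recall"),
             tf.keras.metrics.AUC(name="auc")],
)
```

After the head converges, the upper convolutional blocks of the base are typically unfrozen and fine-tuned at a lower learning rate.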
Abstract: Because there are so many flower species, identifying them traditionally requires deep botanical knowledge and long experience of observation, whereas deep learning makes intelligent recognition of flower species possible. First, transfer learning is used to adapt the Visual Geometry Group network (VGG-16) for flower recognition; next, the trained model is packaged and uploaded to a cloud server; finally, recognition is performed on the cloud server, which communicates with a WeChat mini-program over the Hypertext Transfer Protocol (HTTP). The result is a mini-program in which users can photograph and upload a flower to identify its species and learn about its characteristics.
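The deployment described above (a packaged model on a cloud server answering HTTP requests from the WeChat mini-program) can be sketched as a small inference endpoint. The route name, saved-model file, and class labels below are hypothetical; the abstract does not specify the actual service details.

```python
# Hypothetical HTTP inference endpoint for the flower-recognition service.
from flask import Flask, request, jsonify
import numpy as np
import tensorflow as tf

app = Flask(__name__)
model = tf.keras.models.load_model("flower_vgg16.h5")               # assumed model file
CLASS_NAMES = ["daisy", "dandelion", "rose", "sunflower", "tulip"]  # example labels

@app.route("/predict", methods=["POST"])
def predict():
    # The mini-program uploads the photo as multipart form data.
    file = request.files["image"]
    img = tf.io.decode_image(file.read(), channels=3, expand_animations=False)
    img = tf.image.resize(img, (224, 224)) / 255.0   # match training preprocessing
    probs = model.predict(tf.expand_dims(img, 0))[0]
    idx = int(np.argmax(probs))
    return jsonify({"species": CLASS_NAMES[idx], "confidence": float(probs[idx])})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```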
Abstract: Farmland pests reduce the yield and quality of crops, so distinguishing and controlling them effectively is a pressing problem. Addressing the mismatch between farmland conditions and farmers' yield requirements, this paper uses convolutional neural networks to recognize farmland pests and thereby provides agriculture with an effective identification method. Four network models, MobileNetV1, the 50-layer Residual Network (ResNet50), the 16-layer Visual Geometry Group network (VGG16), and a fine-tuned pre-trained VGG16, are used for binary classification of a farmland-pest image set. Because the sample size is small, data augmentation, which generates additional training images from the existing ones, is applied to prevent overfitting and improve generalization. Experiments show that the four models achieve accuracies of 88.63%, 91.73%, 86.49%, and 90.13%, respectively, all of which demonstrate good practical performance for farmland pest recognition.
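The augmentation step mentioned above, generating additional training images from the existing ones to limit overfitting on the small pest dataset, can be illustrated with a standard Keras data generator. The transform ranges and directory layout are assumptions, not the study's settings.

```python
# Sketch of on-the-fly data augmentation for the binary pest classifier.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=30,        # random rotations
    width_shift_range=0.1,    # small horizontal shifts
    height_shift_range=0.1,   # small vertical shifts
    zoom_range=0.2,           # random zoom
    horizontal_flip=True,     # mirror images
)

train_data = train_gen.flow_from_directory(
    "data/pests/train",       # assumed directory of per-class sub-folders
    target_size=(224, 224),
    batch_size=32,
    class_mode="binary",      # two classes, as in the study
)
```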
Funding: Supported by the Postgraduate Research and Practice Innovation Program of Nanjing University of Aeronautics and Astronautics (XCXJH20220318).
Abstract: Since the outbreak of Coronavirus Disease 2019 (COVID-19), people have been recommended to wear facial masks to limit the spread of the virus. Under these circumstances, traditional face recognition technologies cannot achieve satisfactory results. In this paper, we propose a face recognition algorithm that combines traditional features and deep features of masked faces. For the traditional features, we extract Local Binary Pattern (LBP), Scale-Invariant Feature Transform (SIFT), and Histogram of Oriented Gradients (HOG) features from the periocular region and use a Support Vector Machine (SVM) classifier for personal identification. We also propose an improved Convolutional Neural Network (CNN) model, the Angular Visual Geometry Group network (A-VGG), to learn deep features. Decision-level fusion is then used to combine the four features. Comprehensive experiments were carried out on databases of real and simulated masked faces, including frontal and side faces taken at different angles. Images with motion blur were also tested to evaluate the robustness of the algorithm, and an experiment matching a masked face with the corresponding full face was carried out. The experimental results show that the proposed algorithm achieves state-of-the-art performance in masked face recognition and that the periocular region carries rich biometric features with high discriminative power.
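The handcrafted branch and the decision-level fusion can be sketched roughly as follows, assuming the periocular crops are already extracted. Only the LBP and HOG branches are shown; the SIFT branch and the A-VGG deep branch would contribute probability scores to the same fusion step, and all feature parameters here are illustrative.

```python
# Sketch: per-feature SVMs over the periocular region, fused at decision level.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import SVC

def lbp_feature(gray):
    # Uniform LBP histogram of the periocular crop (grayscale array).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return hist

def hog_feature(gray):
    # HOG descriptor of the periocular crop.
    return hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_branch(features, labels):
    # One SVM per feature type, with probability outputs for later fusion.
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(features, labels)
    return clf

def fuse_scores(prob_lists):
    # Decision-level fusion: average the probability vectors from all branches
    # and take the class with the highest fused score.
    fused = np.mean(prob_lists, axis=0)
    return np.argmax(fused, axis=1)
```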
Abstract: In recent times, the use of artificial intelligence (AI) in agriculture has become increasingly important, and the technology yields the greatest benefit when its adoption is approached creatively. Controlling leaf disease during the growing stages of crops is a crucial step: detecting, classifying, and analyzing diseased leaves at an early stage, together with possible remedies, always helps agricultural progress. Disease detection and classification for different crops, especially tomatoes and grapes, is the main emphasis of the proposed research, whose key objective is to forecast, at an early stage, the type of illness that will affect grape and tomato leaves. Convolutional Neural Network (CNN) methods are used to detect Multi-Crop Leaf Disease (MCLD). A deep learning-based model extracts image features and classifies leaves as diseased or healthy, and the CNN-based Visual Geometry Group (VGG) model is used to improve the performance measures. A dataset of crop leaf images is used for training and testing the model, and the performance parameters, namely accuracy, sensitivity, specificity, precision, recall, and F1-score, are calculated and monitored. The main objective of the research is continual improvement of the proposed model's performance, and the designed model classifies disease-affected leaves with high accuracy. In the experiments, the proposed approach achieved an accuracy of 98.40% for grapes and 95.71% for tomatoes. The proposed research directly supports increased food production in agriculture.
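The performance measures listed above follow directly from the confusion matrix; the short sketch below uses placeholder predictions purely to show the computations, not the study's data.

```python
# Accuracy, sensitivity, specificity, precision, and F1 from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # 1 = diseased leaf, 0 = healthy
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])   # placeholder model outputs

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                   # recall for the diseased class
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
f1          = f1_score(y_true, y_pred)

print(f"ACC={accuracy:.2f} SEN={sensitivity:.2f} "
      f"SPE={specificity:.2f} PRE={precision:.2f} F1={f1:.2f}")
```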
Abstract: Glaucoma is a prevalent cause of blindness worldwide; if not treated promptly, it can cause vision and quality of life to deteriorate. According to statistics, glaucoma affects approximately 65 million individuals globally. Fundus image segmentation for glaucoma assessment depends on the optic disc (OD) and optic cup (OC). This paper proposes a computational model to segment and classify retinal fundus images for glaucoma detection. Different data augmentation techniques were applied to prevent overfitting, and several data pre-processing approaches were employed to improve image quality and achieve high accuracy. The segmentation models are based on an attention U-Net with three separate convolutional neural network (CNN) backbones: Inception-v3, Visual Geometry Group 19 (VGG19), and residual neural network 50 (ResNet50). The classification models employ modified versions of the same three CNN architectures. On the RIM-ONE dataset, the attention U-Net with the ResNet50 encoder backbone achieved the best OD segmentation accuracy of 99.58%, while among the modified classification architectures the Inception-v3 model achieved the highest glaucoma classification accuracy of 98.79%.
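An attention U-Net augments each skip connection with an attention gate in which decoder features re-weight the encoder features before concatenation. The sketch below shows one such gate in Keras; in the described model the encoder would be a pre-trained Inception-v3, VGG19, or ResNet50, and the channel counts here are illustrative rather than the paper's exact design.

```python
# One additive attention gate for a U-Net skip connection (illustrative).
from tensorflow.keras import layers

def attention_gate(skip, gating, inter_channels):
    # Project both inputs to a common channel count; the skip map is assumed
    # to have twice the spatial resolution of the gating signal.
    theta = layers.Conv2D(inter_channels, 1, strides=2)(skip)
    phi = layers.Conv2D(inter_channels, 1)(gating)
    # Additive attention: relu(theta + phi) -> single-channel sigmoid map.
    attn = layers.Activation("relu")(layers.Add()([theta, phi]))
    attn = layers.Conv2D(1, 1, activation="sigmoid")(attn)
    # Upsample the attention map back to the skip resolution and re-weight.
    attn = layers.UpSampling2D(size=(2, 2), interpolation="bilinear")(attn)
    return layers.Multiply()([skip, attn])

# Illustrative shapes: a 64x64x256 skip map gated by a 32x32x512 decoder map.
skip_in = layers.Input((64, 64, 256))
gate_in = layers.Input((32, 32, 512))
gated = attention_gate(skip_in, gate_in, inter_channels=128)  # -> (64, 64, 256)
```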
Funding: This work was supported by the Major Projects of Technological Innovation in Hubei Province of China under Grant Nos. 2019AEA170 and 2019ACA161, the Frontier Projects of Wuhan for Application Foundation under Grant No. 2019010701011381, and the Translational Medicine and Interdisciplinary Research Joint Fund of Zhongnan Hospital of Wuhan University under Grant No. ZNJC201919.
Abstract: Identification of abnormal cervical cells is a significant problem in the computer-aided diagnosis of cervical cancer. In this study, we develop an artificial intelligence (AI) system, named CytoBrain, that automatically screens abnormal cervical cells to facilitate the subsequent clinical diagnosis of the subjects. The system consists of three main modules: 1) the cervical cell segmentation module, which efficiently extracts cell images from a whole slide image (WSI); 2) the cell classification module, based on a compact Visual Geometry Group (VGG) network called CompactVGG, which is the key part of the system and is used to build the cell classifier; and 3) the visualized human-aided diagnosis module, which automatically diagnoses a WSI based on the classification results of its cells and provides two visual display modes for users to review and modify. For model construction and validation, we developed a dataset containing 198,952 cervical cell images (60,238 positive, 25,001 negative, and 113,713 junk) from samples of 2,312 adult women. Since CompactVGG is the key part of CytoBrain, we conducted comparison experiments to evaluate its running time and classification performance on our dataset and on two public datasets. Comparisons with VGG11, the most efficient member of the VGG family, show that CompactVGG takes less time for both model training and sample testing, and compared with three sophisticated deep learning models, CompactVGG consistently achieves the best classification performance. The results illustrate that the system based on CompactVGG is efficient and effective and can support large-scale cervical cancer screening.
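As a rough illustration of what a compact VGG-style cell classifier looks like, the sketch below stacks a few 3x3 convolution blocks with a small head for the three cell categories (positive, negative, junk). It is a generic stand-in, not the actual CompactVGG architecture defined in the paper.

```python
# Generic compact VGG-style classifier for single-cell image patches.
from tensorflow.keras import layers, models

def compact_vgg(input_shape=(64, 64, 3), num_classes=3):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # VGG-style blocks: stacked 3x3 convolutions followed by max pooling.
    for filters in (32, 64, 128):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D())
    model.add(layers.GlobalAveragePooling2D())
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

model = compact_vgg()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```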