Funding: Supported by the National Natural Science Foundation of China (62175156), the Science and Technology Innovation Project of the Shanghai Science and Technology Commission (22S31903000), and the Collaborative Innovation Project of Shanghai Institute of Technology (XTCX2022-27).
Abstract: Pathological myopia (PM) is a severe ocular disease that can lead to blindness. As a traditional noninvasive diagnostic method, fundus color photography (FCP) is widely used to detect PM owing to its high fidelity and precision. However, manual examination of fundus photographs for PM is time-consuming and prone to high error rates, and existing automated detection technologies have yet to address fine-grained classification of the different stages of PM lesions. In this paper, we propose an intelligent system that uses ResNet101 to diagnose PM in a multi-class setting by classifying FCPs at different lesion stages. The system subdivides PM into eight subcategories, aiming to enhance the precision and efficiency of the diagnostic process. It achieved an average accuracy of 98.86% in detecting PM, with an area under the curve (AUC) of 98.96%. For the eight subcategories of PM, the detection accuracy reached 99.63%, with an AUC of 99.98%. Compared with other widely used multi-class models such as VGG16, the Vision Transformer (ViT), and EfficientNet, this system demonstrates higher accuracy and AUC. The system is designed to be easily integrated into existing clinical diagnostic tools, providing an efficient solution for large-scale PM screening.
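The abstract above reports both an overall accuracy and an AUC for the eight-subcategory task. As a generic illustration of how such multi-class metrics are commonly computed (not the authors' actual evaluation code; all function names are ours), here is a minimal NumPy sketch of overall accuracy and a macro-averaged one-vs-rest AUC:

```python
import numpy as np

def one_vs_rest_auc(scores, labels, positive_class):
    """AUC for one class vs. the rest, via the rank-sum (Mann-Whitney) formula."""
    pos = scores[labels == positive_class, positive_class]
    neg = scores[labels != positive_class, positive_class]
    # Fraction of (positive, negative) pairs ranked correctly;
    # ties count as half a correct pair.
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

def macro_metrics(scores, labels, n_classes):
    """Overall accuracy and macro-averaged one-vs-rest AUC for (N, C) scores."""
    preds = scores.argmax(axis=1)
    accuracy = (preds == labels).mean()
    aucs = [one_vs_rest_auc(scores, labels, c) for c in range(n_classes)]
    return float(accuracy), float(np.mean(aucs))
```

A perfect classifier yields accuracy and macro AUC of 1.0; uninformative constant scores yield a macro AUC of 0.5.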
Funding: Supported by the National Natural Science Foundation of China (NSFC) (No. 61772358), the National Key R&D Program Funded Project (No. 2021YFE0105500), and the Jiangsu University 'Blue Project'.
Abstract: Breast cancer has become a major threat to women's health. To exploit the representational capabilities of deep models more comprehensively, we propose a multi-model fusion strategy. Specifically, we combine two structurally different deep learning models, ResNet101 and the Swin Transformer (SwinT), and add the Convolutional Block Attention Module (CBAM) attention mechanism, making full use of SwinT's global context modeling ability and ResNet101's local feature extraction ability. In addition, the cross-entropy loss is replaced by the focal loss to address the class imbalance in breast cancer datasets. The multi-class recognition accuracies of the proposed fusion model on the 40X, 100X, 200X, and 400X BreakHis datasets are 97.50%, 96.60%, 96.30%, and 96.10%, respectively. Compared with the single SwinT and ResNet101 models, the fusion model achieves higher accuracy and better generalization, providing a more effective method for the screening, diagnosis, and pathological classification of female breast cancer.
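The fusion model above swaps cross-entropy for the focal loss to counter class imbalance: a modulating factor down-weights easy, well-classified examples so training concentrates on hard or minority-class ones. A minimal NumPy sketch of the multi-class focal loss (a generic illustration in the spirit of Lin et al., not the paper's implementation) is:

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=1.0, eps=1e-12):
    """Multi-class focal loss on predicted class probabilities.

    probs  : (N, C) softmax outputs.
    labels : (N,) integer class ids.
    The (1 - p_t)**gamma factor shrinks the loss of confident correct
    predictions, so imbalanced or hard classes dominate the gradient.
    With gamma = 0 and alpha = 1 this reduces to plain cross-entropy.
    """
    p_t = probs[np.arange(len(labels)), labels]  # probability of true class
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t + eps)))
```

In practice `gamma` is tuned (the original paper found gamma = 2 effective), and `alpha` can be set per class to further reweight the minority class.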
Abstract: Liver cancer is the second leading cause of cancer death worldwide. Early tumor detection may help identify suitable treatment and increase the survival rate. Medical imaging is a non-invasive tool that can help uncover abnormalities in human organs. Magnetic Resonance Imaging (MRI), in particular, uses magnetic fields and radio waves to differentiate the tissues of internal organs. However, the interpretation of medical images requires the subjective expertise of radiologists and oncologists, so an automated computer-based diagnosis system can help specialists reduce incorrect diagnoses. This paper proposes a hybrid automated system to compare the performance of 3D and 2D features in classifying magnetic resonance liver tumor images, using two models: the first employs 3D features, while the second exploits 2D features. The first system uses 3D texture attributes, 3D shape features, and 3D graphical deep descriptors together with an ensemble classifier to differentiate between four 3D tumor categories; for comparison, the method is also applied to 2D slices. On 3D liver tumors, the proposed approach attained 100% accuracy in discriminating between all tumor types, with 100% Area Under the Curve (AUC), sensitivity, specificity, and precision. Performance is lower in 2D classification: the maximum accuracy reached 96.4% for two classes and 92.1% for four classes. The top-class performance of the proposed system can be attributed to the exploitation of various feature types together with the ReliefF feature selection technique, which chooses the features most relevant to the different classes. The novelty of this work lies in building a highly accurate system, without any image preprocessing or human input, and in comparing 2D and 3D classification performance. In the future, the presented work can be extended to larger datasets, making it a reliable, efficient Computer Aided Diagnosis (CAD) system for hospitals in rural areas.
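The abstract above credits ReliefF with selecting the features most relevant to each class. ReliefF scores a feature highly when it agrees among a sample's nearest same-class neighbors ("hits") but differs among nearest other-class neighbors ("misses"). A simplified single-pass NumPy sketch of this weighting (the full algorithm's feature-range normalization and per-class miss weighting are omitted; features are assumed pre-scaled to comparable ranges):

```python
import numpy as np

def relieff_scores(X, y, n_neighbors=3):
    """Simplified ReliefF feature weights for X of shape (N, D), labels y of shape (N,)."""
    N, D = X.shape
    w = np.zeros(D)
    for i in range(N):
        dists = np.abs(X - X[i]).sum(axis=1)  # Manhattan distance to all samples
        dists[i] = np.inf                     # exclude the sample itself
        same = np.where(y == y[i])[0]
        other = np.where(y != y[i])[0]
        hits = same[np.argsort(dists[same])][:n_neighbors]
        misses = other[np.argsort(dists[other])][:n_neighbors]
        # Reward features that differ across classes, penalize those
        # that differ within a class.
        w += np.abs(X[misses] - X[i]).mean(axis=0)
        w -= np.abs(X[hits] - X[i]).mean(axis=0)
    return w / N
```

A feature that separates the classes receives a clearly higher weight than a pure-noise feature, so ranking by `w` and keeping the top-k features is the usual selection step.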
Funding: This research was supported by the Honam University Research Fund, 2021.
Abstract: This study aims to detect and prevent greening disease in citrus trees using a deep neural network. Collecting data on citrus greening disease is very difficult because the vector pests are very small. Since the amount of data collected for deep learning is insufficient, we exploit the efficient feature extraction of a Transformer-based neural network. We use the Cascade Region-based Convolutional Neural Network (Cascade R-CNN) Swin model, which combines a Transformer backbone with the Cascade R-CNN detector, to detect greening disease occurring in citrus. We improve model robustness by establishing linear relationships between samples with the Mixup and CutMix algorithms, which are image-based data augmentation techniques. In addition, pre-training on the ImageNet dataset, transfer learning, and stochastic weight averaging (SWA) yield further accuracy gains. This study compared the Faster Region-based Convolutional Neural Network ResNet101 (Faster R-CNN ResNet101) model, the Cascade R-CNN ResNet101 model, and the Cascade R-CNN Swin model. The Faster R-CNN ResNet101 model achieved Average Precision (AP) at Intersection over Union (IoU) = 0.5 of 88.2%, AP (IoU = 0.75) of 62.8%, and recall of 68.2%; the Cascade R-CNN ResNet101 model achieved AP (IoU = 0.5) of 91.5%, AP (IoU = 0.75) of 67.2%, and recall of 73.1%; and the Cascade R-CNN Swin model achieved AP (IoU = 0.5) of 94.9%, AP (IoU = 0.75) of 79.8%, and recall of 76.5%. Thus, the Cascade R-CNN Swin model gave the best results for detecting citrus greening disease.
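Mixup and CutMix, used above for augmentation, both build new training samples as combinations of two labeled images: Mixup blends pixels and labels linearly, while CutMix pastes a rectangle from one image into the other and mixes labels by area. A hedged NumPy sketch of the two augmentations (generic versions, not the paper's exact code; images are arrays, labels are one-hot vectors):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup: convex combination of two samples and their one-hot labels,
    with the mixing weight drawn from a Beta(alpha, alpha) distribution."""
    rng = rng or np.random.default_rng()
    lam = float(rng.beta(alpha, alpha))
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def cutmix(x1, y1, x2, y2, alpha=1.0, rng=None):
    """CutMix: paste a random rectangle from x2 into x1; labels are mixed
    in proportion to the surviving area of x1. Images are (H, W[, C])."""
    rng = rng or np.random.default_rng()
    lam = float(rng.beta(alpha, alpha))
    h, w = x1.shape[:2]
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    r0, r1 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    c0, c1 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    out = x1.copy()
    out[r0:r1, c0:c1] = x2[r0:r1, c0:c1]
    lam_adj = 1 - (r1 - r0) * (c1 - c0) / (h * w)  # actual area of x1 kept
    return out, lam_adj * y1 + (1 - lam_adj) * y2
```

Because both transforms produce soft label vectors, they are used with a loss that accepts soft targets (e.g. cross-entropy against the mixed label).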