Abstract: The concept of classification through deep learning is to build a model that reliably separates closely related image datasets into different classes despite diminutive but continuous variations that occur in physical systems over time and accumulate substantially. This study identifies ozone depletion through classification using a Faster Region-Based Convolutional Neural Network (F-RCNN). The main advantage of F-RCNN is that it places bounding boxes on images to differentiate depleted from non-depleted regions. The primary goal of image classification here is to accurately predict the target class of each minutely varied case in the dataset based on ozone saturation. Permanent changes in climate are of serious concern. The leading causes of these destructive variations are ozone layer depletion, greenhouse gas release, deforestation, pollution, contamination of water resources, and UV radiation. This research focuses on identifying ozone layer depletion because it causes many health issues, e.g., skin cancer, damage to marine life, crop damage, and impacts on the immune systems of living beings. We classify the ozone image dataset into two major classes, depleted and non-depleted regions, and extract the required discriminating features through F-RCNN. In the existing literature, a CNN has been used for feature extraction, and the extracted regions of interest (RoIs) are passed to the CNN for grouping; those RoIs are difficult to manage and differentiate after grouping, which negatively affects the results. The classification outcomes of the F-RCNN approach are proficient and demonstrate that overall accuracy lies between 91% and 93% in identifying climate variation through ozone concentration classification, i.e., whether the region in the image under consideration is depleted or non-depleted. Our proposed model achieved 93% accuracy and outperforms the prevailing techniques.
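As a hedged illustration of the bounding-box step described above (not the authors' code), a region proposal is typically matched to an annotated region by intersection-over-union (IoU); the function names, box format, and 0.5 threshold here are assumptions:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def label_region(proposal, depleted_boxes, threshold=0.5):
    """Label a proposed region 'depleted' if it overlaps any annotated
    depleted box by at least `threshold` IoU, else 'non-depleted'."""
    best = max((iou(proposal, b) for b in depleted_boxes), default=0.0)
    return "depleted" if best >= threshold else "non-depleted"
```

A proposal tightly enclosing an annotated depleted box would then inherit the "depleted" label, mirroring the two-class split the abstract describes.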
Abstract: Alzheimer’s disease (AD) is a neurological disorder that predominantly affects the brain. In the coming years, it is expected to spread rapidly, with limited progress in diagnostic techniques. Various machine learning (ML) and artificial intelligence (AI) algorithms have been employed to detect AD using single-modality data. However, recent developments in ML have enabled the application of these methods to multiple data sources and input modalities for AD prediction. In this study, we developed a framework that utilizes multimodal data (tabular data, magnetic resonance imaging (MRI) images, and genetic information) to classify AD. As part of the pre-processing phase, we generated a knowledge graph from the tabular data and MRI images. We employed graph neural networks for knowledge graph creation and a region-based convolutional neural network approach for image-to-knowledge-graph generation. Additionally, we integrated various explainable AI (XAI) techniques to interpret and elucidate the prediction outcomes derived from the multimodal data. Layer-wise relevance propagation was used to explain the layer-wise outcomes in the MRI images. We also incorporated submodular pick local interpretable model-agnostic explanations to interpret the decision-making process based on the tabular data provided. Genetic expression values play a crucial role in AD analysis; we used a graphical gene tree to identify genes associated with the disease. Moreover, a dashboard was designed to display the XAI outcomes, enabling experts and medical professionals to easily comprehend the prediction results.
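To make the layer-wise relevance propagation (LRP) step more concrete, here is a minimal NumPy sketch of the epsilon-LRP rule for a single dense layer. This is an assumed toy example, not the authors' implementation, and the weights and activations are made up; the point is that relevance flowing out of a layer is redistributed to its inputs in proportion to each input's contribution:

```python
import numpy as np

def lrp_epsilon(a, w, relevance_out, eps=1e-6):
    """Epsilon-LRP for one linear layer:
    R_i = a_i * sum_j (w_ij * R_j / (z_j + eps*sign(z_j)))."""
    z = a @ w                                        # pre-activations, shape (out,)
    z_stab = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer avoids division by ~0
    return (w * (relevance_out / z_stab)).sum(axis=1) * a

# Toy layer: two inputs, two outputs (values invented for illustration)
a = np.array([1.0, 3.0])
w = np.array([[0.5, -1.0],
              [1.0,  0.5]])
z = a @ w                      # output relevance initialized to the activations
r_in = lrp_epsilon(a, w, z)    # relevance redistributed onto the inputs
```

A useful sanity check is conservation: up to the stabilizer, the relevance assigned to the inputs sums to the relevance that left the outputs.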
Funding: This study was supported by a Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2020R1A2C1014829) and by the Soonchunhyang University Research Fund.
Abstract: An otoscope is traditionally used to examine the eardrum and ear canal. A diagnosis of otitis media (OM) relies on the experience of clinicians; if an examiner lacks experience, the examination may be difficult and time-consuming. This paper presents an ear disease classification method using middle ear images based on a convolutional neural network (CNN). Specifically, segmentation and classification networks are used to classify an otoscopic image into six classes: normal, acute otitis media (AOM), otitis media with effusion (OME), chronic otitis media (COM), congenital cholesteatoma (CC), and traumatic perforation (TMP). Mask R-CNN is used as the segmentation network to extract the region of interest (ROI) from otoscopic images. The extracted ROIs serve as guiding features for classification. The classification is based on transfer learning with an ensemble of two CNN classifiers: EfficientNetB0 and Inception-V3. The proposed model was trained with 5-fold cross-validation and achieved a classification accuracy of 97.29%.
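The two-classifier ensemble described above is commonly realized by soft voting: averaging the per-class probability vectors of the two networks and taking the argmax. The sketch below assumes equal weights and invented probability values; it is an illustration of the ensembling idea, not the paper's exact fusion rule:

```python
import numpy as np

# The six diagnostic classes from the abstract
CLASSES = ["normal", "AOM", "OME", "COM", "CC", "TMP"]

def ensemble_predict(prob_a, prob_b, weights=(0.5, 0.5)):
    """Soft-voting ensemble: weighted average of two probability
    vectors, then argmax over the class axis."""
    avg = weights[0] * np.asarray(prob_a) + weights[1] * np.asarray(prob_b)
    return CLASSES[int(np.argmax(avg))], avg

# Hypothetical softmax outputs for one otoscopic image (values made up)
p_effnet = [0.10, 0.60, 0.10, 0.10, 0.05, 0.05]   # e.g. EfficientNetB0
p_incep  = [0.05, 0.30, 0.50, 0.05, 0.05, 0.05]   # e.g. Inception-V3
label, avg = ensemble_predict(p_effnet, p_incep)
```

Averaging probabilities rather than hard labels lets a confident classifier outvote an uncertain one, which is one reason soft voting is the usual default for two-member ensembles.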
Abstract: Small floating-waste targets vary in shape, have low resolution, and carry limited information, which leads to unsatisfactory detection results. To address this, an improved Faster-RCNN (Faster Regions with Convolutional Neural Network) floating-waste detection algorithm, MP-Faster-RCNN (Faster-RCNN with Multi-scale feature and Polarized self-attention), is proposed. First, a dataset of small floating-waste targets on the Lanzhou section of the Yellow River was built, and ResNet-50 combined with dilated convolution replaced the original VGG-16 (Visual Geometry Group 16) as the backbone feature-extraction network, enlarging the receptive field to extract more features of small targets. Second, multi-scale features were exploited in the region proposal network (RPN) by adding two convolutional layers of 3×3 and 1×1 to compensate for the feature loss caused by a single sliding window. Finally, polarized self-attention was added before the RPN to further exploit multi-scale and channel features, extracting finer-grained multi-scale spatial information and inter-channel dependencies and generating feature maps with global features for more precise bounding-box localization. Experimental results show that MP-Faster-RCNN effectively improves floating-waste detection accuracy: compared with the original Faster-RCNN, the mean average precision (mAP) increased by 6.37 percentage points, the model size shrank from 521 MB to 108 MB, and the model converges faster under the same training batch settings.
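The receptive-field enlargement from dilated convolution can be checked with the standard stacked-convolution formula. The layer configurations below are assumptions chosen for illustration, not the paper's exact backbone settings:

```python
def receptive_field(layers):
    """Receptive field of stacked convolutions.
    `layers` is a list of (kernel_size, stride, dilation) tuples."""
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += d * (k - 1) * jump   # dilation widens the effective kernel
        jump *= s                  # stride multiplies the step between outputs
    return rf

# Three stacked 3x3 convolutions, stride 1:
plain   = receptive_field([(3, 1, 1)] * 3)   # ordinary convolution
dilated = receptive_field([(3, 1, 2)] * 3)   # dilation rate 2
```

With dilation 2, three 3×3 layers see a 13-pixel window instead of 7, at no extra parameter cost, which is the mechanism the abstract relies on for capturing more context around small targets.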
Funding: This work was supported by grants from the National Natural Science Foundation of China (No. 81802888) and the Key Research and Development Project of Shandong Province (Nos. 2018GSF118206 and 2018GSF118088).
Abstract: Background: Early diagnosis and accurate staging are important to improve the cure rate and prognosis of pancreatic cancer. This study was performed to develop an automatic and accurate image-processing system that can read computed tomography (CT) images correctly and diagnose pancreatic cancer faster. Methods: The establishment of the artificial intelligence (AI) system for pancreatic cancer diagnosis based on sequential contrast-enhanced CT images comprised two processes: training and verification. During the training process, we used all 4385 CT images from 238 pancreatic cancer patients in the database as the training data set. We used VGG16, pretrained on ImageNet and containing 13 convolutional layers and three fully connected layers, to initialize the feature-extraction network. In the verification experiment, we used sequential clinical CT images from 238 pancreatic cancer patients as our experimental data and input these data into the faster region-based convolutional network (Faster R-CNN) model that had completed training. In total, 1699 images from 100 pancreatic cancer patients were included for clinical verification. Results: A total of 338 patients with pancreatic cancer were included in the study. The differences in clinical characteristics (sex, age, tumor location, differentiation grade, and tumor-node-metastasis stage) between the training and verification groups were insignificant. The mean average precision was 0.7664, indicating a good training effect of the Faster R-CNN. Sequential contrast-enhanced CT images of 100 pancreatic cancer patients were used for clinical verification. The area under the receiver operating characteristic curve, calculated according to the trapezoidal rule, was 0.9632. It took approximately 0.2 s for the Faster R-CNN AI to automatically process one CT image, which is much faster than the time required for diagnosis by an imaging specialist. Conclusions: Faster R-CNN AI is an effective and objective method with high accuracy for the diagnosis of pancreatic cancer.
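The trapezoidal rule mentioned for the area under the ROC curve is straightforward to sketch; the operating points below are invented for illustration and are not the study's data:

```python
def auc_trapezoid(fpr, tpr):
    """Area under the ROC curve by the trapezoidal rule; `fpr` and `tpr`
    must be paired points sorted by increasing false-positive rate."""
    area = 0.0
    for i in range(1, len(fpr)):
        # width of the interval times the mean height of its two endpoints
        area += (fpr[i] - fpr[i - 1]) * (tpr[i] + tpr[i - 1]) / 2.0
    return area

# Toy ROC operating points (illustrative only)
fpr = [0.0, 0.1, 0.4, 1.0]
tpr = [0.0, 0.7, 0.9, 1.0]
auc = auc_trapezoid(fpr, tpr)
```

Each threshold of the classifier contributes one (FPR, TPR) point; summing the trapezoids between consecutive points gives the reported AUC.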
Abstract: Background: Distinguishing between primary clear cell carcinoma of the liver (PCCCL) and common hepatocellular carcinoma (CHCC) through traditional inspection methods before an operation is difficult. This study aimed to establish a Faster region-based convolutional neural network (RCNN) model for the accurate differential diagnosis of PCCCL and CHCC. Methods: In this study, we collected the data of 62 patients with PCCCL and 1079 patients with CHCC in Beijing YouAn Hospital from June 2012 to May 2020. A total of 109 patients with CHCC and 42 patients with PCCCL were randomly divided into a training validation set and a test set in a ratio of 4:1. The Faster RCNN was used for deep learning on the patients’ data in the training validation set to establish a convolutional neural network model for distinguishing PCCCL and CHCC. The accuracy, average precision, and recall of the model in diagnosing PCCCL and CHCC were used to evaluate the detection performance of the Faster RCNN algorithm. Results: A total of 4392 images of 121 patients (1032 images of 33 patients with PCCCL and 3360 images of 88 patients with CHCC) were used in the training validation set for deep learning and establishing the model, and 1072 images of 30 patients (320 images of nine patients with PCCCL and 752 images of 21 patients with CHCC) were used to test the model. The accuracy of the model in diagnosing PCCCL and CHCC was 0.962 (95% confidence interval [CI]: 0.931-0.992). The average precision of the model was 0.908 (95% CI: 0.823-0.993) for diagnosing PCCCL and 0.907 (95% CI: 0.823-0.993) for diagnosing CHCC. The recall was 0.951 (95% CI: 0.916-0.985) for diagnosing PCCCL and 0.960 (95% CI: 0.854-0.962) for diagnosing CHCC. Making a diagnosis with the model took an average of 4 s per patient. Conclusion: The Faster RCNN model can accurately distinguish PCCCL and CHCC. This model could help clinicians make appropriate treatment plans for patients with PCCCL or CHCC.
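The evaluation metrics reported above (accuracy, precision, recall, with 95% confidence intervals) follow standard definitions; a minimal sketch is below. The confusion-matrix counts are invented for illustration, and the CI uses a simple normal (Wald) approximation, which may or may not be the interval method the study used:

```python
import math

def metrics(tp, fp, tn, fn):
    """Accuracy, precision, and recall from a binary confusion matrix."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    prec = tp / (tp + fp)   # of predicted positives, fraction correct
    rec = tp / (tp + fn)    # of actual positives, fraction found
    return acc, prec, rec

def wald_ci(p, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion
    estimated from n samples (clipped to [0, 1])."""
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Toy counts (illustrative only -- not the study's confusion matrix)
acc, prec, rec = metrics(tp=45, fp=5, tn=40, fn=10)
lo, hi = wald_ci(acc, n=100)
```

Note that when images from the same patient are correlated, per-image CIs computed this way are optimistic; per-patient aggregation is the safer unit of analysis.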