Abstract: The concept of classification through deep learning is to build a model that skillfully separates a dataset of closely related images into different classes on the basis of small but continuous variations that take place in physical systems over time and have substantial effects. This study identifies ozone depletion through classification using a Faster Region-Based Convolutional Neural Network (F-RCNN). The main advantage of F-RCNN is that it places bounding boxes on images to differentiate depleted from non-depleted regions. Furthermore, the primary goal of image classification is to accurately predict the target class of each minutely varied case in the dataset based on ozone saturation. Permanent changes in climate are of serious concern. The leading causes behind these destructive variations are ozone layer depletion, greenhouse gas release, deforestation, pollution, contamination of water resources, and UV radiation. This research focuses on prediction by identifying ozone layer depletion, because depletion causes many problems, e.g., skin cancer, damage to marine life, crop damage, and impaired immune systems in living beings. We classify the ozone image dataset into two major classes, depleted and non-depleted regions, and extract the required discriminative features through F-RCNN. In the existing literature, a CNN has been used for feature extraction, and the extracted regions of interest (RoIs) are passed to the CNN for grouping; those RoIs are difficult to manage and differentiate after grouping, which negatively affects the results. The classification outcomes of the F-RCNN approach demonstrate that overall accuracy lies between 91% and 93% in identifying climate variation through ozone concentration classification, i.e., whether the region in the image under consideration is depleted or non-depleted. Our proposed model achieves 93% accuracy and outperforms prevailing techniques.
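The abstract does not state which framework the F-RCNN was built in; as a minimal sketch (assuming PyTorch/torchvision, a COCO-pretrained backbone, and the two class names used here purely for illustration), a depleted/non-depleted detector could be configured as follows:

import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + "depleted" + "non-depleted" (illustrative labels)

def build_ozone_frcnn():
    # Start from a pretrained detector and swap in a two-class box predictor
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model

model = build_ozone_frcnn().eval()
with torch.no_grad():
    # Dummy 3-channel input standing in for an ozone-concentration image
    prediction = model([torch.rand(3, 512, 512)])[0]
print(prediction["boxes"].shape, prediction["labels"], prediction["scores"])

After fine-tuning on labelled ozone images, the predicted boxes and labels would indicate which regions of each image the detector considers depleted.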
Funding: This study was supported by a Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2020R1A2C1014829) and the Soonchunhyang University Research Fund.
Abstract: An otoscope is traditionally used to examine the eardrum and ear canal. A diagnosis of otitis media (OM) relies on the experience of clinicians. If an examiner lacks experience, the examination may be difficult and time-consuming. This paper presents an ear disease classification method for middle ear images based on a convolutional neural network (CNN). Specifically, segmentation and classification networks are used to classify an otoscopic image into six classes: normal, acute otitis media (AOM), otitis media with effusion (OME), chronic otitis media (COM), congenital cholesteatoma (CC), and traumatic perforations (TMPs). Mask R-CNN is used as the segmentation network to extract the region of interest (ROI) from otoscopic images. The extracted ROIs are used as guiding features for the classification. The classification is based on transfer learning with an ensemble of two CNN classifiers: EfficientNet-B0 and Inception-V3. The proposed model was trained with a 5-fold cross-validation technique. The proposed method was evaluated and achieved a classification accuracy of 97.29%.
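As a rough illustration of the two-stage pipeline described above (assuming PyTorch/torchvision; the cropping rule, input size, and six-class heads are assumptions, and both the Mask R-CNN and the classifiers would need fine-tuning on otoscopic data before the outputs mean anything), the ROI-guided ensemble could look like this:

import torch
import torchvision
from torchvision.models import efficientnet_b0, inception_v3

CLASSES = ["normal", "AOM", "OME", "COM", "CC", "TMP"]

# Segmentation network: Mask R-CNN proposes the eardrum region of interest
segmenter = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def crop_roi(image):
    # Crop the highest-scoring detection; fall back to the whole image
    with torch.no_grad():
        det = segmenter([image])[0]
    if len(det["boxes"]) == 0:
        return image
    x1, y1, x2, y2 = det["boxes"][0].round().int().tolist()
    return image[:, y1:y2, x1:x2]

# Classification networks: transfer learning with six-class heads
eff = efficientnet_b0(weights="IMAGENET1K_V1")
eff.classifier[1] = torch.nn.Linear(eff.classifier[1].in_features, len(CLASSES))
inc = inception_v3(weights="IMAGENET1K_V1")
inc.fc = torch.nn.Linear(inc.fc.in_features, len(CLASSES))
eff.eval(); inc.eval()

def classify(image):
    roi = crop_roi(image)
    x = torch.nn.functional.interpolate(roi.unsqueeze(0), size=(299, 299))
    with torch.no_grad():
        # Ensemble by averaging the two classifiers' softmax outputs
        probs = (torch.softmax(eff(x), 1) + torch.softmax(inc(x), 1)) / 2
    return CLASSES[int(probs.argmax())]

print(classify(torch.rand(3, 480, 640)))  # dummy otoscopic image, just to exercise the pipeline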
Abstract: Background: Distinguishing between primary clear cell carcinoma of the liver (PCCCL) and common hepatocellular carcinoma (CHCC) through traditional inspection methods before the operation is difficult. This study aimed to establish a Faster region-based convolutional neural network (RCNN) model for the accurate differential diagnosis of PCCCL and CHCC. Methods: In this study, we collected the data of 62 patients with PCCCL and 1079 patients with CHCC in Beijing YouAn Hospital from June 2012 to May 2020. A total of 109 patients with CHCC and 42 patients with PCCCL were randomly divided into the training validation set and the test set in a ratio of 4:1. The Faster RCNN was used for deep learning of the patients' data in the training validation set and to establish a convolutional neural network model to distinguish PCCCL and CHCC. The accuracy, average precision, and recall of the model for diagnosing PCCCL and CHCC were used to evaluate the detection performance of the Faster RCNN algorithm. Results: A total of 4392 images of 121 patients (1032 images of 33 patients with PCCCL and 3360 images of 88 patients with CHCC) were used in the training validation set for deep learning and establishing the model, and 1072 images of 30 patients (320 images of nine patients with PCCCL and 752 images of 21 patients with CHCC) were used to test the model. The accuracy of the model for diagnosing PCCCL and CHCC was 0.962 (95% confidence interval [CI]: 0.931-0.992). The average precision of the model for diagnosing PCCCL was 0.908 (95% CI: 0.823-0.993) and that for diagnosing CHCC was 0.907 (95% CI: 0.823-0.993). The recall of the model for diagnosing PCCCL was 0.951 (95% CI: 0.916-0.985) and that for diagnosing CHCC was 0.960 (95% CI: 0.854-0.962). The model took an average of 4 s to make a diagnosis for each patient. Conclusion: The Faster RCNN model can accurately distinguish PCCCL and CHCC. This model could help clinicians make appropriate treatment plans for patients with PCCCL or CHCC.
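The abstract does not include the study's scoring code; the snippet below is only a sketch of how accuracy, per-class average precision, and recall could be computed with scikit-learn, using made-up placeholder outputs (1 = PCCCL, 0 = CHCC) rather than the study's data:

import numpy as np
from sklearn.metrics import accuracy_score, average_precision_score, recall_score

# Placeholder per-image results, purely for illustration
y_true  = np.array([1, 0, 0, 1, 0, 1, 0, 0])   # ground truth
y_pred  = np.array([1, 0, 0, 1, 0, 0, 0, 0])   # hard predictions
y_score = np.array([0.92, 0.11, 0.30, 0.85, 0.05, 0.48, 0.22, 0.18])  # predicted P(PCCCL)

print("accuracy:      ", accuracy_score(y_true, y_pred))
print("AP, PCCCL:     ", average_precision_score(y_true, y_score))
print("AP, CHCC:      ", average_precision_score(1 - y_true, 1 - y_score))
print("recall, PCCCL: ", recall_score(y_true, y_pred, pos_label=1))
print("recall, CHCC:  ", recall_score(y_true, y_pred, pos_label=0))

The reported 95% confidence intervals could then be obtained, for example, by bootstrapping these metrics over resampled patients.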
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R442), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Face mask detection has several applications, including real-time surveillance, biometrics, etc. Identifying face masks is also helpful for crowd control and for ensuring that people wear them in public. It is impossible to ensure that people wear face masks using monitoring personnel alone; automated systems are a much superior option for face mask detection and monitoring. This paper introduces a simple and efficient approach for masked face detection. The architecture of the proposed approach is very straightforward; it combines deep learning and local binary patterns to extract features and classify faces as masked or unmasked. The proposed system requires hardware with minimal power consumption compared to state-of-the-art deep learning algorithms. Our proposed system consists of two steps. First, this work extracts the local features of an image using a local binary pattern descriptor, and then deep learning is used to extract global features. The proposed approach achieves excellent accuracy and high performance. The performance of the proposed method was tested on three benchmark datasets: the real-world masked faces dataset (RMFD), the simulated masked faces dataset (SMFD), and Labeled Faces in the Wild (LFW). Performance metrics for the proposed technique were measured in terms of accuracy, precision, recall, and F1-score. Results indicated the efficiency of the proposed technique, providing accuracies of 99.86%, 99.98%, and 100% for RMFD, SMFD, and LFW, respectively. Moreover, the proposed method outperformed state-of-the-art deep learning methods from the recent bibliography for the same problem under study and on the same evaluation datasets.
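As a sketch of the two-branch feature scheme described above (the MobileNetV2 backbone, the uniform-LBP settings, and the plain concatenation are assumptions; the abstract does not name the exact components), local texture and global deep features could be combined like this:

import numpy as np
import torch
from skimage.feature import local_binary_pattern
from torchvision.models import mobilenet_v2

P, R = 8, 1  # LBP neighbours and radius; "uniform" coding gives P + 2 bins

def lbp_histogram(gray_uint8):
    # Local branch: normalised histogram of uniform LBP codes
    codes = local_binary_pattern(gray_uint8, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist.astype(np.float32)

backbone = mobilenet_v2(weights="IMAGENET1K_V1").features.eval()

def global_features(rgb):
    # Global branch: CNN feature maps reduced by global average pooling
    x = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        fmap = backbone(x)
    return fmap.mean(dim=(2, 3)).squeeze(0).numpy()

def combined_features(rgb):
    # rgb: float32 array of shape (H, W, 3) with values in [0, 1]
    gray_uint8 = (rgb.mean(axis=2) * 255).astype(np.uint8)
    return np.concatenate([lbp_histogram(gray_uint8), global_features(rgb)])

# A lightweight classifier (e.g. an SVM or a small dense layer) would be trained
# on these vectors to output "masked" vs "unmasked".
feat = combined_features(np.random.rand(224, 224, 3).astype(np.float32))
print(feat.shape)  # 10 LBP bins + 1280 MobileNetV2 channels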
Funding: This work was supported by grants from the National Natural Science Foundation of China (No. 81802888) and the Key Research and Development Project of Shandong Province (No. 2018GSF118206 and No. 2018GSF118088).
Abstract: Background: Early diagnosis and accurate staging are important to improve the cure rate and prognosis of pancreatic cancer. This study was performed to develop an automatic and accurate imaging processing system that can read computed tomography (CT) images correctly and make a diagnosis of pancreatic cancer faster. Methods: The establishment of the artificial intelligence (AI) system for pancreatic cancer diagnosis based on sequential contrast-enhanced CT images comprised two processes: training and verification. During the training process, our study used all 4385 CT images from 238 pancreatic cancer patients in the database as the training data set. Additionally, we used VGG16, which was pretrained on ImageNet and contained 13 convolutional layers and three fully connected layers, to initialize the feature extraction network. In the verification experiment, we used sequential clinical CT images from 238 pancreatic cancer patients as our experimental data and input these data into the faster region-based convolutional network (Faster R-CNN) model that had completed training. In total, 1699 images from 100 pancreatic cancer patients were included for clinical verification. Results: A total of 338 patients with pancreatic cancer were included in the study. The clinical characteristics (sex, age, tumor location, differentiation grade, and tumor-node-metastasis stage) did not differ significantly between the training and verification groups. The mean average precision was 0.7664, indicating a good training effect of the Faster R-CNN. Sequential contrast-enhanced CT images of 100 pancreatic cancer patients were used for clinical verification. The area under the receiver operating characteristic curve, calculated according to the trapezoidal rule, was 0.9632. It took approximately 0.2 s for the Faster R-CNN AI to automatically process one CT image, which is much faster than the time required for diagnosis by an imaging specialist. Conclusions: Faster R-CNN AI is an effective and objective method with high accuracy for the diagnosis of pancreatic cancer.
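The abstract describes initialising the detector's feature extractor from an ImageNet-pretrained VGG16; the following is a minimal torchvision sketch of that arrangement (the anchor sizes, ROI pooler settings, and the single "tumour" foreground class are assumptions, and the study's own implementation may differ):

import torch
from torchvision.models import vgg16
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.anchor_utils import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

# Feature extractor: the 13 convolutional layers of an ImageNet-pretrained VGG16
backbone = vgg16(weights="IMAGENET1K_V1").features
backbone.out_channels = 512  # VGG16's last conv block outputs 512 channels

anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)

model = FasterRCNN(backbone,
                   num_classes=2,  # background + pancreatic tumour (assumed)
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)

model.eval()
with torch.no_grad():
    # Dummy single-channel CT slice replicated to three channels for illustration
    ct = torch.rand(1, 512, 512).repeat(3, 1, 1)
    detections = model([ct])[0]
print(detections["boxes"].shape, detections["scores"].shape)

After training on annotated contrast-enhanced CT slices, per-image detection scores could be aggregated per patient and fed into the ROC analysis reported above.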