Abstract: The concept of classification through deep learning is to build a model that skillfully separates a dataset of closely related images into different classes on the basis of diminutive but continuous variations that take place in physical systems over time and have a substantial effect. This study performs ozone depletion identification through classification using a Faster Region-Based Convolutional Neural Network (F-RCNN). The main advantage of F-RCNN is that it accumulates bounding boxes on images to differentiate depleted and non-depleted regions. Furthermore, the primary goal of image classification is to accurately predict the target class of each minutely varied case in the dataset based on ozone saturation. The permanent changes in climate are of serious concern. The leading causes behind these destructive variations are ozone layer depletion, greenhouse gas release, deforestation, pollution, contamination of water resources, and UV radiation. This research focuses on prediction by identifying ozone layer depletion, because it causes many health issues, e.g., skin cancer, damage to marine life, crop damage, and impacts on the immune systems of living beings. We classify the ozone image dataset into two major classes, depleted and non-depleted regions, extracting the required persuasive features through F-RCNN. In the existing literature, a CNN has been used for feature extraction, and the extracted diverse RoIs are passed on to the CNN for grouping purposes; it is difficult to manage and differentiate those RoIs after grouping, which negatively affects the gathered results. The classification outcomes of the F-RCNN approach are proficient and demonstrate that overall accuracy lies between 91% and 93% in identifying climate variation through ozone concentration classification, i.e., whether the region in the image under consideration is depleted or non-depleted. Our proposed model achieved 93% accuracy and outperforms the prevailing techniques.
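A basic building block behind the bounding-box comparison described above is intersection-over-union (IoU), which Faster R-CNN pipelines commonly use to match predicted boxes against labeled regions. A minimal sketch (the function name and `[x1, y1, x2, y2]` box format are illustrative conventions, not details from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    # corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # overlap area 1 over union area 7
```

A predicted region is typically counted as matching a labeled (e.g. depleted) region when its IoU exceeds a threshold such as 0.5.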
Abstract: Background: Distinguishing between primary clear cell carcinoma of the liver (PCCCL) and common hepatocellular carcinoma (CHCC) through traditional inspection methods before the operation is difficult. This study aimed to establish a Faster region-based convolutional neural network (RCNN) model for the accurate differential diagnosis of PCCCL and CHCC. Methods: In this study, we collected the data of 62 patients with PCCCL and 1079 patients with CHCC in Beijing YouAn Hospital from June 2012 to May 2020. A total of 109 patients with CHCC and 42 patients with PCCCL were randomly divided into the training validation set and the test set in a ratio of 4:1. The Faster RCNN was used for deep learning on the patients' data in the training validation set, and a convolutional neural network model was established to distinguish PCCCL and CHCC. The accuracy, average precision, and recall of the model for diagnosing PCCCL and CHCC were used to evaluate the detection performance of the Faster RCNN algorithm. Results: A total of 4392 images of 121 patients (1032 images of 33 patients with PCCCL and 3360 images of 88 patients with CHCC) were used in the training validation set for deep learning and establishing the model, and 1072 images of 30 patients (320 images of nine patients with PCCCL and 752 images of 21 patients with CHCC) were used to test the model. The accuracy of the model for diagnosing PCCCL and CHCC was 0.962 (95% confidence interval [CI]: 0.931-0.992). The average precision of the model for diagnosing PCCCL was 0.908 (95% CI: 0.823-0.993) and that for diagnosing CHCC was 0.907 (95% CI: 0.823-0.993). The recall of the model for diagnosing PCCCL was 0.951 (95% CI: 0.916-0.985) and that for diagnosing CHCC was 0.960 (95% CI: 0.854-0.962). Making a diagnosis with the model took an average of 4 s per patient. Conclusion: The Faster RCNN model can accurately distinguish PCCCL and CHCC. This model could be important for clinicians in making appropriate treatment plans for patients with PCCCL or CHCC.
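The 95% confidence intervals reported above are for proportions (accuracy, precision, recall). One common way to approximate such an interval is the normal (Wald) formula p ± z·√(p(1−p)/n); the abstract does not state which method was used, so the sketch below is a generic illustration with a hypothetical sample size:

```python
import math

def wald_ci(p, n, z=1.96):
    """Normal-approximation 95% CI for a proportion p observed over n cases.

    This is the Wald interval; the study may have used a different method.
    """
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# e.g. an accuracy of 0.962 over a hypothetical n = 150 cases
print(wald_ci(0.962, 150))
```

For proportions near 0 or 1 with small n, a Wilson or Clopper-Pearson interval is usually preferred over the Wald formula.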
Funding: This work was supported by grants from the National Natural Science Foundation of China (No. 81802888) and the Key Research and Development Project of Shandong Province (No. 2018GSF118206 and No. 2018GSF118088).
Abstract: Background: Early diagnosis and accurate staging are important to improve the cure rate and prognosis for pancreatic cancer. This study was performed to develop an automatic and accurate image-processing system that can read computed tomography (CT) images correctly and make the diagnosis of pancreatic cancer faster. Methods: The establishment of the artificial intelligence (AI) system for pancreatic cancer diagnosis based on sequential contrast-enhanced CT images was composed of two processes: training and verification. During the training process, our study used all 4385 CT images from 238 pancreatic cancer patients in the database as the training data set. Additionally, we used VGG16, which was pretrained on ImageNet and contains 13 convolutional layers and three fully connected layers, to initialize the feature extraction network. In the verification experiment, we used sequential clinical CT images from 238 pancreatic cancer patients as our experimental data and input these data into the faster region-based convolutional network (Faster R-CNN) model that had completed training. In total, 1699 images from 100 pancreatic cancer patients were included for clinical verification. Results: A total of 338 patients with pancreatic cancer were included in the study. Differences in the clinical characteristics (sex, age, tumor location, differentiation grade, and tumor-node-metastasis stage) between the training and verification groups were insignificant. The mean average precision was 0.7664, indicating a good training effect of the Faster R-CNN. Sequential contrast-enhanced CT images of 100 pancreatic cancer patients were used for clinical verification. The area under the receiver operating characteristic curve calculated according to the trapezoidal rule was 0.9632. It took approximately 0.2 s for the Faster R-CNN AI to automatically process one CT image, which is much faster than the time required for diagnosis by an imaging specialist. Conclusions: Faster R-CNN AI is an effective and objective method with high accuracy for the diagnosis of pancreatic cancer.
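The area under the ROC curve computed "according to the trapezoidal rule", as described above, sums the area of the trapezoid under each segment of the curve. A minimal sketch (the FPR/TPR points are illustrative, not the study's operating points):

```python
def trapezoidal_auc(fpr, tpr):
    """Area under a ROC curve given FPR points in ascending order
    and the matching TPR values, using the trapezoidal rule."""
    auc = 0.0
    for i in range(1, len(fpr)):
        # trapezoid between consecutive ROC points
        auc += (fpr[i] - fpr[i - 1]) * (tpr[i] + tpr[i - 1]) / 2.0
    return auc

print(trapezoidal_auc([0.0, 0.5, 1.0], [0.0, 1.0, 1.0]))  # 0.75
```

A perfect classifier whose curve rises straight to TPR = 1 at FPR = 0 yields an AUC of 1.0 under the same rule.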
Funding: National Defense Pre-research Fund Project (No. KMGY318002531).
Abstract: In order to solve the problem of small-object detection in unmanned aerial vehicle (UAV) aerial images with complex backgrounds, a general detection method for multi-scale small objects based on the Faster region-based convolutional neural network (Faster R-CNN) is proposed. The bird's nest on the high-voltage tower is taken as the research object. First, we use the improved convolutional neural network ResNet101 to extract object features, and then use multi-scale sliding windows to obtain object region proposals on convolution feature maps with different resolutions. Finally, a deconvolution operation is added to further enhance the selected feature map with higher resolution, which is then taken as the feature mapping layer of the region proposals passed to the object detection sub-network. The detection results for bird's nests in UAV aerial images show that the proposed method can precisely detect small objects in aerial images.
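The multi-scale sliding windows over the feature maps correspond to anchor boxes in a standard Faster R-CNN region proposal network. A minimal sketch of anchor generation (the base size, ratios, and scales below are the usual Faster R-CNN defaults, not values reported by the paper; for small objects one would add smaller scales):

```python
import numpy as np

def make_anchors(base_size=16, ratios=(0.5, 1.0, 2.0), scales=(8, 16, 32)):
    """Generate centred anchor boxes [x1, y1, x2, y2], one per scale/ratio pair."""
    anchors = []
    for scale in scales:
        side = base_size * scale          # square side length at this scale
        for ratio in ratios:
            h = side * np.sqrt(ratio)     # keep area constant, vary aspect
            w = side / np.sqrt(ratio)
            anchors.append([-w / 2, -h / 2, w / 2, h / 2])
    return np.array(anchors)

print(make_anchors().shape)  # (9, 4): 3 scales x 3 aspect ratios
```

At inference, this anchor set is replicated at every feature-map position, which is what makes the windows "sliding" and multi-scale.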
Abstract: In order to improve the accuracy of threaded hole object detection, combining a dual-camera vision system with Hough transform circle detection, we propose an object detection method for workpiece threaded holes based on the Faster region-based convolutional neural network (Faster R-CNN). First, a dual-camera image acquisition system is established. One industrial camera placed at a high position is responsible for collecting the whole image of the workpiece, and suspected screw hole positions on the workpiece can be preliminarily selected by the Hough transform detection algorithm. Then, the other industrial camera is responsible for collecting local images of the suspected screw holes detected by the Hough transform, one by one. After that, a ResNet50-based Faster R-CNN object detection model is trained on a self-built screw hole data set. Finally, the local image of the threaded hole is input into the trained Faster R-CNN object detection model for further identification and location. The experimental results show that the proposed method can effectively handle the small-object detection of threaded holes and, compared with methods that use only the Hough transform or only Faster R-CNN object detection, it achieves high recognition and positioning accuracy.
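The Hough circle detection used above for preliminary screw-hole screening votes each edge pixel into an accumulator over candidate circle centres. A minimal single-radius sketch in NumPy (a real pipeline would typically use `cv2.HoughCircles` and sweep a radius range; the parameters here are illustrative):

```python
import numpy as np

def hough_circle_centres(edges, radius, n_angles=120):
    """Accumulate votes for circle centres at one fixed radius.

    edges: 2-D boolean array of edge pixels. Each edge pixel votes for
    every centre that would place it on a circle of the given radius.
    """
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int64)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy = np.rint(y - radius * np.sin(thetas)).astype(int)
        cx = np.rint(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # one vote per candidate centre
    return acc
```

The accumulator's peak marks the most likely circle centre, which can then be cropped and passed to the second camera / Faster R-CNN stage.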
Abstract: For traffic object detection in foggy environments based on convolutional neural networks (CNNs), data sets collected in fog-free environments are generally used to train the network directly. As a result, the network cannot learn the characteristics of objects in foggy environments from the training set, and the detection effect is poor. To improve traffic object detection in foggy environments, we propose a method of generating foggy images from fog-free images from the perspective of data set construction. First, taking the KITTI object detection data set as the original fog-free images, we generate the depth image of each original image by using an improved Monodepth unsupervised depth estimation method. Then, a geometric prior depth template is constructed to fuse the image entropy, taken as a weight, with the depth image. After that, a foggy image is generated from the depth image based on the atmospheric scattering model. Finally, we take two typical object-detection frameworks, the two-stage Faster region-based convolutional neural network (Faster R-CNN) and the one-stage network YOLOv4, and train them on the original data set, the foggy data set, and the mixed data set, respectively. According to the test results on the RESIDE-RTTS data set, collected in outdoor natural foggy environments, the model trained on the mixed data set shows the best effect: the mean average precision (mAP) values are increased by 5.6% and 5.0% under the YOLOv4 model and the Faster R-CNN network, respectively. This proves that the proposed method can effectively improve object identification ability in foggy environments.
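The atmospheric scattering model referenced above composes a foggy image as I(x) = J(x)·t(x) + A·(1 − t(x)), with transmission t(x) = exp(−β·d(x)) derived from the depth map. A minimal sketch (β and the atmospheric light A are illustrative values, and the depth map here is synthetic rather than a Monodepth estimate or entropy-weighted template):

```python
import numpy as np

def synthesize_fog(clear, depth, beta=1.0, atmosphere=0.9):
    """Apply the atmospheric scattering model to a clear HxWx3 float image.

    clear: image with values in [0, 1]; depth: HxW map in arbitrary units.
    I = J * t + A * (1 - t), with transmission t = exp(-beta * depth).
    """
    t = np.exp(-beta * depth)[..., np.newaxis]  # per-pixel transmission
    return clear * t + atmosphere * (1.0 - t)

# nearby pixels keep their colour; distant pixels fade toward the airlight
clear = np.full((2, 2, 3), 0.2)
depth = np.array([[0.0, 10.0], [0.0, 10.0]])
foggy = synthesize_fog(clear, depth)
```

Because fog density grows with depth, the synthetic images degrade distant objects most, which is exactly the condition the detector must learn to handle.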
Abstract: Background: Artificial intelligence-assisted image recognition technology is currently able to detect the target area of an image and fetch information to make classifications according to target features. This study aimed to use deep neural networks for computed tomography (CT) diagnosis of perigastric metastatic lymph nodes (PGMLNs) to simulate the recognition of lymph nodes by radiologists and to acquire more accurate identification results. Methods: A total of 1371 images of suspected lymph node metastasis from enhanced abdominal CT scans were identified and labeled by radiologists and were used with 18,780 original images for faster region-based convolutional neural network (FR-CNN) deep learning. The identification results of the FR-CNN on 6000 random CT images from 100 gastric cancer patients were compared with results obtained from radiologists in terms of identification accuracy. Similarly, 1004 CT images with metastatic lymph nodes that had been post-operatively confirmed by pathological examination and 11,340 original images were used in the identification and learning processes described above. The same 6000 gastric cancer CT images were used for verification, according to which the diagnosis results were analyzed. Results: In the initial group, precision-recall curves were generated based on the precision and recall rates of nodule classes on the training set and the validation set; the mean average precision (mAP) value was 0.5019. To verify the results of the initial learning group, the receiver operating characteristic curve was generated, and the corresponding area under the curve (AUC) value was calculated as 0.8995. After the second phase of precise learning, all the indicators were improved, and the mAP and AUC values were 0.7801 and 0.9541, respectively. Conclusion: Through deep learning, the FR-CNN achieved high judgment effectiveness and recognition accuracy for CT diagnosis of PGMLNs.
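The mAP values quoted in several of the abstracts above come from precision-recall curves: detections are ranked by confidence, precision and recall are accumulated down the ranking, and average precision is the area under that curve. A minimal per-class sketch (the toy scores and labels are illustrative):

```python
import numpy as np

def average_precision(scores, labels):
    """AP as the area under the precision-recall curve for one class.

    scores: detection confidences; labels: 1 for a true-positive match,
    0 for a false positive. mAP is this value averaged over classes.
    """
    order = np.argsort(scores)[::-1]            # rank by confidence
    ranked = np.asarray(labels)[order]
    tp = np.cumsum(ranked == 1)
    fp = np.cumsum(ranked == 0)
    precision = tp / (tp + fp)
    recall = tp / max(1, sum(labels))
    # accumulate precision over each step up in recall
    prev_r, ap = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

print(average_precision([0.9, 0.8, 0.7], [1, 0, 1]))
```

Detection benchmarks often add interpolation (e.g. the PASCAL VOC 11-point scheme) on top of this basic area computation.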