Funding: Supported by the Technology Development Program of MSS [No. S3033853] and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A4A1031509).
Abstract: Worldwide, cotton is the most profitable cash crop, yet its production suffers every year because of several diseases. Computerized methods for early-stage disease detection may reduce the loss in cotton production. Although several methods have been proposed for the detection of cotton diseases, limitations remain because of low-quality images, variations in size, shape, and orientation, and complex backgrounds. These factors create a need for novel feature extraction and selection methods for accurate cotton disease classification. Therefore, this research proposes an optimized feature-fusion-based model in which two pre-trained architectures, EfficientNet-b0 and Inception-v3, are used to extract features; each model produces a feature vector of length N×1000. The extracted features are then serially concatenated into a feature vector of length N×2000, and the most prominent features are selected using the Emperor Penguin Optimizer (EPO) method. The method is evaluated on two publicly available datasets, the Kaggle cotton disease dataset-I and the Kaggle cotton-leaf-infection dataset-II. The EPO method returns feature vectors of length 1×755 and 1×824 for dataset-I and dataset-II, respectively. Classification is performed using 5-, 7-, and 10-fold cross-validation. The Quadratic Discriminant Analysis (QDA) classifier achieves an accuracy of 98.9% on 5 folds, 98.96% on 7 folds, and 99.07% on 10 folds on the Kaggle cotton disease dataset-I, while the Ensemble Subspace K-Nearest Neighbor (KNN) classifier achieves 99.16% on 5 folds, 98.99% on 7 folds, and 99.27% on 10 folds on the Kaggle cotton-leaf-infection dataset-II.
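The serial fusion and selection step described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the backbone outputs are random placeholders, and a random index subset stands in for the EPO optimizer's selected features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two backbone outputs: each pre-trained
# network yields an N x 1000 feature matrix for N images.
N = 8
feat_efficientnet = rng.normal(size=(N, 1000))  # placeholder EfficientNet-b0 features
feat_inception = rng.normal(size=(N, 1000))     # placeholder Inception-v3 features

# Serial (column-wise) concatenation -> N x 2000 fused feature matrix.
fused = np.hstack([feat_efficientnet, feat_inception])

# EPO returns a subset of the 2000 columns; here a random mask of 755
# indices stands in for the optimizer's output on dataset-I.
selected_idx = np.sort(rng.choice(2000, size=755, replace=False))
selected = fused[:, selected_idx]
print(fused.shape, selected.shape)  # (8, 2000) (8, 755)
```

The selected matrix would then be fed to the QDA or ensemble-KNN classifier under k-fold cross-validation.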
Funding: Supported by the Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and by the Soonchunhyang University Research Fund.
Abstract: Recognition of human gait is a difficult task, particularly for unobtrusive surveillance in video and for human identification from a large distance. Therefore, a method is proposed for the classification and recognition of different types of human gait. The proposed approach consists of two phases. In phase I, a new model named convolutional bidirectional long short-term memory (Conv-BiLSTM) is proposed to classify the video frames of human gait. In this model, features are derived through a convolutional neural network (CNN), ResNet-18, and supplied as input to the LSTM model, which provides more distinguishable temporal information. In phase II, the YOLOv2-SqueezeNet model is designed, in which deep features are extracted using the fireconcat-02 layer and passed to the tinyYOLOv2 model to recognize and localize the human gaits with predicted scores. The proposed method achieves up to 90% correct prediction scores on the CASIA-A, CASIA-B, and CASIA-C benchmark datasets, improving on recent existing works.
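The bidirectional idea behind Conv-BiLSTM can be illustrated with a toy forward-backward recurrence over per-frame CNN features. This is a simplified sketch, not the paper's model: a plain tanh recurrence stands in for the LSTM cell, the frame features and weights are random placeholders, and only the shape logic is shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-frame CNN features: T frames, each a d-dim vector
# (e.g., pooled ResNet-18 activations); h is the hidden size.
T, d, h = 6, 32, 16
frames = rng.normal(size=(T, d))

# Random placeholder weights; a tanh recurrence stands in for the LSTM cell.
Wx = rng.normal(scale=0.1, size=(d, h))
Wh = rng.normal(scale=0.1, size=(h, h))

def run_direction(seq):
    # Run the recurrence over a sequence of frame features.
    hidden = np.zeros(h)
    outputs = []
    for x in seq:
        hidden = np.tanh(x @ Wx + hidden @ Wh)
        outputs.append(hidden)
    return np.stack(outputs)

fwd = run_direction(frames)              # forward pass over time
bwd = run_direction(frames[::-1])[::-1]  # backward pass, re-aligned to time order

# Bidirectional output: forward and backward states concatenated per frame,
# giving each frame context from both past and future frames.
bi_states = np.concatenate([fwd, bwd], axis=1)
print(bi_states.shape)  # (6, 32)
```

The per-frame bidirectional states would then be pooled or passed to a classifier head for gait-type prediction.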
Funding: This work was supported by the Soonchunhyang University Research Fund.
Abstract: Because of their nutritional and therapeutic value, plants are regarded as important, and they are the main source of humankind's energy supply. Plant pathogens affect the leaves at certain times during crop cultivation, leading to substantial harm to crop productivity and economic selling price. In the agriculture industry, the identification of fungal diseases plays a vital role; however, it requires immense labor, long planning time, and extensive knowledge of plant pathogens. Computerized approaches for plant disease identification have been developed and tested by different researchers, in many cases with important results. Therefore, the proposed study presents a new framework for the recognition of fruit and vegetable diseases. This work comprises two phases. In phase I, an improved localization model is presented that combines two deep learning models, You Only Look Once (YOLO) v2 and an Open Neural Network Exchange (ONNX) model. The localization model is constructed from deep features extracted from the ONNX model; feature learning is performed through the convolutional-05 layer, and the result is transferred as input to the YOLOv2 model. The localized images are then passed as input to classify the different types of plant diseases. The classification model is constructed by ensembling the learned deep features: features of dimension 1×1000 are extracted from a pre-trained EfficientNet-b0 model and supplied to the next 7 layers of the convolutional neural network, including 1 feature-input layer, 1 ReLU, 1 batch-normalization, and 2 fully-connected layers. The proposed model classifies the plant input images into the associated labels with approximately 95% prediction scores, which is far better than currently published work in this domain.
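The small classification head described above (features in, ReLU, batch-normalization, fully-connected layers) can be sketched as a forward pass. This is a hypothetical illustration, not the trained model: weights are random placeholders, the class count is assumed, and the layer ordering is one plausible reading of the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical batch of pre-extracted 1 x 1000 backbone feature vectors.
X = rng.normal(size=(4, 1000))
n_classes = 5  # assumed number of disease labels

relu = lambda z: np.maximum(z, 0.0)

def batchnorm(z, eps=1e-5):
    # Simple normalization over the batch dimension (inference-style sketch).
    return (z - z.mean(0)) / np.sqrt(z.var(0) + eps)

# Two fully-connected layers with random placeholder weights.
W1 = rng.normal(scale=0.05, size=(1000, 128))
W2 = rng.normal(scale=0.05, size=(128, n_classes))

hidden = batchnorm(relu(X @ W1))
logits = hidden @ W2
probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)  # softmax

print(probs.shape)  # (4, 5); each row is a probability distribution over labels
```

Each row of `probs` sums to one, and the predicted label is the argmax per row.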
Funding: Supported by the Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and by the Soonchunhyang University Research Fund.
Abstract: Coronavirus disease 2019 (COVID-19) can cause severe pneumonia that may be fatal, so correct diagnosis is essential. Computed tomography (CT) usefully detects signs of COVID-19 infection. In this retrospective study, we present an improved framework for the detection of COVID-19 infection on CT images; the steps include pre-processing, segmentation, feature extraction/fusion/selection, and classification. In the pre-processing phase, a Gabor wavelet filter is applied to enhance image intensities, and a marker-based, watershed-controlled approach with thresholding is used to isolate the lung region. In the segmentation phase, COVID-19 lesions are segmented using an encoder-decoder deep learning model in which DeepLabv3 serves as the bottleneck and MobileNetv2 as the classification head; DeepLabv3 is an effective decoder that helps to refine the segmentation of lesion boundaries. The model was trained using fine-tuned hyperparameters selected after extensive experimentation. Subsequently, Gray Level Co-occurrence Matrix (GLCM) features and statistical features, including circularity, area, and perimeter, were computed for each segmented image. The computed features were serially fused, and the best (optimally discriminatory) features were selected using a Genetic Algorithm (GA) for classification. The performance of the method was evaluated on two benchmark datasets, the COVID-19 Segmentation dataset and the POF Hospital dataset, and the results were better than those of existing methods.
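The GLCM step above counts how often pairs of gray levels co-occur at a fixed offset and derives texture descriptors from the normalized counts. A minimal sketch on a toy 4-level patch (a placeholder for a segmented lesion region, not data from the paper):

```python
import numpy as np

# Tiny grayscale patch with 4 gray levels (placeholder for a segmented lesion).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = 4

# GLCM for the horizontal neighbor offset (0, 1): count how often
# gray level i appears immediately to the left of gray level j.
glcm = np.zeros((levels, levels))
for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
    glcm[i, j] += 1
glcm /= glcm.sum()  # normalize counts to joint probabilities

# Two classic Haralick descriptors computed from the normalized matrix.
ii, jj = np.indices(glcm.shape)
contrast = ((ii - jj) ** 2 * glcm).sum()  # weights distant-level pairs
energy = (glcm ** 2).sum()                # high for uniform textures
print(round(contrast, 3), round(energy, 3))  # 0.583 0.167
```

In practice several offsets and angles are accumulated, and the resulting descriptors are fused with the shape statistics before GA-based selection.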
Funding: This research was supported by the Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and by the Soonchunhyang University Research Fund.
Abstract: White blood cells (WBCs) are a vital part of the immune system that protects the body from different types of bacteria and viruses. Abnormal cell growth destroys the body's immune system, and computerized methods play a vital role in detecting abnormalities at an initial stage. In this research, a deep learning technique is proposed for the detection of leukemia. The proposed methodology consists of three phases. Phase I uses an Open Neural Network Exchange (ONNX) model and YOLOv2 to localize WBCs. The localized images are passed to Phase II, in which 3D segmentation is performed using DeepLabv3 with the pre-trained Xception model as its base network. The segmented images are used in Phase III, in which features are extracted using the Darknet-53 model and optimized using the Bhattacharyya separability criterion to classify WBCs. The proposed methodology is validated on three publicly available benchmark datasets, namely ALL-IDB1, ALL-IDB2, and LISC, in terms of different metrics such as precision, accuracy, sensitivity, and Dice scores. The results of the proposed method are comparable to those of recent existing methodologies, proving its effectiveness.
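The Bhattacharyya separability criterion ranks each feature by how well it separates two class distributions. A minimal sketch under a Gaussian assumption, on synthetic data (the feature matrices, class labels, and mean shift are all placeholders, not the paper's Darknet-53 features):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical deep features for two WBC classes: feature 0 is made to
# separate the classes well; the remaining features are pure noise.
n, d = 200, 6
class_a = rng.normal(size=(n, d))
class_b = rng.normal(size=(n, d))
class_b[:, 0] += 4.0  # inject a large mean shift into feature 0

def bhattacharyya(x, y):
    # Gaussian Bhattacharyya distance between two 1-D samples:
    # a variance-ratio term plus a normalized mean-difference term.
    m1, m2 = x.mean(), y.mean()
    v1, v2 = x.var(), y.var()
    return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

# Score every feature and rank by separability (highest first);
# a selection step would keep only the top-ranked features.
scores = np.array([bhattacharyya(class_a[:, k], class_b[:, k]) for k in range(d)])
ranking = np.argsort(scores)[::-1]
print(ranking[0])  # 0 — the shifted feature scores highest
```

The retained top-ranked features would then feed the final WBC classifier.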
Funding: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2021R1A2C1010362) and by the Soonchunhyang University Research Fund.
Abstract: Malaria is a severe illness caused by parasites and spread via mosquito bites. In underdeveloped nations, malaria is one of the top causes of mortality, and it is mainly diagnosed through microscopy. Computer-assisted malaria diagnosis is difficult owing to the fine-grained differences in the presentation of some uninfected and infected groups. Therefore, in this study, we present a new approach based on an ensemble quantum-classical framework for malaria classification. The method comprises three core steps: localization, segmentation, and classification. In the first step, an improved FRCNN model is proposed for the localization of infected malaria cells. The RGB localized images are then converted into the YCbCr color space to normalize the image intensity values, and the actual lesion region is segmented using a histogram-based color thresholding approach. The segmented images are employed for classification in two different ways. In the first method, a CNN model is developed by selecting optimum layers after extensive experimentation, and the final computed feature vector is passed to the softmax layer to classify the microscopic malaria images as infected or uninfected. In the second, a quantum-convolutional model is employed for informative feature extraction from the microscopic malaria images, and the extracted feature vectors are supplied to the softmax layer for classification. Finally, the classification results of the two models were analyzed, and it was concluded that the quantum-convolutional model achieved higher accuracy than the CNN. The proposed models attain a precision rate greater than 90%, proving that they perform better than existing models.
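The YCbCr conversion and thresholding steps above can be sketched with the standard BT.601 coefficients. This is an illustrative stand-in, not the paper's pipeline: the patch is synthetic, and a simple midpoint threshold on the luma channel replaces the paper's histogram-based color thresholding rule.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical RGB microscopy patch: a dark 3x3 "parasite" block on a
# bright background, values roughly in [0, 255].
patch = np.full((8, 8, 3), 200.0)
patch[2:5, 2:5] = 40.0  # dark lesion block
patch += rng.normal(scale=2.0, size=patch.shape)

# BT.601 RGB -> YCbCr conversion (full-range form); Y carries the
# luma (intensity) used here for thresholding.
r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
y = 0.299 * r + 0.587 * g + 0.114 * b
cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b

# Midpoint threshold between the two intensity modes; pixels darker
# than the threshold form the candidate lesion mask.
thresh = 0.5 * (y.max() + y.min())
mask = y < thresh
print(mask.sum())  # 9 — the 3x3 lesion block
```

The binary mask would then crop the lesion region that feeds both the classical CNN and the quantum-convolutional classifier.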