Funding: This study was supported by grants from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare (HI18C1216); the National Research Foundation of Korea (NRF-2020R1I1A1A01074256); and the Soonchunhyang University Research Fund.
Abstract: Owing to technological developments, medical image analysis has received considerable attention for the rapid detection and classification of diseases. The brain is an essential human organ, and brain tumors cause loss of memory, vision, and name recall. In 2020, approximately 18,020 deaths occurred due to brain tumors. These cases can be minimized if a brain tumor is diagnosed at a very early stage. Computer vision researchers have introduced several techniques for brain tumor detection and classification; however, owing to many factors, this remains a challenging task. These challenges relate to tumor size, shape, and location, and to the selection of important features, among others. In this study, we propose a framework for multimodal brain tumor classification using an ensemble of optimal deep learning features. In the proposed framework, a database is first normalized into high-grade glioma (HGG) and low-grade glioma (LGG) patients, and two pre-trained deep learning models (ResNet50 and DenseNet201) are chosen. The models are modified and trained using transfer learning. Subsequently, an enhanced ant colony optimization algorithm is proposed to select the best features from both deep models. The selected features are fused using a serial-based approach and classified using a cubic support vector machine. Experiments were conducted on the BraTS2019 dataset, achieving accuracies of 87.8% and 84.6% for HGG and LGG, respectively. A comparison with several classification methods shows the significance of the proposed technique.
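The last two steps of this pipeline, serial (concatenation-based) fusion of the two models' selected features followed by a cubic SVM, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature matrices, their dimensions, and the labels are random stand-ins for the optimized ResNet50/DenseNet201 features, and a "cubic SVM" is realized as an SVC with a degree-3 polynomial kernel.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Illustrative stand-ins for the optimized deep features of n subjects
# (dimensions are arbitrary, not taken from the paper).
rng = np.random.default_rng(0)
n = 200
resnet_feats = rng.normal(size=(n, 120))    # selected ResNet50 features
densenet_feats = rng.normal(size=(n, 150))  # selected DenseNet201 features
labels = rng.integers(0, 2, size=n)         # e.g., tumor grade labels

# Serial fusion: concatenate the two selected feature vectors per subject.
fused = np.concatenate([resnet_feats, densenet_feats], axis=1)

# Cubic SVM = support vector classifier with a degree-3 polynomial kernel.
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.25,
                                          random_state=0)
clf = SVC(kernel="poly", degree=3)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

With real features in place of the random matrices, `accuracy` would correspond to the per-grade accuracies the abstract reports.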
Funding: This research was supported by the X-mind Corps program of the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (No. 2019H1D8A1105622), and by the Soonchunhyang University Research Fund.
Abstract: The recent COVID-19 pandemic, caused by the novel coronavirus severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has had a significant impact on human life and economies around the world. A reverse transcription polymerase chain reaction (RT-PCR) test is used to screen for this disease, but its low sensitivity means that it is not sufficient for early detection and treatment. As RT-PCR is also a time-consuming procedure, there is interest in automated techniques for diagnosis. Deep learning has a key role to play in medical imaging, where the most important issue is the choice of key features. Here, we propose a deep learning feature-based system for the automated classification of computed tomography (CT) images to identify COVID-19. Initially, a database of three classes was prepared: pneumonia, COVID-19, and healthy. The dataset consisted of 6,000 CT images refined by a hybrid contrast stretching approach. Next, two advanced deep learning models (ResNet50 and DarkNet53) were fine-tuned and trained through transfer learning. Features were extracted from the second-to-last feature layer of both models and further optimized using a hybrid optimization approach that, for each deep model, combines the Rao-1 algorithm with particle swarm optimization (PSO). The selected features were then merged using the new parallel minimum distance non-redundant (PMDNR) approach, and the final fused vector was classified using an extreme learning machine classifier. Experiments on the prepared data achieved an overall accuracy of 95.6%. Comparing different classification algorithms at different feature levels demonstrated the reliability of the proposed framework.
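The Rao-1 algorithm named in the hybrid optimization step is a simple parameter-free metaheuristic: each candidate moves toward the current best solution and away from the worst, X_new = X + r * (X_best - X_worst), keeping the move only if it improves the objective. A minimal sketch on a toy objective (not the paper's feature-selection objective, whose details are not given in the abstract):

```python
import numpy as np

def rao1(objective, dim, pop=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Minimal Rao-1: move each candidate toward the best and away from
    the worst, accepting a move only if it lowers the objective."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(pop, dim))
    f = np.apply_along_axis(objective, 1, X)
    for _ in range(iters):
        best, worst = X[np.argmin(f)], X[np.argmax(f)]
        r = rng.random((pop, dim))
        X_new = np.clip(X + r * (best - worst), lo, hi)
        f_new = np.apply_along_axis(objective, 1, X_new)
        improved = f_new < f            # greedy acceptance
        X[improved], f[improved] = X_new[improved], f_new[improved]
    i = np.argmin(f)
    return X[i], float(f[i])

# Toy objective: the sphere function, minimized at the origin.
sol, val = rao1(lambda x: float(np.sum(x * x)), dim=5)
```

In the paper's setting, the candidates would encode feature subsets and the objective would score classification quality; the abstract also reports combining this with PSO, which follows the same population-update skeleton with velocity terms.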
Abstract: Background: In medical image analysis, the diagnosis of skin lesions remains a challenging task. Skin cancer is a common type of cancer worldwide, and dermoscopy is one of the latest technologies used for its diagnosis. Challenges: Many computerized methods have been introduced in the literature to classify skin cancers. However, challenges remain, such as imbalanced datasets, low-contrast lesions, and the extraction of irrelevant or redundant features. Proposed Work: In this study, a new technique is proposed based on a conventional and deep learning framework. The proposed framework consists of two major tasks: lesion segmentation and classification. In the lesion segmentation task, contrast is first improved by the fusion of two filtering techniques, and a color transformation is then applied to improve color discrimination of the lesion area. Subsequently, the best channel is selected and the lesion map is computed, which is further converted into binary form using a thresholding function. In the lesion classification task, two pre-trained CNN models were modified and trained using transfer learning. Deep features were extracted from both models and fused using canonical correlation analysis. Because the fusion process also introduces a few redundant features that lower classification accuracy, a new technique called maximum entropy score-based selection (MESbS) is proposed as a solution. The features selected through this approach are fed into a cubic support vector machine (C-SVM) for the final classification. Results: Experiments were conducted on two datasets: ISIC 2017 for the lesion segmentation task and HAM10000 for the classification task. The achieved accuracies were 95.6% and 96.7%, respectively, which are higher than those of existing techniques.