Funding: Supported by grants from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare (HI18C1216), by the National Research Foundation of Korea (NRF-2020R1I1A1A01074256), and by the Soonchunhyang University Research Fund.
Abstract: Recently, the Internet of Medical Things (IoMT) has gained considerable attention as a means of providing improved healthcare services to patients. Since early diagnosis of brain tumors (BT) from medical imaging is an essential task, an automated IoMT- and cloud-enabled BT diagnosis model can be devised using recent deep learning models. With this motivation, this paper introduces a novel IoMT- and cloud-enabled BT diagnosis model, named IoMTC-HDBT. The IoMTC-HDBT model comprises a data acquisition process in which IoMT devices capture magnetic resonance imaging (MRI) brain images and transmit them to the cloud server. Adaptive window filtering (AWF) based image preprocessing is then used to remove noise. In addition, the cloud server executes the disease diagnosis model, which combines the sparrow search algorithm (SSA) with GoogleNet (SSA-GN). The IoMTC-HDBT model applies a functional link neural network (FLNN) to detect and classify the MRI brain images as normal or abnormal, which makes it possible to generate reports instantly for patients located in remote areas. The IoMTC-HDBT model is validated on the BRATS2015 Challenge dataset, and the experimental analysis is carried out in terms of sensitivity, accuracy, and specificity. The experimental outcomes show the superiority of the proposed model, with an accuracy of 0.984.
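The abstract's AWF preprocessing step is not specified in detail; a common adaptive-window denoiser is the adaptive median filter, which grows the neighborhood around a pixel until its median is reliable and replaces only impulse-like pixels. The sketch below is a minimal pure-Python illustration of that idea, assuming a classic adaptive median scheme rather than the paper's exact AWF formulation.

```python
def adaptive_median_filter(img, max_win=5):
    """Simplified adaptive window filter: grow the window around each
    pixel until the window median is not itself an extreme value, then
    replace the pixel only if it looks like impulse noise.
    NOTE: a stand-in for the paper's AWF step, not its exact algorithm."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            win = 1
            while win <= max_win:
                vals = sorted(
                    img[j][i]
                    for j in range(max(0, y - win), min(h, y + win + 1))
                    for i in range(max(0, x - win), min(w, x + win + 1))
                )
                med, lo, hi = vals[len(vals) // 2], vals[0], vals[-1]
                if lo < med < hi:                  # median is trustworthy
                    if not (lo < img[y][x] < hi):  # pixel is an impulse
                        out[y][x] = med
                    break
                win += 2                           # enlarge window, retry
            else:
                out[y][x] = med                    # window maxed out
    return out
```

On a flat 5x5 region containing one salt-noise pixel, the filter restores the noisy pixel to the background value while leaving clean pixels unchanged.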
Funding: This study was financially supported by the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), the Ministry of Health and Welfare (HI18C1216), and the Soonchunhyang University Research Fund.
Abstract: Big data applications in healthcare have provided a variety of solutions to reduce costs, errors, and waste. This work aims to develop a real-time system based on big medical data processing in the cloud for the prediction of health issues. In the proposed scalable system, medical parameters are sent to Apache Spark to extract attributes from the data and apply the proposed machine learning algorithm. In this way, healthcare risks can be predicted and sent as alerts and recommendations to users and healthcare providers. The proposed work also aims to provide an effective recommendation system by using streaming medical data, historical data from a user's profile, and a knowledge database to make the most appropriate real-time recommendations and alerts based on the sensors' measurements. The proposed scalable system works by tweeting the health status attributes of users. Their cloud profile receives the streaming healthcare data in real time, and the health attributes are passed to a machine learning prediction algorithm to predict the users' health status. Subsequently, their status can be sent on demand to healthcare providers. Therefore, machine learning algorithms can be applied to streaming healthcare data from wearables and provide users with insights into their health status. These algorithms can help healthcare providers and individuals focus on health risks and health status changes and consequently improve quality of life.
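The alert-generation step described above can be illustrated with a minimal rule-based sketch: each streamed reading is compared against normal ranges for its vital signs. The thresholds and attribute names below are hypothetical placeholders; the paper's system derives its decisions from a learned model and a knowledge database rather than fixed ranges.

```python
# Hypothetical normal ranges; a real system would learn or look these up.
THRESHOLDS = {
    "heart_rate": (50, 120),      # beats per minute
    "spo2": (94, 100),            # blood oxygen saturation, %
    "temperature": (36.0, 38.0),  # degrees Celsius
}

def check_vitals(reading):
    """Return a list of alert strings for every attribute in `reading`
    (a dict of attribute name -> measured value) that falls outside
    its normal range. Unknown attributes are passed through silently."""
    alerts = []
    for name, value in reading.items():
        lo, hi = THRESHOLDS.get(name, (float("-inf"), float("inf")))
        if not (lo <= value <= hi):
            alerts.append(f"{name}={value} outside normal range [{lo}, {hi}]")
    return alerts
```

In a streaming deployment this function would run per micro-batch (e.g. inside a Spark `foreachBatch` handler), with the resulting alerts pushed to users and providers.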
Funding: This study was supported by a grant from the National Research Foundation of Korea (NRF 2016M3A9E9942010), by grants from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare (HI18C1216), and by the Soonchunhyang University Research Fund.
Abstract: In March 2020, the World Health Organization declared the coronavirus disease (COVID-19) outbreak a pandemic due to its uncontrolled global spread. Reverse transcription polymerase chain reaction is a laboratory test that is widely used for the diagnosis of this deadly disease. However, the limited availability of testing kits and qualified staff and the drastically increasing number of cases have hampered massive testing. To handle COVID-19 testing problems, we apply the Internet of Things and artificial intelligence to achieve self-adaptive, secure, and fast resource allocation, real-time tracking, remote screening, and patient monitoring. In addition, we implement a cloud platform for efficient spectrum utilization. Thus, we propose a cloud-based intelligent system for remote COVID-19 screening using a cognitive-radio-based Internet of Things and deep learning. Specifically, a deep learning technique recognizes radiographic patterns in chest computed tomography (CT) scans. To this end, contrast-limited adaptive histogram equalization is applied to an input CT scan, followed by bilateral filtering to enhance the spatial quality. The image quality of the CT scan is assessed using the blind/referenceless image spatial quality evaluator. Then, a deep transfer learning model, VGG-16, is trained to diagnose a suspected CT scan as either COVID-19 positive or negative. Experimental results demonstrate that the proposed VGG-16 model outperforms existing COVID-19 screening models in terms of accuracy, sensitivity, and specificity. The results obtained from the proposed system can be verified by doctors and sent to remote places through the Internet.
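The contrast-limited equalization step can be sketched in simplified form. Real CLAHE (e.g. OpenCV's `cv2.createCLAHE`) equalizes local tiles and interpolates between them; the pure-Python sketch below applies the same core idea globally: clip the histogram to cap contrast amplification, build the cumulative distribution, and remap intensities. It is an illustrative simplification, not the preprocessing used in the paper.

```python
def clipped_hist_equalize(pixels, levels=256, clip_frac=0.02):
    """Global histogram equalization with a clip limit: a simplified
    stand-in for CLAHE (which works on local tiles, not the whole
    image). `pixels` is a flat list of integer intensities."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Clip each histogram bin; the clip limit is what bounds
    # contrast amplification in CLAHE.
    clip = max(1, int(clip_frac * n))
    excess = sum(max(0, h - clip) for h in hist)
    hist = [min(h, clip) for h in hist]
    # Redistribute the clipped excess uniformly across all bins.
    bonus = excess // levels
    hist = [h + bonus for h in hist]
    # Build the cumulative distribution and map it to output levels.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    scale = (levels - 1) / cdf[-1]
    return [round(cdf[p] * scale) for p in pixels]
```

Applied to a low-contrast image whose intensities cluster in a narrow band, the mapping spreads them across the full output range.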
Funding: This study was supported by grants from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare (HI18C1216), by the National Research Foundation of Korea (NRF-2020R1I1A1A01074256), and by the Soonchunhyang University Research Fund.
Abstract: Owing to technological developments, medical image analysis has received considerable attention for the rapid detection and classification of diseases. The brain is an essential human organ, and brain tumors can cause loss of memory, vision, and naming ability. In 2020, approximately 18,020 deaths occurred due to brain tumors. These cases can be minimized if a brain tumor is diagnosed at a very early stage. Computer vision researchers have introduced several techniques for brain tumor detection and classification. However, owing to many factors, this remains a challenging task. These challenges relate to the tumor's size, shape, and location, and to the selection of important features, among others. In this study, we propose a framework for multimodal brain tumor classification using an ensemble of optimal deep learning features. In the proposed framework, a database is first normalized into high-grade glioma (HGG) and low-grade glioma (LGG) patients, and two pre-trained deep learning models (ResNet50 and DenseNet201) are chosen. The deep learning models are modified and trained using transfer learning. Subsequently, an enhanced ant colony optimization algorithm is proposed for selecting the best features from both deep models. The selected features are fused using a serial-based approach and classified using a cubic support vector machine. The experiments were conducted on the BraTS2019 dataset and achieved accuracies of 87.8% and 84.6% for HGG and LGG, respectively. A comparison with several classification methods shows the significance of the proposed technique.
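The "select, then serially fuse" pipeline above can be sketched in a few lines. The selection step below is a simple top-k ranking by importance score, standing in for the paper's enhanced ant colony optimization (which searches for a best subset rather than ranking); the fusion step is plain concatenation, which is what serial-based fusion amounts to.

```python
def select_top_k(features, scores, k):
    """Keep the k features with the highest importance scores.
    (Stand-in for the paper's enhanced ant colony optimization,
    which performs a search rather than a one-shot ranking.)"""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = sorted(order[:k])                  # preserve original feature order
    return [features[i] for i in keep]

def serial_fusion(vec_a, vec_b):
    """Serial-based fusion: concatenate two selected feature vectors
    (e.g. one from ResNet50, one from DenseNet201) into one vector."""
    return vec_a + vec_b
```

The fused vector would then be handed to the classifier (a cubic-kernel SVM in the paper, e.g. scikit-learn's `SVC(kernel="poly", degree=3)`).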
Funding: This study was supported by grants from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare (HI18C1216), and by the Soonchunhyang University Research Fund.
Abstract: Background: Human gait recognition (HGR) is a biometric-based approach that is widely used for surveillance and has been studied by researchers for the past several decades. Several factors affect system performance, such as walking variation due to clothes, a person carrying luggage, and variations in the view angle. Proposed method: In this work, a new method is introduced to overcome different problems of HGR. A hybrid method is proposed for efficient HGR using deep learning and selection of the best features. Four major steps are involved in this work: preprocessing of the video frames, adaptation of the pre-trained CNN model VGG-16 for feature computation, removal of redundant features extracted from the CNN model, and classification. For the reduction of irrelevant features, a Principal Score and Kurtosis based approach, named PSbK, is proposed. The PSbK features are then fused into one matrix. Finally, this fused vector is fed to a one-against-all multiclass support vector machine (OAMSVM) classifier for the final results. Results: The system is evaluated on the CASIA B database using six view angles (0°, 18°, 36°, 54°, 72°, and 90°), attaining accuracies of 95.80%, 96.0%, 95.90%, 96.20%, 95.60%, and 95.50%, respectively. Conclusion: The comparison with recent methods shows that the proposed method performs better.
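The one-against-all decision rule used by the OAMSVM classifier above is simple to illustrate: one binary scorer is trained per class, and the predicted class is the one whose scorer is most confident on the sample. The sketch below assumes hypothetical linear decision functions (weights and bias per class); a real SVM would use learned support vectors and kernels.

```python
def one_against_all_predict(sample, classifiers):
    """One-against-all decision rule. `classifiers` maps each class
    label to a hypothetical linear decision function (weights, bias);
    the predicted label is the one with the highest score w.x + b."""
    best_label, best_score = None, float("-inf")
    for label, (weights, bias) in classifiers.items():
        score = sum(w * x for w, x in zip(weights, sample)) + bias
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

In practice this rule is what libraries such as scikit-learn implement behind `OneVsRestClassifier`; each binary SVM votes with its decision-function value rather than a hard 0/1 prediction.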