Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00218176) and the Soonchunhyang University Research Fund.
Abstract: Human Interaction Recognition (HIR) is one of the more challenging problems in computer vision research because it involves multiple individuals and their mutual interactions within video frames generated from their movements. HIR requires more sophisticated analysis than Human Action Recognition (HAR): HAR focuses solely on individual activities such as walking or running, while HIR addresses the interactions between people. This research aims to develop a robust system for recognizing five common human interactions (hugging, kicking, pushing, pointing, and no interaction) from video sequences captured by multiple cameras. A hybrid Deep Learning (DL) and Machine Learning (ML) model was employed to improve classification accuracy and generalizability. The dataset was collected in an indoor environment, with four-channel cameras capturing the five types of interactions among 13 participants. The data were processed by a DL model with a fine-tuned ResNet (Residual Network) architecture based on 2D Convolutional Neural Network (CNN) layers for feature extraction. The extracted features were then classified with six commonly used ML algorithms: Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Random Forest (RF), Decision Tree (DT), Naive Bayes (NB), and XGBoost. The results demonstrate a high accuracy of 95.45% in classifying human interactions. The hybrid approach enabled effective learning, yielding highly accurate performance across the different interaction types. Future work will explore more complex scenarios involving multiple individuals based on this architecture.
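The two-stage pipeline described above (CNN feature extraction followed by classical ML classification) can be sketched as follows. This is a minimal illustration of the classification stage only, under the assumption that per-frame ResNet embeddings have already been computed; random 512-dimensional vectors and labels stand in for the real features, so the dimensions and data here are placeholders, not the paper's actual setup.

```python
# Sketch of the hybrid pipeline's second stage: classical ML classifiers
# trained on CNN feature vectors. The random data below is a stand-in for
# embeddings produced by the fine-tuned ResNet (assumed to be 512-D here).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

CLASSES = ["hugging", "kicking", "pushing", "pointing", "no_interaction"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 512))                 # placeholder ResNet embeddings
y = rng.integers(0, len(CLASSES), size=500)     # placeholder interaction labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Three of the six classifiers named in the abstract, with default settings.
for name, clf in [("SVM", SVC()),
                  ("KNN", KNeighborsClassifier()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    clf.fit(X_train, y_train)
    print(name, "test accuracy:", round(clf.score(X_test, y_test), 3))
```

With real, discriminative ResNet features the same loop applies unchanged; swapping classifiers behind a common `fit`/`score` interface is what makes comparing the six algorithms straightforward.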
Funding: This work was supported by Taif University (Taif, Saudi Arabia) through Researchers Supporting Project Number TURSP-2020/150.
Abstract: Many patients have begun to use mobile applications to handle different health needs because of improved access to high-speed Internet and smartphones. These devices and mobile applications are now increasingly used and integrated through the medical Internet of Things (mIoT). mIoT is an important part of the digital transformation of healthcare because it can introduce new business models, enable efficiency improvements and cost control, and improve the patient experience. In an mIoT system, when migrating from traditional medical services to electronic medical services, patient protection and privacy are priorities for every stakeholder. It is therefore recommended to use different user authentication and authorization methods to improve security and privacy. In this paper, our proposed model involves a shared identity-verification process covering different situations in the e-health system. We aim to simplify the strict, formal specification of the joint key-authentication model. We use the AVISPA tool, through the well-known HLPSL specification language, to verify user-authentication and smart-card use cases in a user-friendly environment. Our model has economic and strategic advantages for healthcare organizations and healthcare workers: medical staff can more easily expand their knowledge and their ability to analyze medical data. The model can continuously track health indicators to manage treatments automatically and monitor health data in real time. Further, it can help patients prevent chronic diseases with enhanced cognitive-function support. Efficient identity verification in e-health care is all the more crucial for cognitive mitigation as we rely increasingly on mIoT systems.
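The abstract's smart-card authentication scenario rests on a familiar building block: the card proves knowledge of a shared secret without transmitting it. The sketch below is an illustrative HMAC-based challenge-response in Python's standard library; it is not the paper's AVISPA/HLPSL-verified protocol, and all names (`card_response`, `shared_key`) are hypothetical.

```python
# Illustrative challenge-response authentication for a smart-card scenario.
# NOT the paper's protocol: a generic HMAC construction shown for intuition.
import hashlib
import hmac
import secrets

# A key provisioned onto the card at enrolment and held by the server.
shared_key = secrets.token_bytes(32)

def card_response(key: bytes, challenge: bytes) -> bytes:
    """Compute the card's proof of key knowledge for a fresh challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Server side: issue a fresh random nonce so old responses cannot be replayed.
challenge = secrets.token_bytes(16)

# Card side: answer the challenge using the shared key.
response = card_response(shared_key, challenge)

# Server side: recompute and compare in constant time.
ok = hmac.compare_digest(response, card_response(shared_key, challenge))
print("card authenticated:", ok)
```

The fresh nonce and constant-time comparison are the two details a formal tool such as AVISPA would check systematically (replay resistance, secrecy of the key); this sketch only conveys the message flow.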
Abstract: I. The parties held serious and productive discussions on the actions each party will take in the initial phase for the implementation of the Joint Statement of September 19, 2005. The parties reaffirmed their common goal and will
Abstract: In this study, we explore a human activity recognition (HAR) system using computer vision for assisted living systems (ALS). Most existing HAR systems are implemented using wired or wireless sensor networks, which have limitations such as cost, power consumption, weight, and the difficulty elderly users have in wearing and carrying them comfortably. These issues can be overcome by a computer-vision-based HAR system, but such systems typically require a memory-intensive image dataset that takes a long time to train on. The proposed computer-vision-based system overcomes these shortcomings: instead of an image dataset, the authors use key-joint angles, distances between key joints, and slopes between key joints to create a numerical dataset. All of these parameters are recorded via real-time event simulation. The dataset contains 780,000 calculated feature values derived from 20,000 images and is used to train and detect five different human postures: sitting, standing, walking, lying, and falling. The implementation encompasses four algorithms: decision tree (DT), random forest (RF), support vector machine (SVM), and an ensemble approach. The ensemble technique exhibited exceptional performance, with 99% accuracy, 98% precision, 97% recall, and an F1 score of 99%.
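The three per-frame features named above (angles at key joints, distances between key joints, and slopes between key joints) can be computed from 2D keypoint coordinates as shown below. This is a minimal sketch with hypothetical joint names and hand-picked coordinates; the paper's actual keypoint source and feature layout are not specified here.

```python
# Geometric features from 2D key-joint coordinates, as used to build a
# numerical dataset in place of raw images. Joint names and coordinates
# below are illustrative placeholders.
import math

Point = tuple[float, float]

def distance(p: Point, q: Point) -> float:
    """Euclidean distance between two key joints."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def slope(p: Point, q: Point) -> float:
    """Slope of the segment joining two key joints (inf if vertical)."""
    dx = q[0] - p[0]
    return math.inf if dx == 0 else (q[1] - p[1]) / dx

def angle(a: Point, b: Point, c: Point) -> float:
    """Interior angle at joint b formed by segments b-a and b-c, in degrees."""
    ang = math.degrees(math.atan2(c[1] - b[1], c[0] - b[0])
                       - math.atan2(a[1] - b[1], a[0] - b[0]))
    ang = abs(ang)
    return ang if ang <= 180 else 360 - ang

# Example leg configuration: a right angle at the knee.
hip, knee, ankle = (0.0, 0.0), (0.0, 1.0), (1.0, 1.0)
print(distance(hip, knee))      # 1.0
print(angle(hip, knee, ankle))  # 90.0
```

Concatenating such scalars for every joint pair and triple yields a compact feature vector per frame, which is why 20,000 images reduce to a numerical dataset that tree-based and SVM classifiers can train on quickly.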