RFID-based human activity recognition (HAR) attracts attention due to its convenience, noninvasiveness, and privacy protection. Existing RFID-based HAR methods use modeling, CNN, or LSTM to extract features effectively. Still, they have shortcomings: 1) requiring complex hand-crafted data cleaning processes and 2) only addressing single-person activity recognition based on specific RF signals. To solve these problems, this paper proposes a novel device-free method based on a Time-streaming Multiscale Transformer called TransTM. This model leverages the Transformer's powerful data-fitting capability to take raw RFID RSSI data as input without pre-processing. Concretely, we propose a multiscale convolutional hybrid Transformer that captures behavioral features and recognizes single-human activities and human-to-human interactions. Compared with existing CNN- and LSTM-based methods, the Transformer-based method has greater data-fitting power, generalization, and scalability. Furthermore, using RF signals, our method achieves an excellent classification effect on human-behavior-based classification tasks. Experimental results on actual RFID datasets show that this model achieves a high average recognition accuracy (99.1%). The dataset we collected for detecting RFID-based indoor human activities will be published.
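As a rough illustration of the multiscale idea above (not the paper's actual TransTM layers; the window sizes and averaging "kernels" here are invented for the sketch), a raw RSSI sequence can be filtered at several temporal scales and the results concatenated before a sequence model sees them:

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution (correlation) of a signal with a kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def multiscale_features(rssi, scales=(2, 4, 8)):
    """Concatenate features extracted at several temporal scales.

    Each scale uses an averaging kernel of a different width, so short
    and long RSSI fluctuations are both represented in the output.
    """
    features = []
    for w in scales:
        kernel = [1.0 / w] * w          # simple averaging "filter"
        features.extend(conv1d(rssi, kernel))
    return features

raw_rssi = [-60.0, -61.0, -59.0, -62.0, -58.0, -60.0, -63.0, -57.0]
feats = multiscale_features(raw_rssi)
print(len(feats))  # 7 + 5 + 1 = 13 values across the three scales
```

A learned model would use trainable kernels rather than fixed averages, but the concatenation-across-scales structure is the same.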
Artificial intelligence (AI) technology has become integral in the realm of medicine and healthcare, particularly in human activity recognition (HAR) applications such as fitness and rehabilitation tracking. This study introduces a robust coupling analysis framework that integrates four AI-enabled models, combining both machine learning (ML) and deep learning (DL) approaches to evaluate their effectiveness in HAR. The analytical dataset comprises 561 features sourced from the UCI-HAR database, forming the foundation for training the models. Additionally, the MHEALTH database is employed to replicate the modeling process for comparative purposes, while inclusion of the WISDM database, renowned for its challenging features, supports the framework's resilience and adaptability. The ML-based models employ adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM), and random forest (RF) methodologies for data training. In contrast, a DL-based model utilizes a one-dimensional convolutional neural network (1dCNN) to automate feature extraction. Furthermore, the recursive feature elimination (RFE) algorithm, which drives an ML-based estimator to eliminate low-participation features, helps identify the optimal features for enhancing model performance. The best accuracies of the ANFIS, SVM, RF, and 1dCNN models with a meticulous featuring process reach around 90%, 96%, 91%, and 93%, respectively. Comparative analysis using the MHEALTH dataset showcases the 1dCNN model's perfect accuracy (100%), while the RF, SVM, and ANFIS models equipped with selected features achieve accuracies of 99.8%, 99.7%, and 96.5%, respectively. Finally, when applied to the WISDM dataset, the DL-based and ML-based models attain accuracies of 91.4% and 87.3%, respectively, aligning with prior research findings. In conclusion, the proposed framework yields HAR models with commendable performance metrics, exhibiting its suitability for integration into the healthcare services system through AI-driven applications.
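The recursive feature elimination loop described above can be sketched in a few lines. This is a toy version, not the study's pipeline: the "estimator" here is just an absolute-covariance score per feature, standing in for the ML-based estimator that RFE normally wraps.

```python
def simple_scores(X, y):
    """Score each feature by |covariance with the label| (toy estimator)."""
    n = len(y)
    my = sum(y) / n
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mx = sum(col) / n
        cov = sum((col[i] - mx) * (y[i] - my) for i in range(n)) / n
        scores.append(abs(cov))
    return scores

def rfe(X, y, n_keep):
    """Recursively drop the lowest-scoring feature until n_keep remain."""
    active = list(range(len(X[0])))
    while len(active) > n_keep:
        sub = [[row[j] for j in active] for row in X]
        scores = simple_scores(sub, y)
        worst = scores.index(min(scores))
        active.pop(worst)           # eliminate the weakest feature
    return active

# Feature 1 tracks the label; features 0 and 2 are weak noise / constant.
X = [[0.1, 0.0, 5.0], [0.2, 1.0, 5.0], [0.1, 0.0, 5.0], [0.3, 1.0, 5.0]]
y = [0, 1, 0, 1]
print(rfe(X, y, 1))  # [1]
```

Rescoring after each elimination, rather than ranking once, is what makes the procedure "recursive": dropping one feature can change the usefulness of the rest.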
Human Activity Recognition (HAR) is an important way for lower limb exoskeleton robots to implement human-computer collaboration with users. Most of the existing methods in this field focus on a simple scenario: recognizing activities for specific users, which does not consider the individual differences among users and cannot adapt to new users. In order to improve the generalization ability of the HAR model, this paper proposes a novel method that combines theories from transfer learning and active learning to mitigate the cross-subject issue, so that lower limb exoskeleton robots can be used in more complex scenarios. First, a neural network based on convolutional neural networks (CNN) is designed, which can extract temporal and spatial features from sensor signals collected from different parts of the human body. It can recognize human activities with high accuracy after being trained on labeled data. Second, in order to improve the cross-subject adaptation ability of the pre-trained model, we design a cross-subject HAR algorithm based on sparse interrogation and label propagation. Through leave-one-subject-out validation against existing methods on two widely used public datasets, our method achieves average accuracies of 91.77% on DSAD and 80.97% on PAMAP2, respectively. The experimental results demonstrate the potential of implementing cross-subject HAR for lower limb exoskeleton robots.
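The leave-one-subject-out protocol used for the evaluation above is easy to state precisely. The sketch below is generic (the tuple layout and subject IDs are illustrative, not from the paper): each subject's data is held out in turn, so the model is always tested on an unseen person.

```python
def leave_one_subject_out(samples):
    """Yield (held-out subject, train set, test set) splits.

    `samples` is a list of (subject_id, features, label) tuples, as is
    typical for cross-subject HAR evaluation.
    """
    subjects = sorted({s for s, _, _ in samples})
    for held_out in subjects:
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        yield held_out, train, test

data = [("s1", [0.1], "walk"), ("s1", [0.2], "run"),
        ("s2", [0.3], "walk"), ("s3", [0.4], "sit")]
for subject, train, test in leave_one_subject_out(data):
    print(subject, len(train), len(test))
```

Averaging the per-subject test accuracies gives the cross-subject figures reported in such studies.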
Human Action Recognition (HAR) and pose estimation from videos have gained significant attention among research communities due to their applications in several areas, namely intelligent surveillance, human-robot interaction, robot vision, etc. Though considerable improvements have been made in recent days, the design of an effective and accurate action recognition model is still a difficult process owing to obstacles such as variations in camera angle, occlusion, background, movement speed, and so on. From the literature, it is observed that the temporal dimension is hard to deal with in the action recognition process. Convolutional neural network (CNN) models can be widely used to solve this. With this motivation, this study designs a novel key point extraction with deep convolutional neural networks based pose estimation (KPE-DCNN) model for activity recognition. The KPE-DCNN technique initially converts the input video into a sequence of frames, followed by a three-stage process, namely key point extraction, hyperparameter tuning, and pose estimation. In the key point extraction process, an OpenPose model is designed to compute accurate key points in the human pose. Then, an optimal DCNN model is developed to classify the human activity label based on the extracted key points. To improve the training process of the DCNN technique, the RMSProp optimizer is used to optimally adjust hyperparameters such as the learning rate, batch size, and epoch count. Experimental results on a benchmark dataset, the UCF Sports dataset, showed that the KPE-DCNN technique is able to achieve good results compared with benchmark algorithms such as CNN, DBN, SVM, STAL, T-CNN, and so on.
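The RMSProp update mentioned above has a compact closed form: each parameter's step is scaled by a running root-mean-square of its own gradient history. A minimal sketch (standard RMSProp, not the paper's training code; the quadratic objective is just for demonstration):

```python
def rmsprop_step(params, grads, cache, lr=0.01, decay=0.9, eps=1e-8):
    """One RMSProp update: scale each gradient by a running RMS of its history."""
    new_params, new_cache = [], []
    for p, g, c in zip(params, grads, cache):
        c = decay * c + (1 - decay) * g * g      # running mean of squared grads
        p = p - lr * g / (c ** 0.5 + eps)        # adaptively scaled step
        new_params.append(p)
        new_cache.append(c)
    return new_params, new_cache

# Minimise f(w) = w^2 (gradient 2w) starting from w = 1.0.
w, cache = [1.0], [0.0]
for _ in range(100):
    grads = [2 * w[0]]
    w, cache = rmsprop_step(w, grads, cache)
print(abs(w[0]) < 1.0)  # True: the parameter moves toward the minimum
```

Because the effective step size is roughly lr divided by the gradient's RMS, parameters with noisy or large gradients take proportionally smaller steps.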
At the present time, Human Activity Recognition (HAR) has been of considerable aid in the case of health monitoring and recovery. The exploitation of machine learning with an intelligent agent in the area of health informatics, gathered using HAR, augments decision-making quality and significance. Although much research has been conducted on Smart Healthcare Monitoring, a certain number of pitfalls remain, such as time, overhead, and falsification involved during analysis. Therefore, this paper proposes Statistical Partial Regression and Support Vector Intelligent Agent Learning (SPR-SVIAL) for Smart Healthcare Monitoring. At first, the Statistical Partial Regression Feature Extraction model is used for data preprocessing along with the dimensionality-reduced feature extraction process. Here, the input dataset, comprising continuous beat-to-beat heart data, triaxial accelerometer data, and psychological characteristics, was acquired from IoT wearable devices. To attain highly accurate Smart Healthcare Monitoring in less time, Partial Least Squares helps extract the dimensionality-reduced features. After that, with these resulting features, SVIAL is proposed for Smart Healthcare Monitoring with the help of machine learning and intelligent agents to minimize both analysis falsification and overhead. Experimental evaluation is carried out for factors such as time, overhead, accuracy, and false-positive rate over several instances. The quantitatively analyzed results indicate the better performance of our proposed SPR-SVIAL method when compared with two state-of-the-art methods.
Human Activity Recognition (HAR) has been made simple in recent years, thanks to recent advancements made in Artificial Intelligence (AI) techniques. These techniques are applied in several areas like security, surveillance, healthcare, human-robot interaction, and entertainment. Since a wearable sensor-based HAR system includes in-built sensors, human activities can be categorized based on sensor values. Further, it can also be employed in other applications such as gait diagnosis, observation of children's or adults' cognitive nature, stroke-patient hospital direction, examination of epilepsy and Parkinson's disease, etc. Recently developed AI techniques, especially Deep Learning (DL) models, can be deployed to accomplish effective outcomes in the HAR process. With this motivation, the current research paper focuses on designing an Intelligent Hyperparameter Tuned Deep Learning-based HAR (IHPTDL-HAR) technique for the healthcare environment. The proposed IHPTDL-HAR technique aims at recognizing human actions in the healthcare environment and helps patients in managing their healthcare service. In addition, the presented model makes use of a Hierarchical Clustering (HC)-based outlier detection technique to remove the outliers. The IHPTDL-HAR technique incorporates a DL-based Deep Belief Network (DBN) model to recognize the activities of users. Moreover, the Harris Hawks Optimization (HHO) algorithm is used for hyperparameter tuning of the DBN model. Finally, a comprehensive experimental analysis was conducted on a benchmark dataset and the results were examined under different aspects. The experimental results demonstrate that the proposed IHPTDL-HAR technique is a superior performer compared to other recent techniques under different measures.
The purpose of Human Activity Recognition (HAR) is to recognize human activities with sensors like accelerometers and gyroscopes. The normal research strategy is to obtain better HAR results by finding more efficient eigenvalues and classification algorithms. In this paper, we experimentally validate the HAR process and its various algorithms independently. On this basis, it is further proposed that, in addition to the necessary eigenvalues and intelligent algorithms, correct prior knowledge is even more critical. The prior knowledge mentioned here mainly refers to the physical understanding of the analyzed object, the sampling process, the sampling data, the HAR algorithm, etc. Thus, a solution is presented under the guidance of correct prior knowledge, using Back-Propagation neural networks (BP networks) and simple Convolutional Neural Networks (CNN). The results show that HAR can be achieved with 90%–100% accuracy. Further analysis shows that intelligent algorithms for pattern recognition and classification problems, typically represented by HAR, require correct prior knowledge to work effectively.
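Before any of the classifiers above see accelerometer data, the raw stream is usually segmented into fixed windows and reduced to "eigenvalues" (hand-crafted features). A generic sketch of that preprocessing step (window length, step, and the per-axis mean feature are illustrative choices, not this paper's settings):

```python
def sliding_windows(samples, win=4, step=2):
    """Split a stream of (x, y, z) accelerometer readings into fixed windows."""
    return [samples[i:i + win]
            for i in range(0, len(samples) - win + 1, step)]

def window_features(window):
    """Mean of each axis over one window (a classic hand-crafted feature)."""
    n = len(window)
    return [sum(s[axis] for s in window) / n for axis in range(3)]

# Four readings at rest, then four during movement (gravity ~9.8 on z).
stream = [(0.0, 0.1, 9.8)] * 4 + [(1.0, 0.2, 9.6)] * 4
wins = sliding_windows(stream, win=4, step=2)
print(len(wins))  # 3 overlapping windows
```

The choice of window length and features is exactly the kind of "prior knowledge" the abstract argues matters more than the downstream classifier.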
Human Activity Recognition (HAR) is an active research area due to its applications in pervasive computing, human-computer interaction, artificial intelligence, health care, and social sciences. Moreover, dynamic environments and anthropometric differences between individuals make it harder to recognize actions. This study focused on human activity in video sequences acquired with an RGB camera because of its vast range of real-world applications. It uses a two-stream ConvNet to extract spatial and temporal information and proposes a fine-tuned deep neural network. Moreover, the transfer learning paradigm is adopted to extract varied and fixed frames while reusing object identification information. Six state-of-the-art pre-trained models are exploited to find the best model for spatial feature extraction. For the temporal sequence, this study uses dense optical flow following the two-stream ConvNet and a Bidirectional Long Short-Term Memory (BiLSTM) network to capture long-term dependencies. Two state-of-the-art datasets, UCF101 and HMDB51, are used for evaluation purposes. In addition, seven state-of-the-art optimizers are used to fine-tune the proposed network parameters. Furthermore, this study utilizes an ensemble mechanism to aggregate spatial-temporal features using a four-stream Convolutional Neural Network (CNN), where two streams use RGB data and the other two use optical flow images. Finally, the proposed ensemble approach using max hard voting outperforms state-of-the-art methods with 96.30% and 90.07% accuracies on the UCF101 and HMDB51 datasets.
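The max-hard-voting fusion mentioned above is simple to state: each stream predicts a class label per sample, and the fused prediction is the majority label. A minimal sketch (the stream names and labels are illustrative, not from the paper):

```python
from collections import Counter

def hard_vote(stream_predictions):
    """Majority (hard) vote across per-stream class predictions.

    `stream_predictions` is a list of label lists, one per CNN stream
    (e.g. two RGB streams and two optical-flow streams).
    """
    n_samples = len(stream_predictions[0])
    fused = []
    for i in range(n_samples):
        votes = [preds[i] for preds in stream_predictions]
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

rgb_a  = ["run", "walk", "jump"]
rgb_b  = ["run", "walk", "walk"]
flow_a = ["run", "sit",  "jump"]
flow_b = ["sit", "walk", "jump"]
print(hard_vote([rgb_a, rgb_b, flow_a, flow_b]))  # ['run', 'walk', 'jump']
```

Hard voting discards the streams' confidence scores; soft voting (averaging class probabilities) is the usual alternative when those scores are available.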
Traditional indoor human activity recognition (HAR) is a time-series data classification problem and needs feature extraction. Presently, considerable attention has been given to the domain of HAR due to the enormous number of its real-time applications, namely surveillance by authorities, biometric user identification, and health monitoring of older people. The extensive usage of the Internet of Things (IoT) and wearable sensor devices has made the topic of HAR a vital subject in ubiquitous and mobile computing. The more commonly utilized inference and problem-solving technique in HAR systems has recently been deep learning (DL). This study develops a Modified Wild Horse Optimization with DL-Aided Symmetric Human Activity Recognition (MWHODL-SHAR) model. The major intention of the MWHODL-SHAR model lies in the recognition of symmetric activities, namely jogging, walking, standing, sitting, etc. In the presented MWHODL-SHAR technique, the human activity data is pre-processed in various stages to make it compatible for further processing. A convolutional neural network with an attention-based long short-term memory (CNN-ALSTM) model is applied for activity recognition. The MWHO algorithm is utilized as a hyperparameter tuning strategy to improve the detection rate of the CNN-ALSTM algorithm. The experimental validation of the MWHODL-SHAR technique is simulated using a benchmark dataset. An extensive comparison study revealed the betterment of the MWHODL-SHAR technique over other recent approaches.
With the rapid advancement of wearable devices, Human Activity Recognition (HAR) based on these devices has emerged as a prominent research field. The objective of this study is to enhance the recognition performance of HAR by proposing an LSTM-1DCNN recognition algorithm that utilizes a single triaxial accelerometer. This algorithm comprises two branches: one branch consists of a Long Short-Term Memory network (LSTM), while the other parallel branch incorporates a one-dimensional Convolutional Neural Network (1DCNN). The parallel architecture of LSTM-1DCNN initially extracts spatial and temporal features from the accelerometer data separately, which are then concatenated and fed into a fully connected neural network for information fusion. In the LSTM-1DCNN architecture, the 1DCNN branch primarily focuses on extracting spatial features during convolution operations, whereas the LSTM branch mainly captures temporal features. Nine sets of accelerometer data from five publicly available HAR datasets are employed for training and evaluation purposes. The performance of the proposed LSTM-1DCNN model is compared with five other HAR algorithms, including Decision Tree, Random Forest, Support Vector Machine, 1DCNN, and LSTM, on these five public datasets. Experimental results demonstrate that the F1-score achieved by the proposed LSTM-1DCNN ranges from 90.36% to 99.68%, with a mean value of 96.22% and a standard deviation of 0.03 across all evaluated metrics on these five public datasets, significantly outperforming the other HAR algorithms in terms of the evaluation metrics used in this study. Finally, the proposed LSTM-1DCNN is validated in real-world applications by collecting acceleration data of seven human activities for training and testing purposes. Subsequently, the trained HAR algorithm is deployed on Android phones to evaluate its performance. Experimental results demonstrate that the proposed LSTM-1DCNN algorithm achieves an impressive F1-score of 97.67% on our self-built dataset. In conclusion, the fusion of temporal and spatial information in the measured data contributes to the excellent HAR performance and robustness exhibited by the proposed LSTM-1DCNN architecture.
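The two-branch, concatenate-then-classify structure described above can be caricatured without any deep learning framework. In this sketch the "branches" are hand-written feature functions standing in for the 1DCNN and LSTM (everything here is illustrative, not the paper's network):

```python
def spatial_features(window):
    """Stand-in for the 1DCNN branch: per-axis energy of one window."""
    return [sum(v * v for v in axis) for axis in window]

def temporal_features(window):
    """Stand-in for the LSTM branch: net change along each axis."""
    return [sum(axis[i + 1] - axis[i] for i in range(len(axis) - 1))
            for axis in window]

def fused_vector(window):
    """Concatenate both branches before the fully connected classifier."""
    return spatial_features(window) + temporal_features(window)

# One window of triaxial data, stored axis-major: [x-axis, y-axis, z-axis].
window = [[0.0, 1.0, 0.0], [0.5, 0.5, 0.5], [9.8, 9.7, 9.9]]
v = fused_vector(window)
print(len(v))  # 3 spatial + 3 temporal = 6 fused features
```

In the real architecture both branches are learned and the fused vector feeds a fully connected network, but the key design choice, computing spatial and temporal representations in parallel and concatenating them, is the same.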
Human-Computer Interaction (HCI) is a sub-area within computer science focused on the study of communication between people (users) and computers and the evaluation, implementation, and design of user interfaces for computer systems. HCI has accomplished effective incorporation of the human factors and software engineering of computing systems through the methods and concepts of cognitive science. Usability is an aspect of HCI dedicated to guaranteeing that human-computer communication is, amongst other things, efficient, effective, and sustaining for the user. Meanwhile, Human Activity Recognition (HAR) aims to identify actions from a sequence of observations of subjects' activities and the environmental conditions. Vision-based HAR research is the basis of several applications involving health care, HCI, and video surveillance. This article develops a Fire Hawk Optimizer with Deep Learning Enabled Activity Recognition (FHODL-AR) for HCI-driven usability. In the presented FHODL-AR technique, the input images are investigated for the identification of different human activities. For feature extraction, a modified SqueezeNet model is introduced by including a few bypass connections among the Fire modules of SqueezeNet. Besides, the FHO algorithm is utilized as a hyperparameter optimization algorithm, which in turn boosts the classification performance. To detect and categorize different kinds of activities, a probabilistic neural network (PNN) classifier is applied. The experimental validation of the FHODL-AR technique is tested using benchmark datasets, and the outcomes reported improvements of the FHODL-AR technique over other recent approaches.
Recognition of human activity is one of the most exciting aspects of time-series classification, with substantial practical and theoretical implications. Recent evidence indicates that activity recognition from wearable sensors is an effective technique for tracking elderly adults and children in indoor and outdoor environments. Consequently, researchers have demonstrated considerable passion for developing cutting-edge deep learning systems capable of exploiting unprocessed sensor data from wearable devices and generating practical decision assistance in many contexts. This study provides a deep learning-based approach for recognizing indoor and outdoor movement utilizing an enhanced deep pyramidal residual model called SenPyramidNet and motion information from wearable sensors (accelerometer and gyroscope). The suggested technique develops a residual unit based on a deep pyramidal residual network and introduces the concept of a pyramidal residual unit to increase detection capability. The proposed deep learning-based model was assessed using the publicly available 19Nonsens dataset, which gathered motion signals from various indoor and outdoor activities, including exercises for various body parts. The experimental findings demonstrate that the proposed approach can efficiently reuse characteristics and has achieved an identification accuracy of 96.37% for indoor and 97.25% for outdoor activity. Moreover, comparison experiments demonstrate that SenPyramidNet surpasses other cutting-edge deep learning models in terms of accuracy and F1-score. Furthermore, this study explores the influence of several wearable sensors on indoor and outdoor action recognition ability.
With the improvement of people's living standards, the demand for health monitoring and exercise detection is increasing. It is of great significance to study human activity recognition (HAR) methods that differ from traditional feature extraction methods. This article uses convolutional neural network (CNN) algorithms in deep learning to automatically extract features of activities related to human life. We used a stochastic gradient descent algorithm to optimize the parameters of the CNN. The trained network model is compressed using STM32CubeMX-AI. Finally, this article introduces the use of neural networks on embedded devices to recognize six human activities of daily life: sitting, standing, walking, jogging, going upstairs, and going downstairs. An acceleration sensor carrying human activity information is used to obtain the relevant characteristics of the activity, thereby solving the HAR problem. By drawing the accuracy curve, loss function curve, and confusion matrix diagram of the training model, the recognition effect of the convolutional neural network can be seen more intuitively. After comparing the average accuracy of each set of experiments and the test set of the best model obtained from them, the best model is selected.
With the emergence of sensor networks, research on sensor-based activity recognition has attracted much attention. Many existing methods cannot deal well with cases that contain hundreds of sensors, and their recognition accuracy needs further improvement. A novel framework for recognizing human activities in a smart home is presented. First, small, easy-to-install, and low-cost state-change sensors were adopted for recording state changes or use of objects. Then a Bayesian belief network (BBN) was applied to conduct activity recognition by modeling statistical dependencies between sensor data and human activity. An edge-encode genetic algorithm (EEGA) approach was proposed to resolve the difficulties in structure learning of the BBN model under a high-dimensional space and a large dataset. Finally, some experiments were made using one publicly available dataset. The experimental results show that the EEGA algorithm is effective and efficient in learning the BBN structure and outperforms the conventional approaches. In conducting human activity recognition on the testing samples, the BBN is effective and outperforms the naive Bayesian network (NBN) and multiclass naive Bayes classifier (MNBC).
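The naive Bayes baseline that the BBN is compared against is worth seeing concretely: with binary state-change sensors, each activity gets a prior and per-sensor on/off likelihoods. This is a generic Laplace-smoothed sketch (the sensor layout and activity labels are invented for illustration):

```python
import math
from collections import defaultdict

def train_nb(samples, alpha=1.0):
    """Laplace-smoothed naive Bayes over binary state-change sensors."""
    counts = defaultdict(lambda: [0, 0])   # (activity, sensor) -> [#off, #on]
    prior = defaultdict(int)
    for sensors, activity in samples:
        prior[activity] += 1
        for j, v in enumerate(sensors):
            counts[(activity, j)][v] += 1
    return prior, counts, alpha

def classify(model, sensors):
    """Pick the activity maximising log prior + sum of log likelihoods."""
    prior, counts, alpha = model
    total = sum(prior.values())
    best, best_score = None, float("-inf")
    for act, n in prior.items():
        score = math.log(n / total)
        for j, v in enumerate(sensors):
            c = counts[(act, j)][v]
            score += math.log((c + alpha) / (n + 2 * alpha))
        if score > best_score:
            best, best_score = act, score
    return best

data = [([1, 0, 0], "cook"), ([1, 1, 0], "cook"),
        ([0, 0, 1], "sleep"), ([0, 0, 1], "sleep")]
model = train_nb(data)
print(classify(model, [1, 0, 0]))  # 'cook'
```

A BBN generalizes this by learning dependencies *between* sensors, which is exactly the structure-learning problem the EEGA is proposed to solve.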
Human pose estimation (HPE) is a procedure for determining the structure of the body pose, and it is considered a challenging issue in the computer vision (CV) community. HPE finds its applications in several fields, namely activity recognition and human-computer interfaces. Despite the benefits of HPE, it is still a challenging process due to variations in visual appearance, lighting, occlusions, dimensionality, etc. To resolve these issues, this paper presents a squirrel search optimization with a deep convolutional neural network for HPE (SSDCNN-HPE) technique. The major intention of the SSDCNN-HPE technique is to identify the human pose accurately and efficiently. Primarily, the video frame conversion process is performed and pre-processing takes place via a bilateral filtering-based noise removal process. Then, the EfficientNet model is applied to identify the body points of a person with no problem constraints. Besides, the hyperparameter tuning of the EfficientNet model takes place through the squirrel search algorithm (SSA). In the final stage, the multiclass support vector machine (M-SVM) technique is utilized for the identification and classification of human poses. The design of bilateral filtering followed by the SSA-based EfficientNet model for HPE depicts the novelty of the work. To demonstrate the enhanced outcomes of the SSDCNN-HPE approach, a series of simulations are executed. The experimental results reported the betterment of the SSDCNN-HPE system over recent existing techniques in terms of different measures.
Human activity recognition is commonly used in several Internet of Things applications to recognize different contexts and respond to them. Deep learning has gained momentum for identifying activities through sensors, smartphones, or even surveillance cameras. However, it is often difficult to train deep learning models on constrained IoT devices. The focus of this paper is to propose an alternative model by constructing a Deep Learning-based Human Activity Recognition framework for edge computing, which we call DL-HAR. The goal of this framework is to exploit the capabilities of cloud computing to train a deep learning model and deploy it on less powerful edge devices for recognition. The idea is to conduct the training of the model in the cloud and distribute it to the edge nodes. We demonstrate how DL-HAR can perform human activity recognition at the edge while improving efficiency and accuracy. In order to evaluate the proposed framework, we conducted a comprehensive set of experiments to validate the applicability of DL-HAR. Experimental results on a benchmark dataset show a significant increase in performance compared with state-of-the-art models.
This paper proposes a hybrid approach for recognizing human activities from trajectories. First, an improved hidden Markov model (HMM) parameter learning algorithm, HMM-PSO, is proposed, which achieves a better balance between global exploration and local exploitation through a nonlinear update strategy and a repulsion operation. Then, the event probability sequence (EPS), which consists of a series of events, is computed to describe the unique characteristics of human activities. The analysis of EPS indicates that it is robust to changes in viewing direction and contributes to improving the recognition rate. Finally, the effectiveness of the proposed approach is evaluated by data experiments on currently popular datasets.
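Once an HMM's parameters are learned (by PSO or otherwise), scoring a trajectory against an activity model uses the standard forward algorithm. A minimal discrete-observation sketch (the two "activity phases" and the probability values are invented for illustration):

```python
def forward(obs, pi, A, B):
    """HMM forward algorithm: likelihood of an observation sequence.

    pi: initial state distribution, A: state transition matrix,
    B: per-state emission distributions over discrete symbols.
    """
    n_states = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n_states)]
    for t in range(1, len(obs)):
        alpha = [sum(alpha[s] * A[s][s2] for s in range(n_states)) * B[s2][obs[t]]
                 for s2 in range(n_states)]
    return sum(alpha)

# Two hidden "activity phases", two observable trajectory events (0 and 1).
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
p = forward([0, 0, 1], pi, A, B)
print(p)  # likelihood of the event sequence under this model
```

Recognition then amounts to training one HMM per activity class and assigning a new trajectory to the model with the highest likelihood.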
Elderly or disabled people can be supported by a human activity recognition (HAR) system that monitors their activity patterns and intervenes in case changes in their behaviors or critical events occur. An automated HAR system could assist these persons to have a more independent life. Providing appropriate and accurate data regarding the activity is the most crucial computation task in the activity recognition system. With the fast development of neural networks, computing, and machine learning algorithms, HAR systems based on wearable sensors have gained popularity in several areas, such as medical services, smart homes, improving human communication with computers, security systems, healthcare for the elderly, mechanization in industry, robot monitoring systems, monitoring athlete training, and rehabilitation systems. In this view, this study develops an improved pelican optimization with deep transfer learning enabled HAR (IPODTL-HAR) system for disabled persons. The major goal of the IPODTL-HAR method is recognizing the human activities of disabled persons and improving their quality of living. The presented IPODTL-HAR model follows data pre-processing for improving the quality of the data. Besides, the EfficientNet model is applied to derive a useful set of feature vectors, and the hyperparameters are adjusted by the use of the Nadam optimizer. Finally, the IPO with deep belief network (DBN) model is utilized for the recognition and classification of human activities. The utilization of the Nadam optimizer and the IPO algorithm helps in effectually tuning the hyperparameters related to the EfficientNet and DBN models, respectively. The experimental validation of the IPODTL-HAR method is tested using a benchmark dataset. An extensive comparison study highlighted the betterment of the IPODTL-HAR model over recent state-of-the-art HAR approaches in terms of different measures.
Accidents are still an issue in intelligent transportation systems (ITS), despite developments in self-driving technology. Drivers who engage in risky behavior account for more than half of all road accidents. As a result, reckless driving behaviour can cause congestion and delays. Computer vision and multimodal sensors have been used to study driving behaviour categorization to lessen this problem. Previous research has also collected and analyzed a wide range of data, including electroencephalography (EEG), electrooculography (EOG), and photographs of the driver's face. On the other hand, driving a car is a complicated action that requires a wide range of body movements. In this work, we propose a ResNet-SE model, an efficient deep learning classifier for driving activity classification based on signal data obtained in real-world traffic conditions using smart glasses. End-to-end learning can be achieved by combining residual networks and channel attention approaches into a single learning model. Sensor data from 3-point EOG electrodes, a tri-axial accelerometer, and a tri-axial gyroscope from the Smart Glasses dataset was utilized in this study. We performed various experiments and compared the proposed model to baseline deep learning algorithms (CNNs and LSTMs) to demonstrate its performance. According to the research results, the proposed model outperforms the previous deep learning models in this domain with an accuracy of 99.17% and an F1-score of 98.96%.
Inpatient falls from beds in hospitals are a common problem. Such falls may result in severe injuries. This problem can be addressed by continuous monitoring of patients using cameras. Recent advancements in deep learning-based video analytics have made this task of fall detection more effective and efficient. Along with fall detection, monitoring of different activities of the patients is also of significant concern to assess the improvement in their health. High computation-intensive models are required to monitor every action of the patient precisely. This requirement limits the applicability of such networks. Hence, to keep the model lightweight, the already designed fall detection networks can be extended to monitor the general activities of the patients along with the fall detection. Motivated by the same notion, we propose a novel, lightweight, and efficient patient activity monitoring system that broadly classifies the patients' activities into fall, activity, and rest classes based on their poses. The whole network comprises three sub-networks, namely a Convolutional Neural Network (CNN) based video compression network, a Lightweight Pose Network (LPN), and a Residual Network (ResNet) Mixer block-based activity recognition network. The compression network compresses the video streams using deep learning networks for efficient storage and retrieval; after that, LPN estimates human poses. Finally, the activity recognition network classifies the patients' activities based on their poses. The proposed system shows an overall accuracy of approx. 99.7% over a standard dataset, with 99.63% fall detection accuracy, and efficiently monitors different events, which may help monitor falls and improve the inpatients' health.
Funding: the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDC02040300).
Abstract: RFID-based human activity recognition (HAR) attracts attention due to its convenience, noninvasiveness, and privacy protection. Existing RFID-based HAR methods use modeling, CNN, or LSTM to extract features effectively. Still, they have shortcomings: 1) requiring complex hand-crafted data cleaning processes and 2) only addressing single-person activity recognition based on specific RF signals. To solve these problems, this paper proposes a novel device-free method based on a Time-streaming Multiscale Transformer called TransTM. This model leverages the Transformer's powerful data fitting capabilities to take raw RFID RSSI data as input without pre-processing. Concretely, we propose a multiscale convolutional hybrid Transformer to capture behavioral features that recognizes single-human activities and human-to-human interactions. Compared with existing CNN- and LSTM-based methods, the Transformer-based method has more data fitting power, generalization, and scalability. Furthermore, using RF signals, our method achieves an excellent classification effect on human behavior-based classification tasks. Experimental results on the actual RFID datasets show that this model achieves a high average recognition accuracy (99.1%). The dataset we collected for detecting RFID-based indoor human activities will be published.
Funding: funded by the National Science and Technology Council, Taiwan (Grant No. NSTC 112-2121-M-039-001) and by China Medical University (Grant No. CMU112-MF-79).
Abstract: Artificial intelligence (AI) technology has become integral in the realm of medicine and healthcare, particularly in human activity recognition (HAR) applications such as fitness and rehabilitation tracking. This study introduces a robust coupling analysis framework that integrates four AI-enabled models, combining both machine learning (ML) and deep learning (DL) approaches to evaluate their effectiveness in HAR. The analytical dataset comprises 561 features sourced from the UCI-HAR database, forming the foundation for training the models. Additionally, the MHEALTH database is employed to replicate the modeling process for comparative purposes, while inclusion of the WISDM database, renowned for its challenging features, supports the framework's resilience and adaptability. The ML-based models employ methodologies including the adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM), and random forest (RF) for data training. In contrast, a DL-based model utilizes a one-dimensional convolutional neural network (1dCNN) to automate feature extraction. Furthermore, the recursive feature elimination (RFE) algorithm, which drives an ML-based estimator to eliminate low-participation features, helps identify the optimal features for enhancing model performance. The best accuracies of the ANFIS, SVM, RF, and 1dCNN models with a meticulous feature-selection process reach around 90%, 96%, 91%, and 93%, respectively. Comparative analysis using the MHEALTH dataset showcases the 1dCNN model's remarkable perfect accuracy (100%), while the RF, SVM, and ANFIS models equipped with selected features achieve accuracies of 99.8%, 99.7%, and 96.5%, respectively. Finally, when applied to the WISDM dataset, the DL-based and ML-based models attain accuracies of 91.4% and 87.3%, respectively, aligning with prior research findings. In conclusion, the proposed framework yields HAR models with commendable performance metrics, exhibiting its suitability for integration into the healthcare services system through AI-driven applications.
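The RFE step described in this abstract, an ML-based estimator repeatedly eliminating the lowest-participation feature, admits a compact sketch. The scorer below (absolute correlation of each feature with the label) is a hypothetical stand-in for the paper's actual estimator; only the drop-the-weakest-feature loop is the point.

```python
# Sketch of recursive feature elimination (RFE): repeatedly remove the feature
# whose score is lowest until n_keep features remain. The scoring function is
# an illustrative stand-in, not the estimator used in the paper.

def feature_scores(X, y):
    """Absolute Pearson correlation of each feature column with the labels."""
    n = len(y)
    mean_y = sum(y) / n
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mean_x = sum(col) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(col, y))
        var_x = sum((a - mean_x) ** 2 for a in col) or 1e-12
        var_y = sum((b - mean_y) ** 2 for b in y) or 1e-12
        scores.append(abs(cov) / (var_x * var_y) ** 0.5)
    return scores

def rfe(X, y, n_keep):
    """Return indices of the n_keep surviving features."""
    keep = list(range(len(X[0])))
    while len(keep) > n_keep:
        sub = [[row[j] for j in keep] for row in X]
        scores = feature_scores(sub, y)
        keep.pop(scores.index(min(scores)))  # drop the weakest feature
    return keep
```

In practice the per-iteration scores would come from the fitted estimator (e.g., feature weights), but the elimination loop is the same.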
Abstract: Human Activity Recognition (HAR) is an important way for lower limb exoskeleton robots to implement human-computer collaboration with users. Most of the existing methods in this field focus on a simple scenario: recognizing activities for specific users, which does not consider the individual differences among users and cannot adapt to new users. In order to improve the generalization ability of the HAR model, this paper proposes a novel method that combines theories from transfer learning and active learning to mitigate the cross-subject issue, so that lower limb exoskeleton robots can be used in more complex scenarios. First, a neural network based on convolutional neural networks (CNN) is designed, which can extract temporal and spatial features from sensor signals collected from different parts of the human body. It can recognize human activities with high accuracy after being trained on labeled data. Second, in order to improve the cross-subject adaptation ability of the pre-trained model, we design a cross-subject HAR algorithm based on sparse interrogation and label propagation. Through leave-one-subject-out validation on two widely used public datasets with existing methods, our method achieves average accuracies of 91.77% on DSAD and 80.97% on PAMAP2, respectively. The experimental results demonstrate the potential of implementing cross-subject HAR for lower limb exoskeleton robots.
Abstract: Human Action Recognition (HAR) and pose estimation from videos have gained significant attention among research communities due to their application in several areas, namely intelligent surveillance, human-robot interaction, robot vision, etc. Though considerable improvements have been made in recent days, the design of an effective and accurate action recognition model is still a difficult process owing to obstacles such as variations in camera angle, occlusion, background, movement speed, and so on. From the literature, it is observed that the temporal dimension is hard to deal with in the action recognition process. Convolutional neural network (CNN) models could be widely used to solve this. With this motivation, this study designs a novel key point extraction with deep convolutional neural networks based pose estimation (KPE-DCNN) model for activity recognition. The KPE-DCNN technique initially converts the input video into a sequence of frames, followed by a three-stage process, namely key point extraction, hyperparameter tuning, and pose estimation. In the key point extraction process, an OpenPose model is designed to compute the accurate key points in the human pose. Then, an optimal DCNN model is developed to classify the human activity label based on the extracted key points. For improving the training process of the DCNN technique, the RMSProp optimizer is used to optimally adjust the hyperparameters such as learning rate, batch size, and epoch count. The experimental results, tested on a benchmark dataset like the UCF sports dataset, showed that the KPE-DCNN technique is able to achieve good results compared with benchmark algorithms like CNN, DBN, SVM, STAL, T-CNN, and so on.
Funding: supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R194), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: In the present time, Human Activity Recognition (HAR) has been of considerable aid in the case of health monitoring and recovery. The exploitation of machine learning with an intelligent agent in the area of health informatics gathered using HAR augments the decision-making quality and significance. Although many research works have been conducted on Smart Healthcare Monitoring, there remain a certain number of pitfalls, such as time, overhead, and falsification involved during analysis. Therefore, this paper proposes Statistical Partial Regression and Support Vector Intelligent Agent Learning (SPR-SVIAL) for Smart Healthcare Monitoring. At first, the Statistical Partial Regression Feature Extraction model is used for data preprocessing along with the dimensionality-reduced feature extraction process. Here, the input dataset, comprising continuous beat-to-beat heart data, triaxial accelerometer data, and psychological characteristics, was acquired from IoT wearable devices. To attain highly accurate Smart Healthcare Monitoring with less time, Partial Least Squares helps extract the dimensionality-reduced features. After that, with these resulting features, SVIAL is proposed for Smart Healthcare Monitoring with the help of machine learning and intelligent agents to minimize both analysis falsification and overhead. Experimental evaluation is carried out for factors such as time, overhead, and false positive rate accuracy concerning several instances. The quantitatively analyzed results indicate the better performance of our proposed SPR-SVIAL method when compared with two state-of-the-art methods.
Funding: supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.
Abstract: Human Activity Recognition (HAR) has been made simple in recent years, thanks to advancements made in Artificial Intelligence (AI) techniques. These techniques are applied in several areas like security, surveillance, healthcare, human-robot interaction, and entertainment. Since a wearable sensor-based HAR system includes in-built sensors, human activities can be categorized based on sensor values. Further, it can also be employed in other applications such as gait diagnosis, observation of children's/adults' cognitive nature, stroke-patient hospital direction, Epilepsy and Parkinson's disease examination, etc. Recently developed Artificial Intelligence (AI) techniques, especially Deep Learning (DL) models, can be deployed to accomplish effective outcomes in the HAR process. With this motivation, the current research paper focuses on designing an Intelligent Hyperparameter Tuned Deep Learning-based HAR (IHPTDL-HAR) technique in a healthcare environment. The proposed IHPTDL-HAR technique aims at recognizing human actions in a healthcare environment and helps patients in managing their healthcare service. In addition, the presented model makes use of a Hierarchical Clustering (HC)-based outlier detection technique to remove the outliers. The IHPTDL-HAR technique incorporates a DL-based Deep Belief Network (DBN) model to recognize the activities of users. Moreover, the Harris Hawks Optimization (HHO) algorithm is used for hyperparameter tuning of the DBN model. Finally, a comprehensive experimental analysis was conducted upon a benchmark dataset and the results were examined under different aspects. The experimental results demonstrate that the proposed IHPTDL-HAR technique is a superior performer compared to other recent techniques under different measures.
Funding: supported by the Guangxi University of Science and Technology, Liuzhou, China, sponsored by the Researchers Supporting Project (No. XiaoKeBo21Z27, The Construction of Electronic Information Team Supported by Artificial Intelligence Theory and Three-Dimensional Visual Technology, Yuesheng Zhao); supported by the Key Laboratory for Space-based Integrated Information Systems 2022 Laboratory Funding Program (No. SpaceInfoNet20221120, Research on the Key Technologies of Intelligent Spatio-Temporal Data Engine Based on Space-Based Information Network, Yuesheng Zhao); supported by the 2023 Guangxi University Young and Middle-Aged Teachers' Basic Scientific Research Ability Improvement Project (No. 2023KY0352, Research on the Recognition of Psychological Abnormalities in College Students Based on the Fusion of Pulse and EEG Techniques, Yutong Lu).
Abstract: The purpose of Human Activity Recognition (HAR) is to recognize human activities with sensors like accelerometers and gyroscopes. The usual research strategy is to obtain better HAR results by finding more efficient eigenvalues and classification algorithms. In this paper, we experimentally validate the HAR process and its various algorithms independently. On this basis, it is further proposed that, in addition to the necessary eigenvalues and intelligent algorithms, correct prior knowledge is even more critical. The prior knowledge mentioned here mainly refers to the physical understanding of the analyzed object, the sampling process, the sampling data, the HAR algorithm, etc. Thus, a solution is presented under the guidance of correct prior knowledge, using Back-Propagation neural networks (BP networks) and simple Convolutional Neural Networks (CNN). The results show that HAR can be achieved with 90%–100% accuracy. Further analysis shows that intelligent algorithms for pattern recognition and classification problems, typically represented by HAR, require correct prior knowledge to work effectively.
Funding: supported by Universiti Sains Malaysia (USM) under FRGS grant number FRGS/1/2020/TK03/USM/02/1 and by the School of Computer Sciences, USM.
Abstract: Human Activity Recognition (HAR) is an active research area due to its applications in pervasive computing, human-computer interaction, artificial intelligence, health care, and social sciences. Moreover, dynamic environments and anthropometric differences between individuals make it harder to recognize actions. This study focused on human activity in video sequences acquired with an RGB camera because of its vast range of real-world applications. It uses a two-stream ConvNet to extract spatial and temporal information and proposes a fine-tuned deep neural network. Moreover, the transfer learning paradigm is adopted to extract varied and fixed frames while reusing object identification information. Six state-of-the-art pre-trained models are exploited to find the best model for spatial feature extraction. For the temporal sequence, this study uses dense optical flow following the two-stream ConvNet and Bidirectional Long Short-Term Memory (BiLSTM) to capture long-term dependencies. Two state-of-the-art datasets, UCF101 and HMDB51, are used for evaluation purposes. In addition, seven state-of-the-art optimizers are used to fine-tune the proposed network parameters. Furthermore, this study utilizes an ensemble mechanism to aggregate spatial-temporal features using a four-stream Convolutional Neural Network (CNN), where two streams use RGB data while the others use optical flow images. Finally, the proposed ensemble approach using max hard voting outperforms state-of-the-art methods with 96.30% and 90.07% accuracies on the UCF101 and HMDB51 datasets.
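The max hard voting used to combine the four streams above can be sketched in a few lines. The tie-breaking rule (prefer the earliest-voting stream) is an illustrative assumption, not necessarily the paper's choice.

```python
from collections import Counter

def hard_vote(predictions):
    """Majority (hard) vote across per-stream class labels for one sample."""
    counts = Counter(predictions)
    top = max(counts.values())
    # break ties deterministically by preferring the earliest-voting stream
    for label in predictions:
        if counts[label] == top:
            return label

def ensemble_predict(stream_outputs):
    """stream_outputs[i][s] is stream s's predicted label for sample i."""
    return [hard_vote(per_sample) for per_sample in stream_outputs]
```

Soft voting (averaging class probabilities) is the common alternative; hard voting only needs each stream's final label.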
Abstract: Traditional indoor human activity recognition (HAR) is a time-series data classification problem and needs feature extraction. Presently, considerable attention has been given to the domain of HAR due to the enormous number of its uses in real-time applications, namely surveillance by authorities, biometric user identification, and health monitoring of older people. The extensive usage of the Internet of Things (IoT) and wearable sensor devices has made the topic of HAR a vital subject in ubiquitous and mobile computing. The more commonly utilized inference and problem-solving technique in the HAR system has recently been deep learning (DL). This study develops a Modified Wild Horse Optimization with DL-Aided Symmetric Human Activity Recognition (MWHODL-SHAR) model. The major intention of the MWHODL-SHAR model lies in the recognition of symmetric activities, namely jogging, walking, standing, sitting, etc. In the presented MWHODL-SHAR technique, the human activity data is pre-processed in various stages to make it compatible for further processing. A convolutional neural network with an attention-based long short-term memory (CNN-ALSTM) model is applied for activity recognition. The MWHO algorithm is utilized as a hyperparameter tuning strategy to improve the detection rate of the CNN-ALSTM algorithm. The experimental validation of the MWHODL-SHAR technique is simulated using a benchmark dataset. An extensive comparison study revealed the betterment of the MWHODL-SHAR technique over other recent approaches.
Funding: supported by the Guangxi University of Science and Technology, Liuzhou, China, sponsored by the Researchers Supporting Project (No. XiaoKeBo21Z27, The Construction of Electronic Information Team Supported by Artificial Intelligence Theory and Three-Dimensional Visual Technology, Yuesheng Zhao); supported by the 2022 Laboratory Fund Project of the Key Laboratory of Space-Based Integrated Information System (No. SpaceInfoNet20221120, Research on the Key Technologies of Intelligent Spatiotemporal Data Engine Based on Space-Based Information Network, Yuesheng Zhao); supported by the 2023 Guangxi University Young and Middle-Aged Teachers' Basic Scientific Research Ability Improvement Project (No. 2023KY0352, Research on the Recognition of Psychological Abnormalities in College Students Based on the Fusion of Pulse and EEG Techniques, Yutong Luo).
Abstract: With the rapid advancement of wearable devices, Human Activity Recognition (HAR) based on these devices has emerged as a prominent research field. The objective of this study is to enhance the recognition performance of HAR by proposing an LSTM-1DCNN recognition algorithm that utilizes a single triaxial accelerometer. This algorithm comprises two branches: one branch consists of a Long Short-Term Memory network (LSTM), while the other parallel branch incorporates a one-dimensional Convolutional Neural Network (1DCNN). The parallel architecture of LSTM-1DCNN initially extracts spatial and temporal features from the accelerometer data separately, which are then concatenated and fed into a fully connected neural network for information fusion. In the LSTM-1DCNN architecture, the 1DCNN branch primarily focuses on extracting spatial features during convolution operations, whereas the LSTM branch mainly captures temporal features. Nine sets of accelerometer data from five publicly available HAR datasets are employed for training and evaluation purposes. The performance of the proposed LSTM-1DCNN model is compared with five other HAR algorithms, including Decision Tree, Random Forest, Support Vector Machine, 1DCNN, and LSTM, on these five public datasets. Experimental results demonstrate that the F1-score achieved by the proposed LSTM-1DCNN ranges from 90.36% to 99.68%, with a mean value of 96.22% and standard deviation of 0.03 across all evaluated metrics on these five public datasets, significantly outperforming other existing HAR algorithms. Finally, the proposed LSTM-1DCNN is validated in real-world applications by collecting acceleration data of seven human activities for training and testing purposes. Subsequently, the trained HAR algorithm is deployed on Android phones to evaluate its performance. Experimental results demonstrate that the proposed LSTM-1DCNN algorithm achieves an impressive F1-score of 97.67% on our self-built dataset. In conclusion, the fusion of temporal and spatial information in the measured data contributes to the excellent HAR performance and robustness exhibited by the proposed LSTM-1DCNN architecture.
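The parallel-branch fusion described in this abstract (two extractors run side by side on the same window, outputs concatenated before a fully connected fusion stage) can be illustrated with toy stand-ins. The first-difference and smoothing filters below are hypothetical substitutes for the trained LSTM and 1DCNN branches; only the run-in-parallel-then-concatenate structure mirrors the architecture.

```python
def temporal_features(window):
    """LSTM-branch stand-in: first differences capture temporal change."""
    return [b - a for a, b in zip(window, window[1:])]

def spatial_features(window, kernel=(0.25, 0.5, 0.25)):
    """1DCNN-branch stand-in: a small smoothing convolution over the window."""
    k = len(kernel)
    return [sum(w * x for w, x in zip(kernel, window[i:i + k]))
            for i in range(len(window) - k + 1)]

def fused_features(window):
    """Concatenate both branches, as the parallel architecture does before
    the fully connected fusion network."""
    return temporal_features(window) + spatial_features(window)
```

In the real model, the concatenated vector would feed a fully connected classifier rather than being used directly.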
Abstract: Human-Computer Interaction (HCI) is a sub-area within computer science focused on the study of communication between people (users) and computers and the evaluation, implementation, and design of user interfaces for computer systems. HCI has accomplished effective incorporation of human factors and software engineering of computing systems through the methods and concepts of cognitive science. Usability is an aspect of HCI dedicated to guaranteeing that human-computer communication is, amongst other things, efficient, effective, and sustaining for the user. Simultaneously, the aim of human activity recognition (HAR) is to identify actions from a sequence of observations on the activities of subjects and the environmental conditions. Vision-based HAR research is the basis of several applications involving health care, HCI, and video surveillance. This article develops a Fire Hawk Optimizer with Deep Learning Enabled Activity Recognition (FHODL-AR) for HCI-driven usability. In the presented FHODL-AR technique, the input images are investigated for the identification of different human activities. For feature extraction, a modified SqueezeNet model is introduced by the inclusion of a few bypass connections to the SqueezeNet among Fire modules. Besides, the FHO algorithm is utilized as a hyperparameter optimization algorithm, which in turn boosts the classification performance. To detect and categorize different kinds of activities, a probabilistic neural network (PNN) classifier is applied. The experimental validation of the FHODL-AR technique is tested using benchmark datasets, and the outcomes reported the improvements of the FHODL-AR technique over other recent approaches.
Funding: supported by the Thailand Science Research and Innovation Fund, the University of Phayao (Grant No. FF66-UoE001), and King Mongkut's University of Technology North Bangkok, Contract No. KMUTNB-66-KNOW-05.
Abstract: Recognition of human activity is one of the most exciting aspects of time-series classification, with substantial practical and theoretical implications. Recent evidence indicates that activity recognition from wearable sensors is an effective technique for tracking elderly adults and children in indoor and outdoor environments. Consequently, researchers have demonstrated considerable passion for developing cutting-edge deep learning systems capable of exploiting unprocessed sensor data from wearable devices and generating practical decision assistance in many contexts. This study provides a deep learning-based approach for recognizing indoor and outdoor movement utilizing an enhanced deep pyramidal residual model called SenPyramidNet and motion information from wearable sensors (accelerometer and gyroscope). The suggested technique develops a residual unit based on a deep pyramidal residual network and introduces the concept of a pyramidal residual unit to increase detection capability. The proposed deep learning-based model was assessed using the publicly available 19Nonsens dataset, which gathered motion signals from various indoor and outdoor activities, including exercising various body parts. The experimental findings demonstrate that the proposed approach can efficiently reuse characteristics and has achieved an identification accuracy of 96.37% for indoor and 97.25% for outdoor activity. Moreover, comparison experiments demonstrate that SenPyramidNet surpasses other cutting-edge deep learning models in terms of accuracy and F1-score. Furthermore, this study explores the influence of several wearable sensors on indoor and outdoor action recognition ability.
Abstract: With the improvement of people's living standards, the demand for health monitoring and exercise detection is increasing. It is of great significance to study human activity recognition (HAR) methods that differ from traditional feature extraction methods. This article uses convolutional neural network (CNN) algorithms in deep learning to automatically extract features of activities related to human life. We used a stochastic gradient descent algorithm to optimize the parameters of the CNN. The trained network model is compressed on STM32CubeMX-AI. Finally, this article introduces the use of neural networks on embedded devices to recognize six human activities of daily life: sitting, standing, walking, jogging, going upstairs, and going downstairs. The acceleration sensor related to human activity information is used to obtain the relevant characteristics of the activity, thereby solving the HAR problem. By drawing the accuracy curve, loss function curve, and confusion matrix diagram of the training model, the recognition effect of the convolutional neural network can be seen more intuitively. After comparing the average accuracy of each set of experiments and the test set of the best model obtained from it, the best model is then selected.
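Feeding a raw accelerometer stream to a CNN, as this abstract describes, presupposes a windowing step that cuts the stream into fixed-size segments. A minimal sketch, assuming overlapping windows (the width and step values in the test are illustrative, not from the paper):

```python
def sliding_windows(samples, width, step):
    """Split a stream of accelerometer samples into fixed-size, possibly
    overlapping windows, the usual input format for a 1D CNN."""
    return [samples[i:i + width]
            for i in range(0, len(samples) - width + 1, step)]
```

With tri-axial data, each sample would be a 3-tuple and the same slicing applies; step < width yields overlap, which is common for HAR to increase training examples.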
Funding: National Natural Science Foundation of China (No. 70971021).
Abstract: With the emergence of sensor networks, research on sensor-based activity recognition has attracted much attention. Many existing methods cannot deal well with cases that contain hundreds of sensors, and their recognition accuracy needs further improvement. A novel framework for recognizing human activities in a smart home is presented. First, small, easy-to-install, and low-cost state-change sensors were adopted for recording state change or use of objects. Then the Bayesian belief network (BBN) was applied to conduct activity recognition by modeling statistical dependencies between sensor data and human activity. An edge-encode genetic algorithm (EEGA) approach was proposed to resolve the difficulties in structure learning of the BBN model under a high-dimensional space and large data set. Finally, some experiments were made using one publicly available dataset. The experimental results show that the EEGA algorithm is effective and efficient in learning the BBN structure and outperforms the conventional approaches. By conducting human activity recognition based on the testing samples, the BBN is effective for human activity recognition and outperforms the naive Bayesian network (NBN) and multiclass naive Bayes classifier (MNBC).
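The naive Bayesian network (NBN) baseline this abstract compares against can be sketched for binary state-change sensors: learn per-class priors and per-sensor activation probabilities, then pick the class with the highest log-posterior. The sensor layout and class labels in the test are invented for illustration.

```python
import math

def train_nb(X, y, alpha=1.0):
    """Per-class prior and Laplace-smoothed P(sensor=1 | class)
    for binary sensor vectors X with class labels y."""
    model = {}
    for c in set(y):
        rows = [x for x, lab in zip(X, y) if lab == c]
        prior = len(rows) / len(y)
        p_on = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                for j in range(len(X[0]))]
        model[c] = (prior, p_on)
    return model

def predict_nb(model, x):
    """Return the class maximizing log P(class) + sum_j log P(x_j | class)."""
    best, best_lp = None, float("-inf")
    for c, (prior, p_on) in model.items():
        lp = math.log(prior)
        for xj, pj in zip(x, p_on):
            lp += math.log(pj if xj else 1 - pj)
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

A full BBN additionally learns dependency structure between sensors, which is what the EEGA structure-learning step addresses; naive Bayes simply assumes the sensors are conditionally independent.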
Abstract: Human pose estimation (HPE) is a procedure for determining the structure of the body pose, and it is considered a challenging issue in the computer vision (CV) communities. HPE finds its applications in several fields, namely activity recognition and human-computer interfaces. Despite the benefits of HPE, it is still a challenging process due to the variations in visual appearance, lighting, occlusions, dimensionality, etc. To resolve these issues, this paper presents a squirrel search optimization with a deep convolutional neural network for HPE (SSDCNN-HPE) technique. The major intention of the SSDCNN-HPE technique is to identify the human pose accurately and efficiently. Primarily, the video frame conversion process is performed and pre-processing takes place via a bilateral filtering-based noise removal process. Then, the EfficientNet model is applied to identify the body points of a person with no problem constraints. Besides, the hyperparameter tuning of the EfficientNet model takes place by the use of the squirrel search algorithm (SSA). In the final stage, the multiclass support vector machine (M-SVM) technique is utilized for the identification and classification of human poses. The design of bilateral filtering followed by the SSA-based EfficientNet model for HPE depicts the novelty of the work. To demonstrate the enhanced outcomes of the SSDCNN-HPE approach, a series of simulations are executed. The experimental results reported the betterment of the SSDCNN-HPE system over recent existing techniques in terms of different measures.
Abstract: Human activity recognition is commonly used in several Internet of Things applications to recognize different contexts and respond to them. Deep learning has gained momentum for identifying activities through sensors, smartphones, or even surveillance cameras. However, it is often difficult to train deep learning models on constrained IoT devices. The focus of this paper is to propose an alternative model by constructing a Deep Learning-based Human Activity Recognition framework for edge computing, which we call DL-HAR. The goal of this framework is to exploit the capabilities of cloud computing to train a deep learning model and deploy it on less powerful edge devices for recognition. The idea is to conduct the training of the model in the Cloud and distribute it to the edge nodes. We demonstrate how DL-HAR can perform human activity recognition at the edge while improving efficiency and accuracy. In order to evaluate the proposed framework, we conducted a comprehensive set of experiments to validate the applicability of DL-HAR. Experimental results on the benchmark dataset show a significant increase in performance compared with the state-of-the-art models.
Funding: supported by the National Natural Science Foundation of China (60573159) and the Guangdong High Technique Project (201100000514).
Abstract: This paper proposes a hybrid approach for recognizing human activities from trajectories. First, an improved hidden Markov model (HMM) parameter learning algorithm, HMM-PSO, is proposed, which achieves a better balance between global and local exploitation through a nonlinear update strategy and a repulsion operation. Then, the event probability sequence (EPS), which consists of a series of events, is computed to describe the unique characteristics of human activities. The analysis of EPS indicates that it is robust to changes in viewing direction and contributes to improving the recognition rate. Finally, the effectiveness of the proposed approach is evaluated by data experiments on current popular datasets.
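An HMM-based recognizer of this kind ultimately scores an observation sequence under each activity's model and picks the best. A minimal sketch of the standard forward algorithm, which computes that score; the states, transition, and emission tables in the test are illustrative, not from the paper:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: total probability of an observation sequence under
    an HMM, the quantity an HMM-based activity recognizer compares across
    candidate activity models."""
    # alpha[t][s] = P(obs[0..t], state at t is s)
    alpha = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    for o in obs[1:]:
        alpha.append({s: emit_p[s][o] * sum(alpha[-1][r] * trans_p[r][s]
                                            for r in states)
                      for s in states})
    return sum(alpha[-1].values())
```

HMM-PSO concerns how the parameters (start_p, trans_p, emit_p) are learned; once learned, recognition reduces to evaluating this forward probability per activity model.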
Abstract: Elderly or disabled people can be supported by a human activity recognition (HAR) system that monitors their activity patterns and intervenes in case changes in their behaviors or critical events occur. An automated HAR could assist these persons in having a more independent life. Providing appropriate and accurate data regarding the activity is the most crucial computation task in the activity recognition system. With the fast development of neural networks, computing, and machine learning algorithms, HAR systems based on wearable sensors have gained popularity in several areas, such as medical services, smart homes, improving human communication with computers, security systems, healthcare for the elderly, mechanization in industry, robot monitoring systems, monitoring athlete training, and rehabilitation systems. In this view, this study develops an improved pelican optimization with deep transfer learning enabled HAR (IPODTL-HAR) system for disabled persons. The major goal of the IPODTL-HAR method is recognizing the human activities of disabled persons and improving their quality of living. The presented IPODTL-HAR model follows data pre-processing for improving the quality of the data. Besides, the EfficientNet model is applied to derive a useful set of feature vectors, and the hyperparameters are adjusted by the use of the Nadam optimizer. Finally, the IPO with deep belief network (DBN) model is utilized for the recognition and classification of human activities. The utilization of the Nadam optimizer and IPO algorithm helps in effectually tuning the hyperparameters related to the EfficientNet and DBN models, respectively. The experimental validation of the IPODTL-HAR method is tested using a benchmark dataset. An extensive comparison study highlighted the betterment of the IPODTL-HAR model over recent state-of-the-art HAR approaches in terms of different measures.
Funding: This work was supported by the Thammasat University Research Fund under the TSRI, Contract Nos. TUFF19/2564 and TUFF24/2565, for the project "AI Ready City Networking in RUN", based on the RUN Digital Cluster collaboration scheme. This research project was also supported by the Thailand Science Research and Innovation Fund and the University of Phayao (Grant No. FF65-RIM041), supported by the National Science, Research and Innovation Fund (NSRF), and by King Mongkut's University of Technology North Bangkok, Contract No. KMUTNB-FF-66-07.
Abstract: Accidents remain an issue in intelligent transportation systems (ITS), despite developments in self-driving technology. Drivers who engage in risky behavior account for more than half of all road accidents, and reckless driving behavior can cause congestion and delays. Computer vision and multimodal sensors have been used to study driving behavior categorization to lessen this problem. Previous research has also collected and analyzed a wide range of data, including electroencephalography (EEG), electrooculography (EOG), and photographs of the driver's face. On the other hand, driving a car is a complicated activity that requires a wide range of body movements. In this work, we propose a ResNet-SE model, an efficient deep learning classifier for driving activity classification based on signal data obtained in real-world traffic conditions using smart glasses. End-to-end learning is achieved by combining residual networks and channel attention approaches into a single learning model. Sensor data from 3-point EOG electrodes, a tri-axial accelerometer, and a tri-axial gyroscope from the Smart Glasses dataset was utilized in this study. We performed various experiments and compared the proposed model to baseline deep learning algorithms (CNNs and LSTMs) to demonstrate its performance. According to the results, the proposed model outperforms previous deep learning models in this domain with an accuracy of 99.17% and an F1-score of 98.96%.
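The channel-attention component of a ResNet-SE style model reduces to a squeeze-and-excitation step over the sensor channels. Below is a minimal NumPy sketch with random placeholder weights; the tensor layout, reduction ratio, and dimensions are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def se_block(x, reduction=4, rng=None):
    """Squeeze-and-excitation (channel attention) sketch for 1-D sensor
    data shaped (batch, channels, time); weights are random placeholders."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, c, t = x.shape
    w1 = rng.normal(0, 0.1, (c, c // reduction))
    w2 = rng.normal(0, 0.1, (c // reduction, c))
    s = x.mean(axis=2)                 # squeeze: global average over time
    z = np.maximum(s @ w1, 0)          # excitation: bottleneck FC -> ReLU
    a = 1 / (1 + np.exp(-(z @ w2)))    # FC -> sigmoid gate, one per channel
    return x * a[:, :, None]           # rescale each channel by its gate

x = np.random.default_rng(1).normal(size=(2, 8, 16))  # 2 windows, 8 channels
y = se_block(x)
```

In the full model this gate sits inside each residual block, letting the network emphasise the more informative sensor channels (e.g. EOG versus gyroscope) per example.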
Funding: This work was funded by the Deanship of Scientific Research at Majmaah University under Project No. R-2023-667.
Abstract: Inpatient falls from beds in hospitals are a common problem, and such falls may result in severe injuries. This problem can be addressed by continuous monitoring of patients using cameras. Recent advancements in deep learning-based video analytics have made fall detection more effective and efficient. Along with fall detection, monitoring other activities of the patients is also of significant concern for assessing improvements in their health. Computation-intensive models are required to monitor every action of the patient precisely, and this requirement limits the applicability of such networks. Hence, to keep the model lightweight, already designed fall detection networks can be extended to monitor the general activities of patients along with fall detection. Motivated by this notion, we propose a novel, lightweight, and efficient patient activity monitoring system that broadly classifies patients' activities into fall, activity, and rest classes based on their poses. The whole network comprises three sub-networks: a convolutional neural network (CNN) based video compression network, a Lightweight Pose Network (LPN), and a Residual Network (ResNet) Mixer block-based activity recognition network. The compression network compresses the video streams using deep learning for efficient storage and retrieval; after that, the LPN estimates human poses. Finally, the activity recognition network classifies the patients' activities based on their poses. The proposed system shows an overall accuracy of approximately 99.7% on a standard dataset, with 99.63% fall detection accuracy, and efficiently monitors different events, which may help track falls and improve inpatients' health.
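The final stage, classifying fall / activity / rest from estimated poses, can be pictured with a toy decision rule. The features and thresholds below (torso angle relative to the floor and mean keypoint motion between frames) are illustrative inventions standing in for the learned ResNet Mixer head, not the paper's method:

```python
def classify_activity(torso_angle_deg, motion):
    """Toy pose-based rule standing in for the learned activity head.

    torso_angle_deg: angle of the torso relative to the floor (90 = upright).
    motion: mean keypoint displacement between consecutive frames.
    Thresholds are illustrative assumptions only.
    """
    if torso_angle_deg < 30:
        return "fall"                       # torso near-horizontal
    return "activity" if motion > 0.05 else "rest"

print(classify_activity(10, 0.0))   # near-horizontal torso -> "fall"
print(classify_activity(80, 0.2))   # upright and moving    -> "activity"
print(classify_activity(80, 0.0))   # upright and still     -> "rest"
```

A learned classifier replaces such hand-set thresholds with features extracted from the full set of LPN keypoints, but the input/output contract (pose features in, one of three coarse classes out) is the same.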