Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2018R1D1A1B07042967) and the Soonchunhyang University Research Fund.
Abstract: Violence recognition is crucial because of its applications in security and law enforcement. Existing semi-automated systems rely on tedious manual surveillance, which causes human errors and makes these systems less effective. Several approaches have been proposed using trajectory-based, non-object-centric, and deep-learning-based methods. Previous studies have shown that deep learning techniques attain higher accuracy and lower error rates than other methods; however, their performance must still be improved. This study explores state-of-the-art deep learning architectures, convolutional neural networks (CNNs) and Inception-v4, to detect and recognize violence in video data. In the proposed framework, a keyframe extraction technique eliminates duplicate consecutive frames. This keyframing phase reduces the training data size and hence the computational cost. For the feature selection and classification tasks, the sequential CNN uses a single kernel size, whereas the Inception-v4 CNN uses multiple kernel sizes across different layers of the architecture. For empirical analysis, four widely used standard datasets with diverse activities are employed. The results confirm that the proposed approach attains 98% accuracy, reduces the computational cost, and outperforms existing techniques for violence detection and recognition.
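The abstract above does not give implementation details for the keyframing phase. A minimal sketch of one common way to eliminate duplicate consecutive frames, keeping a frame only when its mean absolute pixel difference from the last kept frame exceeds a threshold, is shown below; the function name and threshold value are illustrative assumptions, not the authors' method.

```python
import numpy as np

def extract_keyframes(frames, threshold=10.0):
    """Drop near-duplicate consecutive frames.

    frames: list of 2-D numpy arrays (grayscale frames).
    threshold: mean absolute pixel difference below which a frame
    is treated as a duplicate of the last kept frame and skipped.
    """
    if not frames:
        return []
    keyframes = [frames[0]]  # always keep the first frame
    for frame in frames[1:]:
        diff = np.mean(np.abs(frame.astype(float) - keyframes[-1].astype(float)))
        if diff >= threshold:
            keyframes.append(frame)
    return keyframes
```

Lowering `threshold` keeps more frames; the paper's actual duplicate criterion may differ.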
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2018R1D1A1B07042967) and the Soonchunhyang University Research Fund.
Abstract: Human gait recognition (HGR) has received considerable attention in the last decade as an alternative biometric technique. The main challenges in gait recognition are changes in viewing angle and covariant factors, the major covariant factors being walking while carrying a bag and walking while wearing a coat. Many deep-learning-based techniques for HGR have been presented in the literature; however, an efficient framework is still required for accurate and fast gait recognition. In this work, we propose a fully automated framework for HGR from video sequences based on deep learning and improved ant colony optimization (IACO). The proposed framework consists of four primary steps. In the first step, the database is normalized into video frames. In the second step, two pre-trained models, ResNet101 and InceptionV3, are selected and modified according to the nature of the dataset. After that, both modified models are trained using transfer learning and features are extracted. The IACO algorithm then selects the best features, which are passed to a cubic SVM for final classification; the cubic SVM employs a multiclass method. Experiments were carried out on three angles (0, 18, and 180 degrees) of the CASIA B dataset, yielding accuracies of 95.2%, 93.9%, and 98.2%, respectively. A comparison with existing techniques shows that the proposed method outperforms them in both accuracy and computational time.
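The IACO step above is a wrapper-style search over subsets of extracted features. As a much-simplified stand-in for a pheromone-guided ant colony search, not the paper's algorithm, the sketch below performs stochastic hill climbing over a binary feature mask, keeping a toggled bit only when a caller-supplied fitness score improves; all names and parameters here are hypothetical.

```python
import random

def select_features(n_features, fitness, n_iters=100, seed=0):
    """Toy stochastic feature selection over a binary inclusion mask.

    fitness: callable taking a 0/1 mask and returning a score to maximize
    (in practice this would be classifier accuracy on the masked features).
    Returns the best mask found and its score.
    """
    rng = random.Random(seed)
    mask = [1] * n_features          # start with all features included
    best = fitness(mask)
    for _ in range(n_iters):
        i = rng.randrange(n_features)
        mask[i] ^= 1                 # toggle one feature in or out
        score = fitness(mask)
        if score > best:
            best = score             # keep the improving change
        else:
            mask[i] ^= 1             # revert a non-improving change
    return mask, best
```

A real IACO would maintain pheromone weights over features and have multiple ants construct candidate subsets per iteration; this single-trajectory search only illustrates the select-evaluate-keep loop.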
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2018R1D1A1B07042967) and the Soonchunhyang University Research Fund.
Abstract: Sensor-based human activity recognition (HAR) has numerous applications in eHealth, sports, fitness assessment, ambient assisted living (AAL), human-computer interaction, and more. Human physical activity can be monitored using wearable sensors or external devices. The use of external devices has disadvantages in terms of cost, hardware installation, storage, computational time, and dependence on lighting conditions. Therefore, most researchers use smart devices such as smartphones, smart bands, and watches, which contain sensors like accelerometers, gyroscopes, and GPS, along with adequate processing capabilities. For the recognition task, human activities can be broadly categorized as basic or complex. Recognition of complex activities has received little attention because of the difficulty of the problem when using either smartphones or smartwatches alone; another reason is the lack of sensor-based labeled datasets covering several complex daily-life activities. Some researchers have worked on a smartphone's inertial sensors to perform human activity recognition, while a few have used both pocket and wrist positions. In this research, we propose a novel framework capable of recognizing both basic and complex human activities using the built-in sensors of a smartphone and a smartwatch. We consider 25 physical activities, including 20 complex ones, using the smart devices' built-in sensors. To the best of our knowledge, the existing literature considers only up to 15 activities of daily life.
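A typical first step in sensor-based HAR pipelines like the one described above is to segment the raw inertial signal into fixed-size windows and compute per-window statistics as classifier features. The sketch below shows this for a single accelerometer axis; the window length and the choice of mean and standard deviation as features are illustrative assumptions, not details from the paper.

```python
import math

def window_features(samples, window_size=50):
    """Split a 1-D sensor signal into non-overlapping fixed-size windows
    and compute (mean, standard deviation) per window, the kind of
    feature vector typically fed to an activity classifier.
    """
    features = []
    for start in range(0, len(samples) - window_size + 1, window_size):
        w = samples[start:start + window_size]
        mean = sum(w) / window_size
        var = sum((x - mean) ** 2 for x in w) / window_size
        features.append((mean, math.sqrt(var)))
    return features
```

Production pipelines usually use overlapping windows and richer features (energy, correlation between axes, frequency-domain terms), especially for the complex activities this work targets.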