Journal articles (5 found)
1. TEAM: Transformer Encoder Attention Module for Video Classification
Authors: Hae Sung Park, Yong Suk Choi. Computer Systems Science & Engineering, 2024, No. 2, pp. 451-477 (27 pages).
Much like humans focus solely on object movement to understand actions, directing a deep learning model's attention to the core contexts within videos is crucial for improving video comprehension. In a recent study, Video Masked Auto-Encoder (VideoMAE) employs a pre-training approach with a high ratio of tube masking and reconstruction, effectively mitigating spatial bias due to temporal redundancy in full video frames. This steers the model's focus toward detailed temporal contexts. However, as VideoMAE still relies on full video frames during the action recognition stage, it may exhibit a progressive shift in attention towards spatial contexts, deteriorating its ability to capture the main spatio-temporal contexts. To address this issue, we propose an attention-directing module named the Transformer Encoder Attention Module (TEAM). This module effectively directs the model's attention to the core characteristics within each video, inherently mitigating spatial bias. The TEAM first identifies the core features among the overall extracted features from each video. After that, it discerns the specific parts of the video where those features are located, encouraging the model to focus more on these informative parts. Consequently, during the action recognition stage, the proposed TEAM effectively shifts VideoMAE's attention from spatial contexts towards the core spatio-temporal contexts. This attention-shifting mechanism alleviates spatial bias in the model and simultaneously enhances its ability to capture precise video contexts. We conduct extensive experiments to explore the optimal configuration that enables the TEAM to fulfill its intended design purpose and facilitates its seamless integration with the VideoMAE framework. The integrated model, i.e., VideoMAE+TEAM, outperforms the existing VideoMAE by a significant margin on Something-Something-V2 (71.3% vs. 70.3%). Moreover, qualitative comparisons demonstrate that the TEAM encourages the model to disregard insignificant features and focus more on the essential video features, capturing more detailed spatio-temporal contexts within the video.
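The abstract describes the mechanism only at a high level: score the extracted features, locate where the informative ones occur, and re-weight the model's attention accordingly. As an illustration only (not the authors' code), the following PyTorch sketch shows one way such a token re-weighting module could sit on top of a video transformer encoder; all class and parameter names here are assumptions.

```python
import torch
import torch.nn as nn

class TokenAttentionModule(nn.Module):
    """Hypothetical sketch of a TEAM-style module: scores each spatio-temporal
    token, then re-weights tokens so informative ones dominate the representation."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        # Small MLP that assigns an importance score to every token.
        self.scorer = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim) produced by a video transformer encoder.
        scores = self.scorer(tokens)            # (B, N, 1)
        weights = torch.softmax(scores, dim=1)  # attention distribution over tokens
        # Emphasise informative tokens while keeping a residual path.
        return tokens + tokens * weights

# Usage: refine encoder outputs before the classification head.
feats = torch.randn(2, 1568, 768)               # e.g., VideoMAE-style token features
refined = TokenAttentionModule(dim=768)(feats)
print(refined.shape)                            # torch.Size([2, 1568, 768])
```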
Keywords: video classification, action recognition, vision transformer, masked auto-encoder
2. End-to-End Multiview Gesture Recognition for Autonomous Car Parking System
Authors: Hassene Ben Amara, Fakhri Karray. Instrumentation, 2019, No. 3, pp. 76-92 (17 pages).
The use of hand gestures can be the most intuitive human-machine interaction medium. Early approaches to hand gesture recognition used device-based methods. These methods use mechanical or optical sensors attached to a glove or markers, which hinder natural human-machine communication. On the other hand, vision-based methods are less restrictive and allow for more spontaneous communication without the need for an intermediary between human and machine. Therefore, vision-based gesture recognition has been a popular area of research for the past thirty years. Hand gesture recognition finds application in many areas, particularly the automotive industry, where advanced automotive human-machine interface (HMI) designers are using gesture recognition to improve driver and vehicle safety. However, technology advances go beyond active/passive safety and into convenience and comfort. In this context, one of America's big three automakers has partnered with the Centre of Pattern Analysis and Machine Intelligence (CPAMI) at the University of Waterloo to investigate expanding its product segment through machine learning, providing increased driver convenience and comfort, with the particular application of hand gesture recognition for autonomous car parking. The present paper leverages state-of-the-art deep learning and optimization techniques to develop a vision-based multiview dynamic hand gesture recognizer for a self-parking system. We propose a 3D-CNN gesture model architecture that we train on a publicly available hand gesture database. We apply transfer learning methods to fine-tune the pre-trained gesture model on custom-made data, which significantly improves the proposed system's performance in a real-world environment. We adapt the architecture of the end-to-end solution to expand the state-of-the-art video classifier from a single image as input (fed by a monocular camera) to a multiview 360° feed provided by a six-camera module. Finally, we optimize the proposed solution to run on a resource-limited embedded platform (Nvidia Jetson TX2), used by automakers for vehicle-based features, without sacrificing the accuracy, robustness, and real-time functionality of the system.
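The transfer-learning recipe described here (pre-train a 3D-CNN on a public gesture database, then fine-tune on custom data) can be illustrated with a short PyTorch sketch. The backbone below, torchvision's r3d_18 pre-trained on Kinetics-400, is only a stand-in for the paper's own gesture model, and the gesture class count is assumed.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

NUM_GESTURES = 10  # assumed number of gesture classes

# Stand-in pre-trained 3D-CNN (the paper trains its own gesture model).
model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)

# Freeze the pre-trained spatiotemporal feature extractor.
for p in model.parameters():
    p.requires_grad = False

# Replace the classifier head and fine-tune it on the custom gesture data.
model.fc = nn.Linear(model.fc.in_features, NUM_GESTURES)
optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)

# One dummy training step on a clip batch: (batch, channels, frames, H, W).
clips = torch.randn(2, 3, 16, 112, 112)
labels = torch.tensor([0, 3])
loss = nn.functional.cross_entropy(model(clips), labels)
loss.backward()
optimizer.step()
```

In practice, later stages of fine-tuning often unfreeze deeper backbone layers at a lower learning rate once the new head has converged.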
Keywords: deep learning, video classification, dynamic hand gesture recognition, multiview, embedded platform, automotive, vehicle self-parking
3. Automatic Detection of Nephrops Norvegicus Burrows from Underwater Imagery Using Deep Learning (cited: 1)
Authors: Atif Naseer, Enrique Nava Baro, Sultan Daud Khan, Yolanda Vila, Jennifer Doyle. Computers, Materials & Continua (SCIE, EI), 2022, No. 3, pp. 5321-5344 (24 pages).
The Norway lobster, Nephrops norvegicus, is one of the main commercial crustacean fisheries in Europe. The abundance of Nephrops norvegicus stocks is assessed by identifying and counting the burrows where they live from underwater videos collected by camera systems mounted on sledges. The Spanish Oceanographic Institute (IEO) and the Marine Institute Ireland (MI Ireland) conduct annual underwater television (UWTV) surveys to estimate the total abundance of Nephrops within the specified area, with a coefficient of variation (CV), or relative standard error, of less than 20%. Currently, the identification and counting of Nephrops burrows are carried out manually by marine experts, which is quite a time-consuming job. As a solution, we propose an automated system based on deep neural networks that detects and counts the Nephrops burrows in video footage with high precision. The proposed system introduces a deep-learning-based automated way to identify and classify the Nephrops burrows. This research work uses the current state-of-the-art Faster RCNN models, Inception v2 and MobileNet v2, for object detection and classification. We conduct experiments on two data sets, namely the Smalls Nephrops survey (FU 22) and the Cadiz Nephrops survey (FU 30), collected by the Marine Institute Ireland and the Spanish Oceanographic Institute, respectively. From the results, we observe that the Inception model achieved a higher precision and recall rate than the MobileNet model. The best mean Average Precision (mAP) recorded by the Inception model is 81.61%, compared to MobileNet, which achieves a best mAP of 75.12%.
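For context, the detect-and-count workflow the paper automates can be sketched with torchvision's off-the-shelf Faster R-CNN. The ResNet-50 FPN backbone below stands in for the Inception v2 and MobileNet v2 models used in the study, and a real system would first need to be fine-tuned on annotated burrow images; the confidence threshold and helper name are assumptions.

```python
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Stand-in detector: torchvision's Faster R-CNN with a ResNet-50 FPN backbone
# (the paper uses Inception v2 and MobileNet v2 backbones instead).
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

def count_detections(frame: torch.Tensor, score_threshold: float = 0.5) -> int:
    """Count detections above a confidence threshold in one video frame.
    frame: (3, H, W) tensor with values in [0, 1]."""
    with torch.no_grad():
        prediction = model([frame])[0]
    keep = prediction["scores"] >= score_threshold
    return int(keep.sum())

# Usage on a dummy frame; a fine-tuned burrow detector would also filter by class id.
frame = torch.rand(3, 480, 640)
print(count_detections(frame))
```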
Keywords: Faster RCNN, computer vision, Nephrops norvegicus, Nephrops norvegicus stock assessment, underwater videos, classification
4. Transfer Learning on Deep Neural Networks to Detect Pornography
Author: Saleh Albahli. Computer Systems Science & Engineering (SCIE, EI), 2022, No. 11, pp. 701-717 (17 pages).
While the internet has a lot of positive impact on society, there are negative components. Accessible to everyone through online platforms, pornography induces psychological and health-related issues among people of all ages. While a difficult task, detecting pornography is an important step in identifying porn and adult content in a video. In this paper, an architecture is proposed which yielded high scores for both training and testing. The dataset was produced from 190 videos, yielding more than 19 hours of footage. The main sources for the content were YouTube, movies, torrents, and websites that host both pornographic and non-pornographic content. The videos covered different ethnicities and skin colors, which ensures the models can detect any kind of video. VGG16, Inception V3, and ResNet 50 models were initially trained to detect these pornographic images but failed to achieve high testing accuracy, with accuracies of 0.49, 0.49, and 0.78, respectively. Finally, utilizing transfer learning, a convolutional neural network was designed and yielded an accuracy of 0.98.
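The frame-level transfer-learning approach described here can be sketched as follows: an ImageNet-pretrained backbone with a new binary head classifies sampled frames, and the frame probabilities are aggregated into a video-level decision. This is an illustrative stand-in, not the paper's final architecture; the backbone choice, threshold, and function name are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

# Hypothetical frame classifier: ImageNet-pretrained ResNet50 with a binary head.
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = safe, 1 = pornographic
model.eval()

@torch.no_grad()
def flag_video(frames: torch.Tensor, threshold: float = 0.5) -> bool:
    """Classify each sampled frame, then flag the video if the average
    probability of the pornographic class exceeds the threshold.
    frames: (num_frames, 3, 224, 224), normalized as the backbone expects."""
    probs = torch.softmax(model(frames), dim=1)[:, 1]
    return bool(probs.mean() > threshold)

# Usage on dummy sampled frames (the new head must first be fine-tuned on labeled data).
print(flag_video(torch.randn(8, 3, 224, 224)))
```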
Keywords: pornographic video detection, classification, convolutional neural network, InceptionV3, ResNet50, VGG16
5. STRNet: Triple-stream Spatiotemporal Relation Network for Action Recognition (cited: 1)
Authors: Zhi-Wei Xu, Xiao-Jun Wu, Josef Kittler. International Journal of Automation and Computing (EI, CSCD), 2021, No. 5, pp. 718-730 (13 pages).
Learning comprehensive spatiotemporal features is crucial for human action recognition. Existing methods tend to model the spatiotemporal feature blocks in an integrate-separate-integrate form, such as the appearance-and-relation network (ARTNet) and the spatiotemporal and motion network (STM). However, as blocks stack up, the rear part of the network has poor interpretability. To avoid this problem, we propose a novel architecture called the spatial temporal relation network (STRNet), which can learn explicit information about appearance, motion and, especially, temporal relations. Specifically, our STRNet is constructed from three branches, which separate the features into 1) an appearance pathway, to obtain spatial semantics, 2) a motion pathway, to reinforce the spatiotemporal feature representation, and 3) a relation pathway, to focus on capturing temporal relation details of successive frames and to explore long-term representation dependency. In addition, our STRNet does not simply merge the multi-branch information; we apply a flexible and effective strategy to fuse the complementary information from multiple pathways. We evaluate our network on four major action recognition benchmarks: Kinetics-400, UCF-101, HMDB-51, and Something-Something v1, demonstrating that our STRNet achieves state-of-the-art results on the UCF-101 and HMDB-51 datasets, as well as accuracy comparable to the state-of-the-art method on Something-Something v1 and Kinetics-400.
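The three-pathway idea (compute appearance, motion, and relation features separately, then fuse them rather than simply summing) can be made concrete with a schematic PyTorch module. The layer choices below are purely illustrative and far simpler than the actual STRNet; only the overall multi-branch-plus-learned-fusion structure follows the abstract.

```python
import torch
import torch.nn as nn

class TripleStreamFusion(nn.Module):
    """Schematic three-pathway network in the spirit of STRNet: appearance,
    motion, and temporal-relation features are extracted separately and then
    fused by a learned head (names and layers here are illustrative only)."""
    def __init__(self, dim: int = 256, num_classes: int = 174):
        super().__init__()
        # Pathways differ only in temporal receptive field in this toy sketch.
        self.appearance = nn.Conv3d(3, dim, (1, 7, 7), stride=(1, 2, 2), padding=(0, 3, 3))
        self.motion     = nn.Conv3d(3, dim, (3, 7, 7), stride=(1, 2, 2), padding=(1, 3, 3))
        self.relation   = nn.Conv3d(3, dim, (5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3))
        self.pool = nn.AdaptiveAvgPool3d(1)
        # Learned fusion of the concatenated pathway features instead of a plain sum.
        self.fuse = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, num_classes))

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 3, frames, H, W)
        feats = [self.pool(branch(clip)).flatten(1)
                 for branch in (self.appearance, self.motion, self.relation)]
        return self.fuse(torch.cat(feats, dim=1))

# Usage: 174 classes matches Something-Something v1's label set.
logits = TripleStreamFusion()(torch.randn(2, 3, 8, 112, 112))
print(logits.shape)  # torch.Size([2, 174])
```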
Keywords: action recognition, spatiotemporal relation, multi-branch fusion, long-term representation, video classification