Journal Articles
122 articles found
1. Quantitative analysis of the morphing wing mechanism of raptors: IMMU-based motion capture system and its application on gestures of a Falco peregrinus
Authors: 唐迪, 朱力文, 施文熙, 刘大伟, 杨茵, 姚国荣, 严森祥, 范忠勇, 陆祎玮, 王思宇 《Chinese Physics B》 SCIE EI CAS CSCD, 2024(1): 734-742
This paper presents a novel tiny motion capture system for measuring bird posture based on inertial and magnetic measurement units (IMMUs) made up of micromachined gyroscopes, accelerometers, and magnetometers. Multiple quaternion-based extended Kalman filters were implemented to estimate the absolute orientations with high accuracy. Under the guidance of ornithology experts, the extending/contracting motions and flapping cycles were recorded using the developed motion capture system, and the orientation of each bone was also analyzed. The captured flapping gesture of the Falco peregrinus is a crucial addition to the motion database of raptors as well as to bionic design.
Keywords: Falco peregrinus, IMMU-based motion capture system, flapping gesture
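The gyroscope-propagation step that underlies such quaternion-based filters can be sketched as follows. This is only the prediction half of an extended Kalman filter (the accelerometer/magnetometer correction step is omitted), and the integration scheme is an illustrative first-order choice, not taken from the paper:

```python
import numpy as np

def quat_integrate(q, omega, dt):
    """Propagate a unit orientation quaternion (w, x, y, z) by one
    gyroscope sample omega = (wx, wy, wz) in rad/s over dt seconds.
    Uses the kinematic relation q_dot = 0.5 * Omega(omega) @ q."""
    wx, wy, wz = omega
    Omega = np.array([
        [0.0, -wx, -wy, -wz],
        [wx,  0.0,  wz, -wy],
        [wy, -wz,  0.0,  wx],
        [wz,  wy, -wx,  0.0],
    ])
    q = q + 0.5 * dt * (Omega @ q)   # first-order (Euler) integration
    return q / np.linalg.norm(q)     # renormalize to stay a unit quaternion
```

Integrating a constant yaw rate of π/2 rad/s for one second, for example, rotates the quaternion from identity to approximately (cos π/4, 0, 0, sin π/4).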
2. Implementation of natural hand gestures in holograms for 3D object manipulation
Authors: Ajune Wanis ISMAIL, Muhammad Akma IMAN 《Virtual Reality & Intelligent Hardware》 EI, 2023(5): 439-450
Holograms provide a characteristic manner to display and convey information, and have been improved to provide better user interactions. Holographic interactions are important as they improve user interactions with virtual objects. Gesture interaction is a recent research topic, as it allows users to use their bare hands to directly interact with the hologram. However, it remains unclear whether real hand gestures are well suited for hologram applications. Therefore, we discuss the development process and implementation of three-dimensional object manipulation using natural hand gestures in a hologram. We describe the design and development process for hologram applications and their integration with real hand gesture interactions as initial findings. Experimental results from the NASA TLX form are discussed. Based on the findings, we actualize the user interactions in the hologram.
Keywords: hologram, gesture interaction, natural hand gesture, three-dimensional object manipulation, gesture recognition
3. On the Importance of Bodily Gestures, Facial Expressions, and Intonations to Thinking Expression and Interpretation
Authors: 石桐, 蒋翃遐 《海外英语》, 2021(17): 288-289
Bodily gestures, facial expressions, and intonations are argued to be notably important features of spoken language, as opposed to written language. Bodily gestures, with or without spoken words, can influence the clarity and density of expression and the involvement of listeners. Facial expressions, whether or not they correspond with exact thought, can be "decoded" to influence the intelligibility of expression. Intonation can always reflect the mutual beliefs concerning the propositional content and states of consciousness relating to expression and interpretation. Therefore, these can considerably improve or abate the accuracy of expression and interpretation of thought.
Keywords: bodily gestures, facial expressions, intonations, thought
4. Engagement Detection Based on Analyzing Micro Body Gestures Using 3D CNN
Authors: Shoroog Khenkar, Salma Kammoun Jarraya 《Computers, Materials & Continua》 SCIE EI, 2022(2): 2655-2677
This paper proposes a novel, efficient and affordable approach to detect students' engagement levels in an e-learning environment by using webcams. Our method analyzes spatiotemporal features of e-learners' micro body gestures, which are mapped to emotions and appropriate engagement states. The proposed engagement detection model uses a three-dimensional convolutional neural network to analyze both temporal and spatial information across video frames. We follow a transfer learning approach by using the C3D model that was trained on the Sports-1M dataset. The adopted C3D model was used in two different ways: as a feature extractor with linear classifiers, and as a classifier after fine-tuning the pretrained model. Our model was tested and its performance was evaluated and compared to existing models. It proved its effectiveness and superiority over the other existing methods with an accuracy of 94%. The results of this work will contribute to the development of smart and interactive e-learning systems with adaptive responses based on users' engagement levels.
Keywords: micro body gestures, engagement detection, 3D CNN, transfer learning, e-learning, spatiotemporal features
5. Deep Reinforcement Learning Based Unmanned Aerial Vehicle (UAV) Control Using 3D Hand Gestures
Authors: Fawad Salam Khan, Mohd Norzali Haji Mohd, Saiful Azrin B. M. Zulkifli, Ghulam E Mustafa Abro, Suhail Kazi, Dur Muhammad Soomro 《Computers, Materials & Continua》 SCIE EI, 2022(9): 5741-5759
The evident change in the design of autopilot systems has produced massive help for the aviation industry and requires frequent upgrades. Reinforcement learning delivers appropriate outcomes in a continuous environment where controlling the Unmanned Aerial Vehicle (UAV) requires maximum accuracy. In this paper, we designed a hybrid framework based on reinforcement learning and deep learning in which the traditional electronic flight controller is replaced by 3D hand gestures. The algorithm takes 3D hand gestures as input and integrates them with the Deep Deterministic Policy Gradient (DDPG) to receive the best reward and take actions according to the 3D hand gesture input. The UAV consists of a Jetson Nano embedded testbed, a Global Positioning System (GPS) sensor module, and an Intel depth camera. The collision avoidance system, based on the polar mask segmentation technique, detects obstacles and decides the best path according to the designed reward function. The analysis of the results shows the best accuracy and computational time for the novel framework when compared with a traditional Proportional Integral Derivative (PID) flight controller. Six reward functions were estimated for 2500, 5000, 7500, and 10000 training episodes, normalized between 0 and −4000. The best observation was captured at 2500 episodes, where the rewards reach their maximum value. The achieved training accuracy of polar mask segmentation for collision avoidance is 86.36%.
Keywords: deep reinforcement learning, UAV, 3D hand gestures, obstacle detection, polar mask
6. Robust Interactive Method for Hand Gestures Recognition Using Machine Learning (cited: 1)
Authors: Amal Abdullah Mohammed Alteaimi, Mohamed Tahar Ben Othman 《Computers, Materials & Continua》 SCIE EI, 2022(7): 577-595
A Hand Gestures Recognition (HGR) system can be employed to facilitate communication between humans and computers instead of using special input and output devices, which may complicate communication with computers, especially for people with disabilities. Hand gestures are a natural human-to-human communication method that can also be used in human-computer interaction. Many researchers have developed techniques and methods aimed at understanding and recognizing specific hand gestures by employing one or two machine learning algorithms with reasonable accuracy. This work aims to develop a powerful hand gesture recognition model with a 100% recognition rate. We propose an ensemble classification model that combines the most powerful machine learning classifiers to obtain diversity and improve accuracy. The majority voting method was used to aggregate the predictions produced by each classifier and obtain the final classification result. Our model was trained on a self-constructed dataset containing 1600 images of ten different hand gestures. The combination of Canny's edge detector and the histogram of oriented gradients method worked well with the ensemble classifier and improved the recognition rate. The experimental results show the robustness of our proposed model: Logistic Regression and Support Vector Machine achieved 100% accuracy. The developed model was validated on two public datasets, and the findings show that our model outperformed other compared studies.
Keywords: hand gesture recognition, Canny edge detector, histogram of oriented gradients, ensemble classifier, majority voting
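The majority-voting aggregation step described in the abstract reduces to a small routine. A minimal sketch (the actual base classifiers, e.g. Logistic Regression and SVM, are not reproduced here; the gesture labels are illustrative):

```python
from collections import Counter

def majority_vote(predictions):
    """Aggregate one predicted label per base classifier into a final label.
    Ties resolve to whichever label reaches the top count first."""
    return Counter(predictions).most_common(1)[0][0]
```

For one sample, each trained classifier contributes a label and the most frequent one wins, e.g. `majority_vote(["fist", "fist", "palm"])` selects `"fist"`.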
7. Hand Gestures Recognition Based on One-Channel Surface EMG Signal
Authors: Junyi Cao, Zhongming Tian, Zhengtao Wang 《Journal of Software Engineering and Applications》, 2019(9): 383-392
This paper presents an experiment using OpenBCI to collect data for two hand gestures and decode the signal to distinguish between them. The signal was extracted with three electrodes on the subject's forearm and transferred in one channel. After applying a Butterworth bandpass filter, we chose a novel way to detect the gesture action segment. Instead of using a moving average algorithm, which is based on the calculation of energy, we developed an algorithm based on the Hilbert transform to find a dynamic threshold and identify the action segment. Four features were extracted from each activity section, generating feature vectors for classification. During classification, we compared k-nearest neighbors (KNN) and support vector machine (SVM) on a relatively small number of samples. Most experiments rely on a large quantity of data to pursue a highly fitted model, but in circumstances where enough training data cannot be obtained, exploring the best classification method for small-sample data becomes imperative. Though KNN is known for its simplicity and practicability, it is relatively time-consuming. SVM performs better in terms of both time requirement and recognition accuracy, due to its different risk minimization principle. Experimental results show an average recognition rate for SVM that is 1.25% higher than for KNN, while SVM's running time is 2.031 s shorter than KNN's.
Keywords: electromyography (EMG), gesture recognition, Hilbert transform, k-nearest neighbors (KNN), support vector machine (SVM)
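Hilbert-transform-based action-segment detection of this kind can be sketched with an FFT-derived amplitude envelope and a dynamic threshold. The threshold rule used below (envelope mean plus k standard deviations) is an assumption for illustration; the paper derives its own dynamic threshold:

```python
import numpy as np

def hilbert_envelope(x):
    """Amplitude envelope of a real signal via the FFT-based analytic signal."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)          # spectral weights that zero negative frequencies
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

def detect_action_segment(emg, k=1.0):
    """Flag samples whose envelope exceeds mean + k*std (dynamic threshold)."""
    env = hilbert_envelope(emg)
    return env > env.mean() + k * env.std()
```

Because the threshold adapts to the recording's own statistics, a burst of muscle activity stands out against the resting baseline without a hand-tuned energy cutoff.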
8. Holographic Raman Tweezers Controlled by Hand Gestures and Voice Commands
Authors: Zoltan Tomori, Marian Antalik, Peter Kesa, Jan Kanka, Petr Jakl, Mojmir Sery, Silvie Bernatova, Pavel Zemanek 《Optics and Photonics Journal》, 2013(2): 331-336
Several attempts have appeared recently to control optical trapping systems via touch tablets and cameras instead of a mouse and joystick. Our approach is based on modern low-cost hardware combined with fingertip and speech recognition software. The positions of the operator's hands or fingertips control the positions of the trapping beams in holographic optical tweezers that optically manipulate microobjects. We tested and adapted two systems for hand position detection and gesture recognition: the Creative Interactive Gesture Camera and the Leap Motion. We further enhanced the holographic Raman tweezers (HRT) system with voice commands controlling the micropositioning stage and the acquisition of Raman spectra. The interface communicates with the HRT either directly, which requires adaptation of the HRT firmware, or indirectly by simulating mouse and keyboard messages. Its utilization in real experiments sped up the operator's communication with the system approximately two times in comparison with traditional control by mouse and keyboard.
Keywords: holographic optical tweezers, Raman tweezers, natural user interface, Leap Motion, gesture camera
9. Design of finger gestures for locomotion in virtual reality
Authors: Rachel HUANG, Carisa HARRIS-ADAMSON, Dan ODELL, David REMPEL 《Virtual Reality & Intelligent Hardware》, 2019(1): 1-9
Background: Within a virtual environment (VE), the control of locomotion (e.g., self-travel) is critical for creating a realistic and functional experience. Usually, the direction of locomotion while using a head-mounted display (HMD) is determined by the direction the head is pointing, and forward or backward motion is controlled with a hand-held controller. However, hand-held devices can be difficult to use while the eyes are covered with an HMD. Free hand gestures, tracked with a camera or a hand data glove, have the advantage of eliminating the need to look at the hand controller, but the design of hand or finger gestures for this purpose has not been well developed. Methods: This study used a depth-sensing camera to track fingertip location (curling and straightening the fingers), which was converted to forward or backward self-travel in the VE. Fingertip position was converted to self-travel velocity using a mapping function with three parameters: a region of zero velocity (dead zone) around the relaxed hand position, a linear relationship of fingertip position to velocity (slope or β) beginning at the edge of the dead zone, and an exponential rather than linear mapping of fingertip position to velocity (exponent). Using an HMD, participants moved forward along a virtual road and stopped at a target on the road by controlling self-travel velocity with finger flexion and extension. Each of the three mapping-function parameters was tested at three levels. Outcomes measured included usability ratings, fatigue, nausea, and time to complete the tasks. Results: Twenty subjects participated, but five did not complete the study due to nausea. The size of the dead zone had little effect on performance or usability. Subjects preferred lower β values, which were associated with better subjective ratings of control and reduced time to complete the task, especially for large targets. Exponent values of 1.0 or greater were preferred and reduced the time to complete the task, especially for small targets. Conclusions: Small finger movements can be used to control the velocity of self-travel in a VE. The functions used for converting fingertip position to movement velocity influence usability and performance.
Keywords: human-computer interaction, virtual environment, gesture design
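The three-parameter mapping from fingertip position to self-travel velocity can be sketched as below. The specific numbers (dead zone 0.1, β = 2.0, exponent 1.0, velocity cap 3.0) are hypothetical placeholders, not the levels tested in the study:

```python
def fingertip_to_velocity(pos, dead_zone=0.1, beta=2.0, exponent=1.0, v_max=3.0):
    """Map a normalized fingertip displacement in [-1, 1] to self-travel velocity.

    Inside the dead zone the hand is considered relaxed and velocity is zero;
    beyond it, velocity grows as beta * (displacement - dead_zone) ** exponent,
    clamped to v_max. The sign selects forward or backward travel."""
    mag = abs(pos)
    if mag <= dead_zone:
        return 0.0
    v = min(beta * (mag - dead_zone) ** exponent, v_max)
    return v if pos > 0 else -v
```

With exponent 1.0 the response beyond the dead zone is linear in fingertip displacement; exponents above 1.0 soften small movements while preserving fast travel at full flexion, matching the study's finding that larger exponents helped with small targets.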
10. A laser ablated graphene-based flexible self-powered pressure sensor for human gestures and finger pulse monitoring (cited: 12)
Authors: Partha Sarati Das, Ashok Chhetry, Pukar Maharjan, M. Salauddin Rasel, Jae Yeong Park 《Nano Research》 SCIE EI CAS CSCD, 2019(8): 1789-1795
Flexible triboelectric nanogenerator (TENG)-based pressure sensors are essential for a wide range of applications, comprising wearable healthcare systems, intuitive human-device interfaces, electronic skin (e-skin), and artificial intelligence. Most conventional fabrication methods used to produce high-performance TENGs involve plasma treatment, photolithography, printing, and electro-deposition. However, these fabrication techniques are expensive, multi-step, time-consuming and not suitable for mass production, which are the main barriers to efficient and cost-effective commercialization of TENGs. Here, we established a highly reliable scheme for the fabrication of a novel eco-friendly, low-cost, TENG-based pressure sensor (TEPS) designed for self-powered human gesture detection (SP-HGD) as well as wearable healthcare applications. The sensors with microstructured electrodes performed well, with high sensitivity (7.697 kPa^-1), a low limit of detection (~1 Pa), fast response time (<9.9 ms), and high stability over >4,000 compression-release cycles. The proposed method is suitable for the adaptable fabrication of TEPS at extremely low cost, with possible applications in self-powered systems, especially e-skin and healthcare applications.
Keywords: flexible, laser ablated graphene, self-powered, triboelectric nanogenerator, human gestures, finger pulse
11. Dynamic Hand Gesture-Based Person Identification Using Leap Motion and Machine Learning Approaches
Authors: Jungpil Shin, Md. Al Mehedi Hasan, Md. Maniruzzaman, Taiki Watanabe, Issei Jozume 《Computers, Materials & Continua》 SCIE EI, 2024(4): 1205-1222
Person identification is one of the most vital tasks for network security. People are more concerned about their security due to traditional passwords becoming weaker or leaking in various attacks. In recent decades, fingerprints and faces have been widely used for person identification, but these carry the risk of information leakage, since fingers or faces can be reproduced from a snapshot. Recently, attention has turned to creating identifiable patterns that cannot be falsely reproduced, by capturing psychological and behavioral information about a person using vision- and sensor-based techniques. In existing studies, most researchers used very complex patterns, which need special training and attention to remember and fail to capture a person's psychological and behavioral information properly. To overcome these problems, this research devised a novel dynamic hand gesture-based person identification system using a Leap Motion sensor. The study developed two hand gesture-based pattern datasets for the experiments, containing more than 500 samples collected from 25 subjects. Various static and dynamic features were extracted from the hand geometry. Random forest was used to measure feature importance using the Gini index. Finally, a support vector machine was implemented for person identification, and its performance was evaluated using identification accuracy. The experimental results showed that the proposed system produced an identification accuracy of 99.8% for arbitrary hand gesture-based patterns and 99.6% for the same dynamic hand gesture-based patterns. This result indicates that the proposed system can be used for person identification in the field of security.
Keywords: person identification, Leap Motion, hand gesture, random forest, support vector machine
12. Recent Advances on Deep Learning for Sign Language Recognition
Authors: Yanqiong Zhang, Xianwei Jiang 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024(6): 2399-2450
Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR: expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
Keywords: sign language recognition, deep learning, artificial intelligence, computer vision, gesture recognition
13. HgaNets: Fusion of Visual Data and Skeletal Heatmap for Human Gesture Action Recognition
Authors: Wuyan Liang, Xiaolong Xu 《Computers, Materials & Continua》 SCIE EI, 2024(4): 1089-1103
Recognition of human gesture actions is a challenging issue due to the complex patterns in both visual and skeletal features. Existing gesture action recognition (GAR) methods typically analyze visual and skeletal data, failing to meet the demands of various scenarios. Furthermore, multi-modal approaches lack the versatility to efficiently process both uniform and disparate input patterns. Thus, in this paper, an attention-enhanced pseudo-3D residual model, called HgaNets, is proposed to address the GAR problem. This model comprises two independent components designed for modeling visual RGB (red, green and blue) images and 3D skeletal heatmaps, respectively. More specifically, each component consists of two main parts: 1) a multi-dimensional attention module for capturing important spatial, temporal and feature information in human gestures; 2) a spatiotemporal convolution module that utilizes pseudo-3D residual convolution to characterize the spatiotemporal features of gestures. The output weights of the two components are then fused to generate the recognition results. Finally, we conducted experiments on four datasets to assess the efficiency of the proposed model. The results show that the accuracy on the four datasets reaches 85.40%, 91.91%, 94.70%, and 95.30%, respectively, while the inference time is 0.54 s and the parameter count is 2.74 M. These findings highlight that the proposed model outperforms other existing approaches in terms of recognition accuracy.
Keywords: gesture action recognition, multi-dimensional attention, pseudo-3D, skeletal heatmap
14. Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification
Authors: Jungpil Shin, Md. Al Mehedi Hasan, Abu Saleh Musa Miah, Kota Suzuki, Koki Hirooka 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024(6): 2605-2625
Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body movements within JSL gestures. Simultaneously, a deep learning-based transfer learning stream captures hierarchical representations of JSL gestures in the second stream. We then concatenated the critical information of the first stream and the hierarchical features of the second stream to produce multiple levels of fusion features, aiming to create a comprehensive representation of the JSL gestures. After reducing the dimensionality of the features, a feature selection approach and a kernel-based support vector machine (SVM) were used for classification. To assess the effectiveness of our approach, we conducted extensive experiments on our lab's JSL dataset and a publicly available Arabic sign language (ArSL) dataset. Our results unequivocally demonstrate that our fusion approach significantly enhances JSL recognition accuracy and robustness compared to individual feature sets or traditional recognition methods.
Keywords: Japanese Sign Language (JSL), hand gesture recognition, geometric feature, distance feature, angle feature, GoogleNet
15. Chemical simulation teaching system based on virtual reality and gesture interaction
Authors: Dengzhen LU, Hengyi LI, Boyu QIU, Siyuan LIU, Shuhan QI 《虚拟现实与智能硬件(中英文)》 EI, 2024(2): 148-168
Background: Most existing chemical experiment teaching systems lack solid immersive experiences, making it difficult to engage students. To address these challenges, we propose a chemical simulation teaching system based on virtual reality and gesture interaction. Methods: The parameters of the models were obtained through actual investigation, whereby Blender and 3DS MAX were used to model and import these parameters into a physics engine. By establishing an interface for the physics engine, gesture interaction hardware, and a virtual reality (VR) helmet, a highly realistic chemical experiment environment was created. Using code script logic, particle systems, and other systems, chemical phenomena were simulated. Furthermore, we created an online teaching platform using streaming media and databases to address the problems of distance teaching. Results: The proposed system was evaluated against two mainstream products on the market. In the experiments, the proposed system outperformed the other products in terms of fidelity and practicality. Conclusions: The proposed system, which offers realistic simulation and practicability, can help improve high school chemistry experimental education.
Keywords: chemical experiment simulation, gesture interaction, virtual reality, model establishment, process control, streaming media, database
16. Virtual Keyboard: A Real-Time Hand Gesture Recognition-Based Character Input System Using LSTM and MediaPipe Holistic
Authors: Bijon Mallik, Md Abdur Rahim, Abu Saleh Musa Miah, Keun Soo Yun, Jungpil Shin 《Computer Systems Science & Engineering》, 2024(2): 555-570
In the digital age, non-touch communication technologies are reshaping human-device interactions and raising security concerns. A major challenge in current technology is the misinterpretation of gestures by sensors and cameras, often caused by environmental factors. This issue has spurred the need for advanced data processing methods to achieve more accurate gesture recognition and predictions. Our study presents a novel virtual keyboard allowing character input via distinct hand gestures, focusing on two key aspects: hand gesture recognition and character input mechanisms. We developed a novel model with LSTM and fully connected layers for enhanced sequential data processing and hand gesture recognition. We also integrated CNN, max-pooling, and dropout layers for improved spatial feature extraction. This model architecture processes both the temporal and spatial aspects of hand gestures, using LSTM to extract complex patterns from frame sequences for a comprehensive understanding of the input data. Our unique dataset, essential for training the model, includes 1,662 landmarks from dynamic hand gestures, 33 postures, and 468 face landmarks, all captured in real time using advanced pose estimation. The model demonstrated high accuracy, achieving 98.52% in hand gesture recognition and over 97% in character input across different scenarios. Its excellent performance in real-time testing underlines its practicality and effectiveness, marking a significant advancement in enhancing human-device interactions in the digital age.
Keywords: hand gesture recognition, MediaPipe Holistic, OpenCV, virtual keyboard, LSTM, human-computer interaction
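The recurrence at the heart of such an LSTM layer can be sketched as a single-cell step in NumPy. The gate ordering and weight shapes below follow a common convention and are assumptions, not details taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step.
    x: input (D,); h, c: previous hidden/cell state (H,);
    W: (4H, D), U: (4H, H), b: (4H,) with gates stacked as
    [input, forget, candidate, output]."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:H])           # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    g = np.tanh(z[2 * H:3 * H])  # candidate cell update
    o = sigmoid(z[3 * H:])       # output gate
    c_new = f * c + i * g        # cell state mixes old memory with new input
    h_new = o * np.tanh(c_new)   # hidden state exposed to the next layer
    return h_new, c_new
```

A framework unrolls this step over the landmark sequence of each video; the resulting hidden states then feed the fully connected classification layers described in the abstract.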
Design and Implementation of Hand Gesture Detection System Using HM Model for Sign Language Recognition Development
17
作者 Sharmin Akter Milu Azmath Fathima +2 位作者 Tanmay Talukder Inzamamul Islam Md. Ismail Siddiqi Emon 《Journal of Data Analysis and Information Processing》 2024年第2期139-150,共12页
Gesture detection is the primary and most significant step for sign language detection and sign language is the communication medium for people with speaking and hearing disabilities. This paper presents a novel metho... Gesture detection is the primary and most significant step for sign language detection and sign language is the communication medium for people with speaking and hearing disabilities. This paper presents a novel method for dynamic hand gesture detection using Hidden Markov Models (HMMs) where we detect different English alphabet letters by tracing hand movements. The process involves skin color-based segmentation for hand isolation in video frames, followed by morphological operations to enhance image trajectories. Our system employs hand tracking and trajectory smoothing techniques, such as the Kalman filter, to monitor hand movements and refine gesture paths. Quantized sequences are then analyzed using the Baum-Welch Re-estimation Algorithm, an HMM-based approach. A maximum likelihood classifier is used to identify the most probable letter from the test sequences. Our method demonstrates significant improvements over traditional recognition techniques in real-time, automatic hand gesture recognition, particularly in its ability to distinguish complex gestures. The experimental results confirm the effectiveness of our approach in enhancing gesture-based sign language detection to alleviate the barrier between the deaf and hard-of-hearing community and general people. 展开更多
Keywords: hand gesture recognition system
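The maximum-likelihood classification step described above can be sketched with the scaled forward algorithm: each letter gets its own HMM, and the quantized movement sequence is assigned to whichever model gives it the highest likelihood. The two toy models below are hypothetical; in the real system their parameters would come from Baum-Welch re-estimation on training trajectories:

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | model) for one discrete HMM."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

# Two-state toy models over 4 quantized movement-direction symbols
# (illustrative parameters, not trained values).
pi = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2],
              [0.2, 0.8]])
models = {
    "C": (pi, A, np.array([[0.7, 0.1, 0.1, 0.1],
                           [0.1, 0.7, 0.1, 0.1]])),
    "L": (pi, A, np.array([[0.1, 0.1, 0.7, 0.1],
                           [0.1, 0.1, 0.1, 0.7]])),
}

def classify(obs):
    """Pick the letter whose HMM assigns the observation the highest likelihood."""
    return max(models, key=lambda k: forward_log_likelihood(obs, *models[k]))

print(classify([0, 1, 0, 1, 1]))  # 'C' -- symbols 0/1 dominate model C's emissions
```

Training would replace the hand-written `A` and emission matrices with Baum-Welch estimates per letter; the classification step itself stays exactly this `argmax` over per-model likelihoods.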
Home Automation-Based Health Assessment Along Gesture Recognition via Inertial Sensors
18
Authors: Hammad Rustam, Muhammad Muneeb, Suliman A. Alsuhibany, Yazeed Yasin Ghadi, Tamara Al Shloul, Ahmad Jalal, Jeongmin Park — Computers, Materials & Continua (SCIE, EI), 2023, No. 4, pp. 2331-2346 (16 pages)
Hand gesture recognition (HGR) is used in numerous applications, including medical healthcare, industry, and sports. We have developed a real-time hand gesture recognition system using inertial sensors for smart home applications. Such a model particularly benefits the medical health field (e.g., elderly or disabled people). Home automation has also proven to be a tremendous benefit for the elderly and disabled: residents adopt smart homes for comfort, luxury, improved quality of life, and protection against intrusion and burglary. This paper proposes a novel system that uses principal component analysis and linear discriminant analysis for feature extraction, and a random forest as the classifier, to improve HGR accuracy. We have achieved an accuracy of 94% on a publicly benchmarked HGR dataset. The proposed system can be used to detect hand gestures in the healthcare industry as well as in the industrial and educational sectors.
Keywords: genetic algorithm; human locomotion; activity recognition; human-computer interaction; human gesture recognition; hand gesture recognition; inertial sensors; principal component analysis; linear discriminant analysis; stochastic neighbor embedding
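The PCA + LDA + random forest pipeline described above can be sketched with scikit-learn. The synthetic data stands in for the inertial-sensor features, since the benchmark dataset and its dimensionality are not specified here; all sizes are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Stand-in for inertial-sensor gesture features: 400 windows, 24 features,
# 4 gesture classes (illustrative sizes, not the paper's dataset).
X, y = make_classification(n_samples=400, n_features=24, n_informative=10,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# PCA decorrelates and compresses; LDA projects onto class-separating axes
# (at most n_classes - 1 of them); random forest does the final classification.
clf = make_pipeline(
    PCA(n_components=12),
    LinearDiscriminantAnalysis(n_components=3),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The 94% figure in the abstract comes from the authors' benchmark dataset; on this synthetic stand-in the number will differ, but the pipeline shape is the same.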
Performance of Forearm FMG for Estimating Hand Gestures and Prosthetic Hand Control (cited by 2)
19
Authors: Nguon Ha, Gaminda Pankaja Withanachchi, Yimesker Yihun — Journal of Bionic Engineering (SCIE, EI, CSCD), 2019, No. 1, pp. 88-98 (11 pages)
This study is aimed at exploring the prediction of various hand gestures based on Force Myography (FMG) signals generated through piezoelectric sensors banded around the forearm. In the study, muscle extensions and contractions during specific movements were mapped and interpreted, and a control algorithm was established to allow predefined grips and individual finger movements. Decision Tree Learning (DTL) and Support Vector Machine (SVM) methods were used for classification and model recognition. Both estimated models achieved an average accuracy above 80.0% for predicting grasping, pinching, left flexion, and wrist rotation. As the classification showed distinct features in the signal, a real-time threshold-based control system was implemented in a prosthetic hand. Hand motion was also recorded through a Virtual Motion Glove (VMD) to establish the dynamic relationship between the FMG data and the different hand gestures. The classification of hand gestures based on FMG signals provides a useful foundation for future research in the interfacing and utilization of medical devices.
Keywords: force myography (FMG); surface electromyography (sEMG); prosthetic hand; gesture prediction and classification; bionic robot
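The real-time threshold-based control described above can be sketched as a mapping from windowed FMG channel activations to coarse prosthetic commands. The channel layout, thresholds, and command names below are hypothetical, since the paper's calibration values are not given here:

```python
import numpy as np

# Threshold-style trigger sketch for FMG band signals. The 3-channel layout,
# the 0.6 threshold, and the command names are illustrative assumptions.
def classify_window(window, threshold=0.6):
    """Map a (channels,) mean-activation vector to a coarse command."""
    active = window > threshold
    if active[0] and active[1]:
        return "grasp"
    if active[0]:
        return "pinch"
    if active[2]:
        return "wrist_rotate"
    return "rest"

# A short stream of per-window mean activations (one row per window).
stream = np.array([
    [0.8, 0.7, 0.1],  # both flexor channels high -> grasp
    [0.9, 0.2, 0.1],  # channel 0 only          -> pinch
    [0.1, 0.2, 0.9],  # channel 2 high           -> wrist rotation
    [0.1, 0.1, 0.2],  # all below threshold      -> rest
])
commands = [classify_window(w) for w in stream]
print(commands)  # ['grasp', 'pinch', 'wrist_rotate', 'rest']
```

The DTL/SVM classifiers in the paper replace this hand-written rule table with learned decision boundaries, but the windowed-activation-to-command structure is the same.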
Dynamic and combined gestures recognition based on multi-feature fusion in a complex environment (cited by 2)
20
Authors: Wang Liang, Liu Guixi, Duan Hongyan — The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2015, No. 2, pp. 81-88 (8 pages)
Gesture recognition is of great importance to intelligent human-computer interaction technology, but it is also very difficult, especially when the environment is complex. In this paper, a recognition algorithm for dynamic and combined gestures based on multi-feature fusion is proposed. First, in the image segmentation stage, the algorithm extracts the region of interest for gestures in the color and depth maps by combining them with depth information. Then, to establish a support vector machine (SVM) model for static hand gesture recognition, the algorithm fuses weighted Hu invariant moments of the depth map with the histogram of oriented gradients (HOG) of the color image. Finally, a hidden Markov model (HMM) toolbox supporting multi-dimensional continuous input is adopted for training and recognition. Experimental results show that the proposed algorithm not only overcomes interference from skin-colored objects, multiple moving objects, and background hand gestures, but also runs in real time and is practical for human-computer interaction.
Keywords: gesture recognition; weighted Hu moments; HOG; SVM; HMM
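The feature-level fusion described above can be sketched as appending weighted, log-scaled Hu invariants (from the depth map) to the color image's HOG descriptor before it reaches the SVM. The weight, descriptor sizes, and sample values below are illustrative, not the paper's settings:

```python
import numpy as np

# Feature-level fusion sketch: weighted Hu moments of the depth map appended
# to the color image's HOG descriptor. hu_weight, vector sizes, and the
# sample values are illustrative assumptions.
def fuse_features(hog_vec, hu_vec, hu_weight=10.0):
    """Log-scale the Hu invariants, weight them, concatenate with HOG."""
    # Hu moments span many orders of magnitude, so a signed log10 transform
    # is commonly applied before they are mixed with other features.
    hu_log = -np.sign(hu_vec) * np.log10(np.abs(hu_vec) + 1e-12)
    fused = np.concatenate([hog_vec, hu_weight * hu_log])
    return fused / (np.linalg.norm(fused) + 1e-12)  # unit-normalize for the SVM

hog = np.random.default_rng(1).random(36)  # e.g., one HOG block descriptor
hu = np.array([2.1e-3, 4.5e-6, 1.2e-8, 3.3e-9, -7.0e-17, 2.4e-11, 1.1e-16])
v = fuse_features(hog, hu)
print(v.shape)  # (43,) -- 36 HOG values + 7 Hu invariants
```

The fused vector is what the static-gesture SVM would consume; the HMM stage then operates on the resulting per-frame classifications over time.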