Journal Articles
16 articles found
Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification
1
Authors: Jungpil Shin, Md. Al Mehedi Hasan, Abu Saleh Musa Miah, Kota Suzuki, Koki Hirooka. 《Computer Modeling in Engineering & Sciences》, SCIE/EI, 2024, No. 6, pp. 2605-2625 (21 pages)
Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body movements within JSL gestures; simultaneously, a deep learning-based transfer learning stream captures hierarchical representations of JSL gestures in the second stream. We then concatenate the critical information of the first stream with the hierarchical features of the second stream to produce multi-level fusion features, aiming to create a comprehensive representation of the JSL gestures. After reducing the dimensionality of the features, a feature selection approach and a kernel-based support vector machine (SVM) were used for classification. To assess the effectiveness of our approach, we conducted extensive experiments on our Lab JSL dataset and a publicly available Arabic Sign Language (ArSL) dataset. Our results unequivocally demonstrate that our fusion approach significantly enhances JSL recognition accuracy and robustness compared to individual feature sets or traditional recognition methods.
Keywords: Japanese Sign Language (JSL); hand gesture recognition; geometric feature; distance feature; angle feature; GoogleNet
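The fusion pipeline summarized in this abstract (handcrafted skeleton features plus deep transfer-learning features, dimensionality reduction, feature selection, and a kernel SVM) can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration rather than the authors' code: the feature arrays, their sizes, the number of classes, and the PCA/SelectKBest/SVC settings are all assumptions, with random arrays standing in for both feature streams.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical pre-extracted features: one row per gesture sample.
rng = np.random.default_rng(0)
n_samples = 200
skeleton_feats = rng.normal(size=(n_samples, 60))    # handcrafted joint/distance/angle features
deep_feats = rng.normal(size=(n_samples, 1024))      # pixel-based CNN embeddings (e.g., from a pretrained net)
labels = rng.integers(0, 10, size=n_samples)         # 10 hypothetical JSL classes

# Fuse the two streams by simple concatenation.
fused = np.hstack([skeleton_feats, deep_feats])

# Dimensionality reduction -> feature selection -> kernel SVM.
clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=100),
    SelectKBest(f_classif, k=50),
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
print("CV accuracy:", cross_val_score(clf, fused, labels, cv=5).mean())
```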
Recent Advances on Deep Learning for Sign Language Recognition
2
Authors: Yanqiong Zhang, Xianwei Jiang. 《Computer Modeling in Engineering & Sciences》, SCIE/EI, 2024, No. 6, pp. 2399-2450 (52 pages)
Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer model for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR. These challenges include expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
Keywords: sign language recognition; deep learning; artificial intelligence; computer vision; gesture recognition
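As a rough illustration of the CNN plus RNN route to isolated sign recognition mentioned in this survey, the hypothetical PyTorch sketch below applies a small per-frame CNN and an LSTM over the frame embeddings; the architecture, layer sizes, and class count are placeholders and do not come from any surveyed paper.

```python
import torch
import torch.nn as nn

class IsolatedSignNet(nn.Module):
    """Per-frame CNN features followed by an LSTM over the frame sequence."""
    def __init__(self, num_classes: int = 50):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        frames = clips.reshape(b * t, c, h, w)
        feats = self.cnn(frames).flatten(1).reshape(b, t, -1)
        _, (hidden, _) = self.rnn(feats)
        return self.head(hidden[-1])          # class logits per clip

logits = IsolatedSignNet()(torch.randn(2, 16, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 50])
```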
Active Appearance Model Based Hand Gesture Recognition (Cited: 1)
3
Authors: 滕晓龙, 于威威, 刘重庆. 《Journal of Donghua University (English Edition)》, EI/CAS, 2005, No. 4, pp. 67-71 (5 pages)
This paper addresses the application of hand gesture recognition in monocular image sequences using the Active Appearance Model (AAM). The proposed algorithm consists of constructing AAMs and fitting the models to the region of interest. In the training stage, the AAM is constructed from manually labeled feature points and the corresponding average features are obtained. In the recognition stage, the hand gesture region of interest is first segmented using skin-color and motion cues; the models are then fitted to the image containing the hand gesture and the relevant features are extracted; finally, classification is performed by comparing the extracted features with the average features. 30 different gestures of Chinese Sign Language are used to test the effectiveness of the method, and the experimental results indicate good performance of the algorithm.
Keywords: human-machine interaction; hand gesture recognition; AAM; sign language
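The final step described in this abstract, comparing features extracted from the fitted AAM against stored average features, amounts to a nearest-mean classifier. A minimal sketch of that step follows; the AAM fitting itself is assumed to have produced the feature vectors already, and all data and dimensions here are hypothetical.

```python
import numpy as np

def train_average_features(features: np.ndarray, labels: np.ndarray) -> dict:
    """Compute the mean feature vector for each gesture class (the 'average feature')."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(feature: np.ndarray, averages: dict) -> int:
    """Assign the class whose average feature is closest in Euclidean distance."""
    return min(averages, key=lambda c: np.linalg.norm(feature - averages[c]))

# Hypothetical AAM parameter vectors for 30 gesture classes, 5 training samples each.
rng = np.random.default_rng(1)
centers = rng.normal(scale=5.0, size=(30, 20))
train_x = np.vstack([c + rng.normal(size=(5, 20)) for c in centers])
train_y = np.repeat(np.arange(30), 5)

averages = train_average_features(train_x, train_y)
print(classify(centers[7] + rng.normal(size=20), averages))  # most likely 7
```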
Hand Gesture Recognition Approach for ASL Language Using Hand Extraction Algorithm
4
Authors: Alhussain Akoum, Nour Al Mawla. 《Journal of Software Engineering and Applications》, 2015, No. 8, pp. 419-430 (12 pages)
In a general overview, sign language is a technique used by deaf people for communication. It is a three-dimensional language that relies on visual gestures and moving hand signs that represent letters and words. Gesture recognition has long been a challenging subject of interest at both the academic and applied levels. The core objective of this system is to produce a method that can identify detailed human gestures and use them either to convey one's thoughts and feelings or for device control. This system will stand as an effective replacement for speech, enhancing the individual's ability to express themselves and interact in society. In this paper, we discuss the different steps used to input, recognize, and analyze hand gestures, transforming them into both written words and audible speech. Each step is an independent algorithm with its own variables and conditions.
Keywords: hand gesture; American Sign Language; gesture analysis; edge detection; correlation; background modeling
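The stages named in this abstract (background modeling, hand extraction, edge detection, and correlation against letter templates) could be prototyped roughly as below with OpenCV; the file names, threshold values, and template set are hypothetical placeholders rather than the paper's actual settings.

```python
import cv2
import numpy as np

def extract_hand(frame: np.ndarray, background: np.ndarray, thresh: int = 30) -> np.ndarray:
    """Background subtraction followed by edge detection on the foreground (hand) region."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    bg_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    mask = (cv2.absdiff(gray, bg_gray) > thresh).astype(np.uint8) * 255
    hand = cv2.bitwise_and(gray, gray, mask=mask)   # keep only foreground pixels
    return cv2.Canny(hand, 100, 200)

def best_letter(edge_img: np.ndarray, templates: dict) -> str:
    """Pick the letter whose edge template has the highest normalized cross-correlation."""
    scores = {letter: cv2.matchTemplate(edge_img, tmpl, cv2.TM_CCORR_NORMED).max()
              for letter, tmpl in templates.items()}
    return max(scores, key=scores.get)

# Hypothetical usage; 'background.png', 'frame.png', and the template images are placeholders.
# background = cv2.imread("background.png"); frame = cv2.imread("frame.png")
# templates = {"A": cv2.imread("A_edges.png", cv2.IMREAD_GRAYSCALE)}
# print(best_letter(extract_hand(frame, background), templates))
```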
An Efficient Framework for Indian Sign Language Recognition Using Wavelet Transform
5
Authors: Mathavan Suresh Anand, Nagarajan Mohan Kumar, Angappan Kumaresan. 《Circuits and Systems》, 2016, No. 8, pp. 1874-1883 (10 pages)
Hand gesture recognition is considered a more intuitive and proficient tool for human-computer interaction, with applications including virtual prototyping, sign language analysis, and medical training. In this paper, an efficient Indian Sign Language Recognition (ISLR) system is proposed for deaf and dumb people using hand gesture images. The proposed ISLR system is treated as a pattern recognition problem with two important modules: feature extraction and classification. The joint use of Discrete Wavelet Transform (DWT)-based feature extraction and a nearest-neighbour classifier is used to recognize the sign language. The experimental results show that the proposed hand gesture recognition system achieves a maximum classification accuracy of 99.23% when using the cosine distance classifier.
Keywords: hand gesture; sign language recognition; thresholding; wavelet transform; nearest neighbour classifier
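A rough sketch of the DWT-feature plus nearest-neighbour idea is given below, using PyWavelets and scikit-learn; the wavelet, decomposition level, image size, and class structure are assumptions, not the paper's exact configuration.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def dwt_features(image: np.ndarray, wavelet: str = "db1", level: int = 2) -> np.ndarray:
    """Use the coarse approximation coefficients of a 2-D DWT as the feature vector."""
    approx = pywt.wavedec2(image, wavelet, level=level)[0]
    return approx.ravel()

# Hypothetical 64x64 grayscale gesture images: 3 classes x 10 samples.
rng = np.random.default_rng(2)
images = rng.random((30, 64, 64))
labels = np.repeat(np.arange(3), 10)

features = np.array([dwt_features(img) for img in images])
knn = KNeighborsClassifier(n_neighbors=1, metric="cosine").fit(features, labels)
print(knn.predict(features[:3]))
```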
Intelligent Sign Multi-Language Real-Time Prediction System with Effective Data Preprocessing
6
Authors: Doaa E. Elmatary, Doaa M. Maher, Areeg Tarek Ibrahim. 《Journal of Computer and Communications》, 2023, No. 10, pp. 120-134 (15 pages)
A multidisciplinary approach for developing an intelligent sign multi-language recognition system to greatly enhance deaf-mute communication will be discussed and implemented. This involves designing a low-cost glove-based sensing system, collecting large and diverse datasets, preprocessing the data, and using efficient machine learning models. Furthermore, the glove is integrated with a user-friendly mobile application called "Life-sign" for this system. The main goal of this work is to minimize the processing time of machine learning classifiers while maintaining high accuracy. This is achieved by using effective preprocessing algorithms to handle noisy and inconsistent data. Testing and iteration have been applied to various classifiers to refine and improve their accuracy in the recognition process. The Extra Trees (ET) classifier has been identified as the best algorithm, with results showing successful gesture prediction at an average accuracy of about 99.54%. A smart optimization feature has been implemented to control the size of data transferred via Bluetooth, allowing for fast recognition of consecutive gestures. Real-time performance has been measured through extensive experimental testing on various consecutive gestures, specifically for Arabic Sign Language (ArSL). The results demonstrate that the system guarantees consecutive gesture recognition with a delay as low as 50 milliseconds.
Keywords: hand gesture translator; sign multi-language; machine learning models; deaf-mute community
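The classification stage described here, an Extra Trees model over preprocessed glove-sensor readings, can be sketched with scikit-learn as follows; the sensor count, class count, and data are invented for illustration (random noise, so the printed accuracy is not meaningful), and the real system's preprocessing and Bluetooth handling are not reproduced.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical glove readings: 10 sensor channels per gesture sample, 5 gesture classes.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10))
y = rng.integers(0, 5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("Accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```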
Sign Language Synthesis in Multi-function Perception Machine
7
Authors: 徐琳 (XU Lin), 高文 (GAO Wen). 《Journal of Harbin Institute of Technology (New Series)》, EI/CAS, 1997, No. 2, pp. 49-54 (6 pages)
Sign Language Synthesis in Multi-function Perception Machine. XU Lin, GAO Wen (徐琳, 高文), Dept. of Computer Science and Engineering, Harbin Institute of Technology.
Keywords: human language; body language; sign language; hand gesture model; sign language model; sign language synthesis
Optimization of a Leap Motion-Based Sign Language Recognition Algorithm (Cited: 2)
8
Authors: 杜淑颖, 何望. 《软件》, 2023, No. 8, pp. 9-14 (6 pages)
Data produced by the Leap Motion device can be used for gesture recognition in a virtual environment: by detecting and tracking the user's hands, a virtual 3D hand model is generated, from which gesture information is obtained. This paper designs a system that uses a Hidden Markov Model (HMM) classification algorithm to learn the gesture information acquired from the Leap Motion; assigning different weights to gesture features according to their importance further improves classification accuracy and enables recognition-based input of sign language. Test results show a recognition accuracy of 86.1% and a sign-language typing speed of 13.09 characters per minute, which can significantly improve the convenience of communication between deaf-mute and hearing people.
Keywords: Leap Motion; gesture recognition; hidden Markov model; sign language translation
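A minimal sketch (not the paper's implementation) of HMM-based gesture classification over Leap Motion feature sequences, training one Gaussian HMM per gesture class with the hmmlearn package; the feature dimensionality, number of hidden states, and the simple per-feature weighting shown here are assumptions standing in for the weighting scheme the abstract describes.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

FEATURE_WEIGHTS = np.array([2.0, 1.0, 1.0, 0.5])  # hypothetical per-feature importance weights

def train_models(sequences_by_class: dict, n_states: int = 3) -> dict:
    """Fit one Gaussian HMM per gesture class on its weighted feature sequences."""
    models = {}
    for label, seqs in sequences_by_class.items():
        weighted = [s * FEATURE_WEIGHTS for s in seqs]
        lengths = [len(s) for s in weighted]
        models[label] = GaussianHMM(n_components=n_states, n_iter=20).fit(
            np.vstack(weighted), lengths)
    return models

def classify(sequence: np.ndarray, models: dict):
    """Return the class whose HMM gives the highest log-likelihood."""
    seq = sequence * FEATURE_WEIGHTS
    return max(models, key=lambda label: models[label].score(seq))

# Hypothetical data: 2 gesture classes, 5 sequences each of 30 frames x 4 features.
rng = np.random.default_rng(4)
data = {c: [rng.normal(loc=c, size=(30, 4)) for _ in range(5)] for c in (0, 1)}
models = train_models(data)
print(classify(rng.normal(loc=1, size=(30, 4)), models))
```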
A Vision-Based Chinese Sign Language Recognition System for Medium Vocabularies (Cited: 11)
9
Authors: 张良国, 高文, 陈熙霖, 陈益强, 王春立. 《计算机研究与发展》, EI/CSCD/PKU Core, 2006, No. 3, pp. 476-482 (7 pages)
Research on sign language recognition has significant academic value and broad application prospects. A tied-mixture hidden Markov model (TMHMM) is proposed for vision-based sign language recognition. The modeling precision of TMHMM is close to that of continuous HMMs, so the final recognition rate does not drop noticeably, while mixture tying reduces the computational cost and effectively improves recognition speed. For feature extraction, the proposed hierarchical feature description scheme is better suited to sign language recognition with medium or larger vocabularies. On this basis, by integrating robust two-hand detection, background removal, and pupil detection, a vision-based Chinese Sign Language recognition system for medium vocabularies is implemented. Experimental results show that the proposed method achieves good medium-vocabulary recognition against ordinary backgrounds.
Keywords: sign language recognition; hidden Markov model; human-computer interaction; computer vision
A Survey of Dynamic Hand Gesture Recognition Using Computer Vision (Cited: 11)
10
Authors: 张国亮, 王展妮, 王田. 《华侨大学学报(自然科学版)》, CAS/PKU Core, 2014, No. 6, pp. 653-658 (6 pages)
This paper systematically reviews the current state of computer-vision-based dynamic gesture recognition from several perspectives: the overall system framework, gesture segmentation, gesture modeling and analysis, and gesture recognition. It analyzes existing shortcomings and raises questions for further research. The results indicate that gesture recognition based on simple wearable devices, on depth vision sensors, and on the cross-fusion of multiple methods will be the future trends in this field.
Keywords: human-computer interaction; gesture recognition; computer vision; gesture model; hidden Markov model
Vision-Based Hand Gesture Recognition Technology (Cited: 24)
11
Authors: 孙丽娟, 张立材, 郭彩龙. 《计算机技术与发展》, 2008, No. 10, pp. 214-216, 221 (4 pages)
In recent years computers have become part of everyday life, and human-computer interaction has increasingly become a research focus. Vision-based hand gesture recognition is a key technology indispensable for the next generation of human-computer interaction, and research on gesture recognition also advances sign language recognition, helping to remove communication barriers between hearing and deaf-mute people and enabling the latter to lead normal lives and take part in social activities. This paper reviews the development of gesture recognition methods and their technical difficulties, and describes the principles and components of vision-based gesture recognition systems, gesture modeling, and the techniques commonly used in gesture recognition.
Keywords: human-computer interaction; gesture recognition; sign language recognition; gesture model
A Recognition System for One-Handed Chinese Sign Language Words (Cited: 5)
12
Authors: 邹伟, 原魁, 臧爱云, 张海波. 《系统仿真学报》, CAS/CSCD, 2003, No. 2, pp. 290-293 (4 pages)
Using a data glove, a vision device, and an elbow-bend sensor as input devices, this paper proposes a recognition system for one-handed Chinese Sign Language words based on feature matching and information fusion. First, the type of the hand posture and hand gesture information is used to determine the category of the word to be recognized and to select the corresponding lexicon; then, according to that category, the features of the candidate word are matched against those of the words in the lexicon under a set of rules; finally, Dempster-Shafer (D-S) evidence theory is used to fuse the matching results of the individual features, and the word with the largest basic probability assignment is selected as the output. Experimental results show that the system not only offers good real-time performance and flexibility but also achieves a relatively high recognition rate.
Keywords: one-handed Chinese Sign Language word recognition system; gesture information; information processing; hand posture information; feature matching; sign language recognition; D-S evidence theory; computer
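The fusion step described in this abstract, combining per-feature matching results with Dempster-Shafer evidence theory and outputting the word with the largest basic probability assignment, can be illustrated with a small sketch; the hypothesis sets and mass values below are invented for illustration.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two basic probability assignments over frozenset hypotheses (Dempster's rule)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Hypothetical evidence from two feature matchers over candidate words "w1" and "w2".
m_posture = {frozenset({"w1"}): 0.6, frozenset({"w2"}): 0.1, frozenset({"w1", "w2"}): 0.3}
m_motion = {frozenset({"w1"}): 0.5, frozenset({"w2"}): 0.3, frozenset({"w1", "w2"}): 0.2}

fused = dempster_combine(m_posture, m_motion)
best = max((h for h in fused if len(h) == 1), key=fused.get)   # singleton with largest BPA
print(best, fused[best])
```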
Appearance-Based Recognition of Dynamic Isolated Hand Gestures (Cited: 15)
13
Authors: 祝远新, 徐光祐, 黄浴. 《软件学报》, EI/CSCD/PKU Core, 2000, No. 1, pp. 54-61 (8 pages)
An appearance-based technique for recognizing dynamic isolated hand gestures is presented. With the help of a variable-order parametric model of image motion and robust regression analysis, an image motion estimation method based on motion segmentation is proposed. From the image motion parameters, two appearance-change models are constructed as the appearance features of a gesture; a max-min optimization algorithm is used to build gesture reference templates, and template-based classification is used for recognition. Extensive experiments on 120 gesture samples show that this dynamic isolated gesture recognition technique offers high recognition rates, low computational cost, and good algorithmic stability.
Keywords: computer vision; gesture recognition; image motion model; appearance
Hand Gesture Recognition Based on a Multi-Scale Shape Descriptor (Cited: 3)
14
Authors: 杨筱林, 姚鸿勋. 《计算机工程与应用》, CSCD/PKU Core, 2004, No. 32, pp. 76-78, 105 (4 pages)
With improving computer performance and advances in human-computer interaction technology, hand gesture recognition has attracted increasing attention, especially vision-based gesture recognition, which makes interaction more convenient. However, the human hand is a complex deformable object, and existing methods do not describe its deformation during motion sufficiently well. This paper proposes a new vision-based method for gesture modeling: a multi-scale shape descriptor. Starting from an analysis of the basic shape of the hand, it exploits the axial and central symmetry of circles and is therefore invariant to rotation and scale. The descriptor characterizes the gesture shape at multiple scales and, to a certain extent, solves the problem of fine-grained gesture discrimination.
Keywords: vision; gesture modeling; gesture recognition; sign language
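The abstract does not spell out the descriptor, but the general idea of describing a hand shape at several scales in a rotation- and scale-invariant way can be sketched as follows: radial distances from the contour centroid are normalized for scale and summarized as order-free histograms at multiple resolutions. This is an illustrative stand-in under those assumptions, not the paper's descriptor.

```python
import numpy as np

def multiscale_radial_descriptor(contour: np.ndarray, scales=(8, 16, 32)) -> np.ndarray:
    """Histogram normalized centroid distances of a 2-D contour at several bin resolutions."""
    centroid = contour.mean(axis=0)
    radii = np.linalg.norm(contour - centroid, axis=1)
    radii = radii / radii.max()                      # scale invariance
    parts = [np.histogram(radii, bins=b, range=(0, 1), density=True)[0] for b in scales]
    return np.concatenate(parts)

# Hypothetical contour: noisy circle with a protrusion standing in for a finger.
rng = np.random.default_rng(5)
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r = 1.0 + 0.4 * (np.abs(theta - np.pi) < 0.3) + 0.02 * rng.normal(size=theta.size)
contour = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
print(multiscale_radial_descriptor(contour).shape)   # (56,)
```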
Hand Gesture Segmentation Based on Skin-Color Detection (Cited: 7)
15
Authors: 范保玲, 王民, 董颖娣. 《计算机技术与发展》, 2008, No. 3, pp. 105-108 (4 pages)
Sign language is the language used by deaf-mute people. More than twenty years after gesture input devices first appeared, many people still find interactive communication with deaf-mute people difficult, so effort should be made to adapt computers to natural modes of communication. The aim of this paper is to use a skin-color detection procedure to extract meaningful gesture regions from the whole image and to handle gestures under different indoor backgrounds and lighting conditions, in preparation for subsequent sign language recognition. The detection process is evaluated experimentally on several static gestures and the results are discussed.
Keywords: skin-color detection; gesture segmentation; color model; sign language
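A common way to prototype the kind of skin-color segmentation described above is fixed thresholding in the YCrCb color space with OpenCV; the threshold values below are generic textbook ranges, not the ones used in this paper, and the input file name is a placeholder.

```python
import cv2
import numpy as np

def skin_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of likely skin pixels using fixed YCrCb thresholds."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)    # assumed generic skin range
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological opening to suppress speckle from background and lighting changes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Hypothetical usage with a placeholder file name:
# frame = cv2.imread("gesture.jpg")
# hand_only = cv2.bitwise_and(frame, frame, mask=skin_mask(frame))
```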
A Hand Gesture Recognition Method Based on Decision Fusion of Image Features for Drivers' Phone-Call Behavior (Cited: 6)
16
Authors: 程文冬, 马勇, 魏庆媛. 《交通运输工程学报》, EI/CSCD/PKU Core, 2019, No. 4, pp. 171-181 (11 pages)
To robustly detect drivers' phone-call behavior in natural environments, a recognition method for drivers' phone-call gestures is proposed. The Adaboost algorithm is used to detect the driver's face region; in the YCgCr color space, the luminance and chrominance components of the facial skin color are sampled on a sparse grid, from which a Gaussian skin-color model is built. To cope with uneven illumination in the cab, a drift-compensation algorithm for the skin-color components is proposed, yielding an online skin-color model that adapts to illumination changes and accurately segments the skin regions of the left and right hands. The HOG algorithm extracts a 2,376-dimensional HOG feature vector from each hand skin region, which PCA reduces to 400 dimensions; in parallel, PZMs (pseudo-Zernike moments) features are extracted from the hand regions, and the Relief algorithm selects the 8 PZMs features with the largest weights. A support vector machine classifier is then built that fuses the PCA-HOG and Relief-PZMs features for the phone-call gesture decision. Experimental results show that the recognition rate with PCA-HOG features is 93.1%, robust to illumination changes but susceptible to interference from hand and head rotation; with Relief-PZMs features it is 91.9%, more tolerant of head and hand pose but less robust to illumination; and with the fusion of PCA-HOG and Relief-PZMs features it reaches 94.5%, adapting well to disturbances such as illumination fluctuation and hand and head rotation.
Keywords: information processing; phone-call gesture recognition; machine vision; skin-color model; HOG features; PZMs features; decision fusion
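The HOG branch of the pipeline described above (HOG features from the segmented hand region, PCA compression, and an SVM decision) can be sketched with scikit-image and scikit-learn as below; the crop size, HOG parameters, PCA dimensionality, and data are assumptions and do not reproduce the paper's 2,376-dimensional setup, skin-color modeling, or the PZMs/Relief branch.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def hog_features(gray_image: np.ndarray) -> np.ndarray:
    """HOG descriptor of a (hypothetically pre-segmented) hand-region crop."""
    return hog(gray_image, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Hypothetical 64x64 hand-region crops: label 1 = phone-call gesture, 0 = other.
rng = np.random.default_rng(6)
crops = rng.random((40, 64, 64))
labels = rng.integers(0, 2, size=40)

X = np.array([hog_features(c) for c in crops])
clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf")).fit(X, labels)
print(clf.predict(X[:5]))
```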