Journal Articles
1,132 articles found
Recent Advances on Deep Learning for Sign Language Recognition
1
Authors: Yanqiong Zhang, Xianwei Jiang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 2399-2450 (52 pages)
Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer model for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR. These challenges include expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
Keywords: sign language recognition, deep learning, artificial intelligence, computer vision, gesture recognition
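The survey above highlights CNNs combined with RNNs for fingerspelling and isolated sign recognition. As a purely illustrative sketch of that pairing (not code from the paper; the layer sizes, 26 classes, and 112x112 frames are assumptions), a small per-frame CNN feeds an LSTM that summarizes the clip:

```python
import torch
import torch.nn as nn

class IsolatedSignNet(nn.Module):
    """Illustrative CNN + LSTM classifier for isolated sign videos.
    Input: (batch, time, 3, 112, 112); output: class logits."""
    def __init__(self, num_classes=26, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-frame feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                   # -> (batch*time, 64, 1, 1)
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, video):
        b, t, c, h, w = video.shape
        feats = self.cnn(video.reshape(b * t, c, h, w)).reshape(b, t, 64)
        _, (h_n, _) = self.lstm(feats)                 # last hidden state summarizes the clip
        return self.head(h_n[-1])

logits = IsolatedSignNet()(torch.randn(2, 16, 3, 112, 112))  # toy batch of 2 clips
print(logits.shape)                                          # torch.Size([2, 26])
```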
Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification
2
Authors: Jungpil Shin, Md. Al Mehedi Hasan, Abu Saleh Musa Miah, Kota Suzuki, Koki Hirooka. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 2605-2625 (21 pages)
Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body movements within JSL gestures. Simultaneously, a deep learning-based transfer learning stream captures hierarchical representations of JSL gestures in the second stream. We then concatenate the critical information of the first stream with the hierarchical features of the second stream to produce multi-level fusion features, aiming to create a comprehensive representation of the JSL gestures. After reducing the dimensionality of the features, a feature selection approach and a kernel-based support vector machine (SVM) were used for classification. To assess the effectiveness of our approach, we conducted extensive experiments on our Lab JSL dataset and a publicly available Arabic sign language (ArSL) dataset. Our results unequivocally demonstrate that our fusion approach significantly enhances JSL recognition accuracy and robustness compared to individual feature sets or traditional recognition methods.
Keywords: Japanese Sign Language (JSL), hand gesture recognition, geometric feature, distance feature, angle feature, GoogleNet
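As a rough illustration of the two-stream fusion described above (not the authors' implementation), the snippet below concatenates stand-in skeleton-based features with stand-in deep features, reduces dimensionality with PCA, and classifies with an RBF-kernel SVM; all dimensions, class counts, and the random data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse_streams(skeleton_feats, deep_feats):
    """Concatenate handcrafted (stream 1) and deep (stream 2) features per sample."""
    return np.concatenate([skeleton_feats, deep_feats], axis=1)

# toy stand-ins for the two streams: 200 samples, 42 joint-based + 1024 deep dims
rng = np.random.default_rng(0)
X = fuse_streams(rng.normal(size=(200, 42)), rng.normal(size=(200, 1024)))
y = rng.integers(0, 10, size=200)                      # 10 hypothetical JSL classes

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=50),              # dimensionality reduction
                    SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X[:150], y[:150])
print("toy accuracy:", clf.score(X[150:], y[150:]))
```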
A Survey on Chinese Sign Language Recognition: From Traditional Methods to Artificial Intelligence
3
Authors: Xianwei Jiang, Yanqiong Zhang, Juan Lei, Yudong Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 7, pp. 1-40 (40 pages)
Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) in the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet), and various deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have also been integrated and applied in specific hybrid models and customized identification methods. Sign language data collection includes acquiring data from data gloves, data sensors (such as Kinect, Leap Motion, etc.), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest among scholars. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Additionally, suitable datasets and evaluation criteria are also worth pursuing.
Keywords: Chinese Sign Language recognition, deep neural networks, artificial intelligence, transfer learning, hybrid network models
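Dynamic Time Warping (DTW) is one of the traditional techniques the survey identifies. For readers unfamiliar with it, here is a minimal, generic DTW distance between two feature sequences; it is a textbook illustration, not code from any surveyed CSLR system.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping between two
    sequences of feature vectors (one vector per row)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])     # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],        # insertion
                                 cost[i, j - 1],        # deletion
                                 cost[i - 1, j - 1])    # match
    return cost[n, m]

# two toy gesture trajectories of different lengths, 4-D features per frame
print(dtw_distance(np.random.rand(30, 4), np.random.rand(45, 4)))
```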
Multi-scale context-aware network for continuous sign language recognition
4
Authors: Senhua Xue, Liqing Gao, Liang Wan, Wei Feng. Virtual Reality & Intelligent Hardware (EI), 2024, Issue 4, pp. 323-337 (15 pages)
The hands and face are the most important parts for expressing sign language morphemes in sign language videos. However, we find that existing Continuous Sign Language Recognition (CSLR) methods either lack the mining of hand and face information in their visual backbones or use expensive and time-consuming external extractors to explore this information. In addition, signs have different lengths, whereas previous CSLR methods typically use a fixed-length window to segment the video to capture sequential features and then perform global temporal modeling, which disturbs the perception of complete signs. In this study, we propose a Multi-Scale Context-Aware network (MSCA-Net) to solve the aforementioned problems. Our MSCA-Net contains two main modules: (1) Multi-Scale Motion Attention (MSMA), which uses the differences among frames to perceive information of the hands and face at multiple spatial scales, replacing heavy feature extractors; and (2) Multi-Scale Temporal Modeling (MSTM), which explores crucial temporal information in the sign language video at different temporal scales. We conduct extensive experiments using three widely used sign language datasets, i.e., RWTH-PHOENIX-Weather-2014, RWTH-PHOENIX-Weather-2014T, and CSL-Daily. The proposed MSCA-Net achieves state-of-the-art performance, demonstrating the effectiveness of our approach.
Keywords: continuous sign language recognition, multi-scale motion attention, multi-scale temporal modeling
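The MSMA module described above builds attention from frame differences at several spatial scales. The toy function below illustrates only that general idea (it is not MSCA-Net): absolute frame differences are pooled at a few assumed scales and converted into a spatial attention map that reweights frame features.

```python
import torch
import torch.nn.functional as F

def multiscale_motion_attention(frames, scales=(1, 2, 4)):
    """frames: (batch, time, channels, H, W). Returns motion-weighted frames.
    Toy illustration of frame-difference attention at several spatial scales."""
    diff = (frames[:, 1:] - frames[:, :-1]).abs().mean(dim=2, keepdim=True)  # (b, t-1, 1, H, W)
    b, t, _, h, w = diff.shape
    maps = []
    for s in scales:
        pooled = F.avg_pool2d(diff.reshape(b * t, 1, h, w), kernel_size=s, stride=s)
        maps.append(F.interpolate(pooled, size=(h, w), mode="bilinear", align_corners=False))
    attn = torch.sigmoid(torch.stack(maps).mean(dim=0)).reshape(b, t, 1, h, w)
    return frames[:, 1:] * attn                       # emphasize moving regions (hands, face)

out = multiscale_motion_attention(torch.randn(2, 8, 3, 56, 56))
print(out.shape)   # torch.Size([2, 7, 3, 56, 56])
```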
Continuous Sign Language Recognition Based on Spatial-Temporal Graph Attention Network (Cited: 2)
5
Authors: Qi Guo, Shujun Zhang, Hui Li. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 3, pp. 1653-1670 (18 pages)
Continuous sign language recognition (CSLR) is challenging due to the complexity of video backgrounds, hand gesture variability, and temporal modeling difficulties. This work proposes a CSLR method based on a spatial-temporal graph attention network to focus on essential features of video series. The method considers local details of sign language movements by taking information on joints and bones as inputs and constructing a spatial-temporal graph to reflect inter-frame relevance and physical connections between nodes. A graph-based multi-head attention mechanism is utilized with adjacency matrix calculation for better local-feature exploration, and short-term motion correlation modeling is completed via a temporal convolutional network. We adopt BLSTM to learn long-term dependence and connectionist temporal classification to align the word-level sequences. The proposed method achieves competitive results regarding word error rate (1.59%) on the Chinese Sign Language dataset and mean Jaccard Index (65.78%) on the ChaLearn LAP Continuous Gesture Dataset.
Keywords: continuous sign language recognition, graph attention network, bidirectional long short-term memory, connectionist temporal classification
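The final stage of the pipeline above couples a BLSTM with connectionist temporal classification (CTC) to align frame-level features with word-level labels. A generic sketch of that stage using PyTorch's standard CTC loss (the vocabulary size, feature dimension, and sequence lengths are assumptions, not the paper's settings):

```python
import torch
import torch.nn as nn

vocab_size, feat_dim, hidden = 1000, 512, 256           # assumed sizes
blstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
classifier = nn.Linear(2 * hidden, vocab_size + 1)       # +1 for the CTC blank token
ctc_loss = nn.CTCLoss(blank=vocab_size, zero_infinity=True)

feats = torch.randn(4, 120, feat_dim)                    # 4 videos, 120 frames of graph features
targets = torch.randint(0, vocab_size, (4, 12))          # 12 glosses per video (toy labels)

out, _ = blstm(feats)
log_probs = classifier(out).log_softmax(-1).transpose(0, 1)   # (time, batch, classes) for CTCLoss
loss = ctc_loss(log_probs,
                targets,
                input_lengths=torch.full((4,), 120, dtype=torch.long),
                target_lengths=torch.full((4,), 12, dtype=torch.long))
loss.backward()                                           # standard CTC training step
print(float(loss))
```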
Joint On-Demand Pruning and Online Distillation in Automatic Speech Recognition Language Model Optimization
6
Authors: Soonshin Seo, Ji-Hwan Kim. Computers, Materials & Continua (SCIE, EI), 2023, Issue 12, pp. 2833-2856 (24 pages)
Automatic speech recognition (ASR) systems have emerged as indispensable tools across a wide spectrum of applications, ranging from transcription services to voice-activated assistants. To enhance the performance of these systems, it is important to deploy efficient models capable of adapting to diverse deployment conditions. In recent years, on-demand pruning methods have received significant attention within the ASR domain due to their adaptability to various deployment scenarios. However, these methods often confront substantial trade-offs, particularly in terms of unstable accuracy when reducing the model size. To address these challenges, this study introduces two crucial empirical findings. Firstly, it proposes the incorporation of an online distillation mechanism during on-demand pruning training, which holds the promise of maintaining more consistent accuracy levels. Secondly, it proposes the utilization of the Mogrifier long short-term memory (LSTM) language model (LM), an advanced iteration of the conventional LSTM LM, as an effective alternative pruning target within the ASR framework. Through rigorous experimentation on an ASR system employing the Mogrifier LSTM LM and training it with the suggested joint on-demand pruning and online distillation method, this study provides compelling evidence. The results show that the proposed methods significantly outperform a benchmark model trained solely with on-demand pruning. Impressively, the proposed configuration successfully reduces the parameter count by approximately 39% while minimizing trade-offs.
Keywords: automatic speech recognition, neural language model, Mogrifier long short-term memory, pruning, distillation, efficient deployment, optimization, joint training
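To make the joint on-demand pruning plus online distillation idea more concrete, here is a schematic training step of my own simplification (not the paper's method): hidden units of a toy LSTM LM are masked to emulate a pruned sub-network, and the narrowed pass is distilled toward the full-width pass within the same update. The loss weight and all sizes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyLM(nn.Module):
    """Toy LSTM language model whose hidden units can be masked to emulate
    an on-demand pruned (narrower) sub-network."""
    def __init__(self, vocab=500, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x, keep_ratio=1.0):
        h, _ = self.lstm(self.emb(x))
        if keep_ratio < 1.0:                       # zero out the "pruned" hidden units
            k = int(h.size(-1) * keep_ratio)
            mask = torch.zeros_like(h)
            mask[..., :k] = 1.0
            h = h * mask
        return self.out(h)

model, alpha = TinyLM(), 0.5                       # alpha weights the distillation term
tokens = torch.randint(0, 500, (8, 20))            # toy batch of token ids
teacher_logits = model(tokens, keep_ratio=1.0)     # full-width "teacher" pass
student_logits = model(tokens, keep_ratio=0.5)     # narrowed "student" pass (on-demand pruning)
ce = F.cross_entropy(student_logits[:, :-1].reshape(-1, 500), tokens[:, 1:].reshape(-1))
kd = F.kl_div(F.log_softmax(student_logits, dim=-1),          # online distillation toward teacher
              F.softmax(teacher_logits.detach(), dim=-1), reduction="batchmean")
(ce + alpha * kd).backward()                       # one joint training step (optimizer omitted)
print(float(ce), float(kd))
```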
A Novel Action Transformer Network for Hybrid Multimodal Sign Language Recognition
7
Authors: Sameena Javaid, Safdar Rizvi. Computers, Materials & Continua (SCIE, EI), 2023, Issue 1, pp. 523-537 (15 pages)
Sign language fills the communication gap for people with hearing and speaking ailments. It includes both visual modalities: manual gestures, consisting of movements of the hands, and non-manual gestures, incorporating body movements of the head, facial expressions, eyes, shoulder shrugging, etc. Previously, the two types of gestures have been detected separately; identifying them separately may yield better accuracy, but much communicational information is lost. A proper sign language mechanism is needed to detect manual and non-manual gestures and convey the appropriate detailed message to others. Our novel proposed system contributes the Sign Language Action Transformer Network (SLATN), localizing hand, body, and facial gestures in video sequences. Here we employ a Transformer-style structural design as a "base network" to extract features from a spatiotemporal domain. The model learns to track individual persons and their action context in multiple frames. Furthermore, a "head network" emphasizes hand movement and facial expression simultaneously, which is often crucial to understanding sign language, using its attention mechanism for creating tight bounding boxes around classified gestures. The model is then compared with traditional activity recognition methods. It not only works faster but also achieves better accuracy. The model achieves an overall 82.66% testing accuracy with a very considerable computational performance of 94.13 Giga Floating-Point Operations per Second (G-FLOPS). Another contribution is a newly created dataset of Pakistan Sign Language Manual and Non-Manual (PkSLMNM) gestures.
Keywords: sign language, gesture recognition, manual signs, non-manual signs, action transformer network
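The "base network" described above is a Transformer-style temporal encoder over per-frame features. A heavily simplified, assumption-laden sketch of that idea (not SLATN itself, and without the head network or bounding-box outputs):

```python
import torch
import torch.nn as nn

# Minimal transformer-style temporal encoder over per-frame features
# (an illustration of the "base network" idea only; sizes and class count are assumed).
d_model, num_classes = 256, 40
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=2)
head = nn.Linear(d_model, num_classes)

frame_feats = torch.randn(2, 32, d_model)           # 2 clips, 32 frames of pooled CNN features
context = encoder(frame_feats)                      # self-attention mixes information across frames
logits = head(context.mean(dim=1))                  # clip-level gesture prediction
print(logits.shape)                                 # torch.Size([2, 40])
```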
Deep Learning-Based Sign Language Recognition for Hearing and Speaking Impaired People
8
Author: Mrim M. Alnfiai. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 5, pp. 1653-1669 (17 pages)
Sign language is mainly utilized in communication with people who have hearing disabilities. Sign language is used to communicate with people having developmental impairments who have some or no interaction skills. Interaction via sign language becomes a fruitful means of communication for hearing- and speech-impaired persons. A hand gesture recognition system is helpful for deaf and dumb people, making use of a human computer interface (HCI) and convolutional neural networks (CNN) for identifying the static indications of Indian Sign Language (ISL). This study introduces a shark smell optimization with deep learning based automated sign language recognition (SSODL-ASLR) model for hearing- and speaking-impaired people. The presented SSODL-ASLR technique majorly concentrates on the recognition and classification of sign language provided by deaf and dumb people. The presented SSODL-ASLR model encompasses a two-stage process, namely sign language detection and sign language classification. In the first stage, the Mask Region-based Convolutional Neural Network (Mask RCNN) model is exploited for sign language recognition. Secondly, the SSO algorithm with a soft margin support vector machine (SM-SVM) model is utilized for sign language classification. To assure the enhanced classification performance of the SSODL-ASLR model, a brief set of simulations was carried out. The extensive results portrayed the supremacy of the SSODL-ASLR model over other techniques.
Keywords: sign language recognition, deep learning, shark smell optimization, Mask RCNN model, disabled people
Deep Learning Approach for Hand Gesture Recognition:Applications in Deaf Communication and Healthcare
9
Authors: Khursheed Aurangzeb, Khalid Javeed, Musaed Alhussein, Imad Rida, Syed Irtaza Haider, Anubha Parashar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 127-144 (18 pages)
Hand gestures have been used as a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI. HGRoc technology is pivotal in healthcare and communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) only limited and common gestures are considered, and (b) processing multiple channels of information across a network takes huge computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network (CNN) model named HVCNNM offers several benefits, notably enhanced accuracy, robustness to variations, real-time performance, reduced channels, and scalability. Additionally, these models can be optimized for real-time performance, learn from large amounts of data, and are scalable to handle complex recognition tasks for efficient human-computer interaction. The proposed model was evaluated on two challenging datasets, namely the Massey University Dataset (MUD) and the American Sign Language (ASL) Alphabet Dataset (ASLAD). On the MUD and ASLAD datasets, HVCNNM achieved scores of 99.23% and 99.00%, respectively. These results demonstrate the effectiveness of CNNs as a promising HGRoc approach. The findings suggest that the proposed model has potential roles in applications such as sign language recognition, human-computer interaction, and robotics.
Keywords: computer vision, deep learning, gait recognition, sign language recognition, machine learning
RoBGP: A Chinese Nested Biomedical Named Entity Recognition Model Based on RoBERTa and Global Pointer
10
Authors: Xiaohui Cui, Chao Song, Dongmei Li, Xiaolong Qu, Jiao Long, Yu Yang, Hanchao Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3603-3618 (16 pages)
Named Entity Recognition (NER) stands as a fundamental task within the field of biomedical text mining, aiming to extract specific types of entities such as genes, proteins, and diseases from complex biomedical texts and categorize them into predefined entity types. This process can provide basic support for the automatic construction of knowledge bases. In contrast to general texts, biomedical texts frequently contain numerous nested entities and local dependencies among these entities, presenting significant challenges to prevailing NER models. To address these issues, we propose a novel Chinese nested biomedical NER model based on RoBERTa and Global Pointer (RoBGP). Our model initially utilizes the RoBERTa-wwm-ext-large pretrained language model to dynamically generate word-level initial vectors. It then incorporates a Bidirectional Long Short-Term Memory network for capturing bidirectional semantic information, effectively addressing the issue of long-distance dependencies. Furthermore, the Global Pointer model is employed to comprehensively recognize all nested entities in the text. We conduct extensive experiments on the Chinese medical dataset CMeEE, and the results demonstrate the superior performance of RoBGP over several baseline models. This research confirms the effectiveness of RoBGP in Chinese biomedical NER, providing reliable technical support for biomedical information extraction and knowledge base construction.
Keywords: biomedicine, knowledge base, named entity recognition, pretrained language model, global pointer
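The Global Pointer component scores every candidate (start, end) span for each entity type, which is what allows nested entities to be recognized. A stripped-down version of that span-scoring idea is shown below; the dimensions are assumptions, the RoBERTa/BiLSTM encoder is replaced by random features, and refinements such as rotary position embeddings are omitted.

```python
import torch
import torch.nn as nn

class GlobalPointerHead(nn.Module):
    """Minimal Global-Pointer-style span scorer: for each entity type, score
    every (start, end) token pair with a dot product of projected states."""
    def __init__(self, hidden=256, num_types=3, head_dim=64):
        super().__init__()
        self.num_types, self.head_dim = num_types, head_dim
        self.proj = nn.Linear(hidden, num_types * head_dim * 2)   # start/end projections

    def forward(self, token_states):                  # (batch, seq, hidden)
        b, n, _ = token_states.shape
        qk = self.proj(token_states).reshape(b, n, self.num_types, 2, self.head_dim)
        q, k = qk[..., 0, :], qk[..., 1, :]           # start and end representations
        scores = torch.einsum("bmth,bnth->btmn", q, k)            # (batch, type, start, end)
        mask = torch.ones(n, n).triu().bool()         # keep only spans with start <= end
        return scores.masked_fill(~mask, float("-inf"))

scores = GlobalPointerHead()(torch.randn(2, 20, 256))   # stand-in for RoBERTa+BiLSTM states
print(scores.shape)   # torch.Size([2, 3, 20, 20]); entry [t, i, j] scores tokens i..j as type t
```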
Generating Factual Text via Entailment Recognition Task
11
Authors: Jinqiao Dai, Pengsen Cheng, Jiayong Liu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 547-565 (19 pages)
Generating diverse and factual text is challenging and is receiving increasing attention. By sampling from the latent space, variational autoencoder-based models have recently enhanced the diversity of generated text. However, existing research predominantly depends on summarization models to offer paragraph-level semantic information for enhancing factual correctness. The challenge lies in effectively generating factual text using sentence-level variational autoencoder-based models. In this paper, a novel model called fact-aware conditional variational autoencoder is proposed to balance the factual correctness and diversity of generated text. Specifically, our model encodes the input sentences and uses them as facts to build a conditional variational autoencoder network. By training the conditional variational autoencoder network, the model is enabled to generate text based on input facts. Building upon this foundation, the input text is passed to the discriminator along with the generated text. By employing adversarial training, the model is encouraged to generate text that is indistinguishable to the discriminator, thereby enhancing the quality of the generated text. To further improve factual correctness, inspired by natural language inference systems, the entailment recognition task is introduced and trained together with the discriminator via multi-task learning. Moreover, based on the entailment recognition results, a penalty term is further proposed to reconstruct the loss of our model, forcing the generator to generate text consistent with the facts. Experimental results demonstrate that, compared with competitive models, our model achieves substantial improvements in both the quality and factual correctness of the text, while sacrificing only a small amount of diversity. Furthermore, when considering a comprehensive evaluation of diversity and quality metrics, our model also demonstrates the best performance.
Keywords: text generation, entailment recognition task, natural language processing, artificial intelligence
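To make the training objective described above more tangible, here is a schematic composition of the loss terms (reconstruction, KL, adversarial, and an entailment-based penalty). The weights, tensor shapes, and the entailment probability are placeholders, not values or code from the paper.

```python
import torch
import torch.nn.functional as F

def total_generator_loss(recon_logits, targets, mu, logvar,
                         disc_on_fake, entail_prob,
                         beta=1.0, gamma=0.5, delta=0.5):
    """Toy composition of the loss terms sketched in the abstract:
    reconstruction + KL (CVAE) + adversarial term + entailment penalty."""
    recon = F.cross_entropy(recon_logits.reshape(-1, recon_logits.size(-1)),
                            targets.reshape(-1))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # standard Gaussian KL
    adv = F.binary_cross_entropy(disc_on_fake, torch.ones_like(disc_on_fake))  # fool discriminator
    entail_penalty = (1.0 - entail_prob).mean()   # low entailment with the input facts costs more
    return recon + beta * kl + gamma * adv + delta * entail_penalty

# toy tensors standing in for model outputs
loss = total_generator_loss(torch.randn(4, 12, 1000), torch.randint(0, 1000, (4, 12)),
                            torch.randn(4, 32), torch.randn(4, 32),
                            torch.rand(4, 1), torch.rand(4))
print(float(loss))
```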
Fireworks Optimization with Deep Learning-Based Arabic Handwritten Characters Recognition Model
12
Authors: Abdelwahed Motwakel, Badriyya B. Al-onazi, Jaber S. Alzahrani, Ayman Yafoz, Mahmoud Othman, Abu Sarwar Zamani, Ishfaq Yaseen, Amgad Atta Abdelmageed. Computer Systems Science & Engineering, 2024, Issue 5, pp. 1387-1403 (17 pages)
Handwritten character recognition has become one of the challenging research matters. Many studies have been presented for recognizing letters of various languages, yet the availability of Arabic handwritten character databases has been confined. Almost a quarter of a billion people worldwide write and speak Arabic, and many historical books and files written in Arabic represent a vital data set for many Arab nations. Recently, Arabic handwritten character recognition (AHCR) has grabbed attention and has become a difficult topic for pattern recognition and computer vision (CV). Therefore, this study develops the fireworks optimization with deep learning-based AHCR (FWODL-AHCR) technique. The major intention of the FWODL-AHCR technique is to recognize the distinct handwritten characters of the Arabic language. It initially pre-processes the handwritten images to improve their quality. Then, a RetinaNet-based deep convolutional neural network is applied as a feature extractor to produce feature vectors. Next, the deep echo state network (DESN) model is utilized to classify handwritten characters. Finally, the FWO algorithm is exploited as a hyperparameter tuning strategy to boost recognition performance. A series of simulations was performed to exhibit the enhanced performance of the FWODL-AHCR technique. The comparison study portrayed the supremacy of the FWODL-AHCR technique over other approaches, with 99.91% and 98.94% accuracy on the Hijja and AHCD datasets, respectively.
Keywords: Arabic language, handwritten character recognition, deep learning, classification, parameter tuning
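The classifier in this pipeline is a deep echo state network. The minimal echo state network below shows only the basic mechanics (a fixed random reservoir plus a ridge-regression readout); the reservoir size, feature dimension, and 28 output classes are assumptions, not the paper's DESN.

```python
import numpy as np

class EchoStateNetwork:
    """Minimal echo state network: a fixed random recurrent reservoir plus a
    trainable linear readout fitted by ridge regression. Sizes are illustrative."""
    def __init__(self, n_in, n_res=300, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.uniform(-0.5, 0.5, (n_res, n_res))
        self.w = w * (spectral_radius / np.max(np.abs(np.linalg.eigvals(w))))
        self.w_out = None

    def _states(self, X):
        h = np.zeros(self.w.shape[0])
        out = []
        for x in X:                                        # simple reservoir update
            h = np.tanh(self.w_in @ x + self.w @ h)
            out.append(h.copy())
        return np.array(out)

    def fit(self, X, Y, ridge=1e-3):                       # readout by ridge regression
        H = self._states(X)
        self.w_out = np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ Y)

    def predict(self, X):
        return self._states(X) @ self.w_out

# toy usage: map 64-dim feature vectors (e.g., pooled CNN features) to 28 class scores
rng = np.random.default_rng(1)
esn = EchoStateNetwork(n_in=64)
esn.fit(rng.normal(size=(200, 64)), rng.normal(size=(200, 28)))
print(esn.predict(rng.normal(size=(5, 64))).shape)          # (5, 28)
```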
Short-Term Memory Capacity across Time and Language Estimated from Ancient and Modern Literary Texts. Study-Case: New Testament Translations
13
Author: Emilio Matricciani. Open Journal of Statistics, 2023, Issue 3, pp. 379-403 (25 pages)
We study the short-term memory capacity of ancient readers of the original New Testament written in Greek, and of its translations into Latin and into modern languages. To model it, we consider the number of words between any two contiguous interpunctions, I_p, because this parameter can model how the human mind memorizes "chunks" of information. Since I_p can be calculated for any alphabetical text, we can perform experiments (otherwise impossible) with ancient readers by studying the literary works they used to read. The "experiments" compare the I_p of texts of one language/translation to those of another language/translation by measuring the minimum average probability of finding joint readers (those who can read both texts because of similar short-term memory capacity) and by defining an "overlap index". We also define the population of universal readers, people who can read any New Testament text in any language. Future work is vast, with many research tracks, because alphabetical literatures are very large and allow many experiments, such as comparing authors, translations, or even texts written by artificial intelligence tools.
Keywords: alphabetical languages, artificial intelligence writing, Greek, Latin, New Testament, readers, overlap probability, short-term memory capacity, texts, translation, words interval
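The central parameter, I_p, the number of words between two contiguous interpunctions, can be computed for any alphabetical text. A minimal illustration on a familiar English rendering of a New Testament verse (the exact interpunction set used for splitting is an assumption):

```python
import re

def word_interval_series(text):
    """Number of words between contiguous interpunctions (the paper's I_p):
    split the text on punctuation and count the words in each chunk."""
    chunks = re.split(r"[.,;:?!]", text)
    return [len(c.split()) for c in chunks if c.strip()]

sample = ("In the beginning was the Word, and the Word was with God, "
          "and the Word was God.")
series = word_interval_series(sample)
print(series, "mean I_p =", sum(series) / len(series))   # [6, 6, 5] -> mean I_p ≈ 5.67
```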
Infant Cry Language Analysis and Recognition: An Experimental Approach (Cited: 2)
14
Authors: Lichuan Liu, Wei Li, Xianwen Wu, Benjamin X. Zhou. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2019, Issue 3, pp. 778-788 (11 pages)
Recently, a lot of research has been directed towards natural language processing. However, the baby's cry, which serves as the primary means of communication for infants, has not yet been extensively explored, because it is not a language that can be easily understood. Since cry signals carry information about a baby's wellbeing and can be understood by experienced parents and experts to an extent, recognition and analysis of an infant's cry is not only possible, but also has profound medical and societal applications. In this paper, we obtain and analyze audio features of infant cry signals in the time and frequency domains. Based on the related features, we can classify given cry signals to specific cry meanings for cry language recognition. Features extracted from the audio feature space include linear predictive coding (LPC), linear predictive cepstral coefficients (LPCC), Bark frequency cepstral coefficients (BFCC), and Mel frequency cepstral coefficients (MFCC). A compressed sensing technique was used for classification, and practical data were used to design and verify the proposed approaches. Experiments show that the proposed infant cry recognition approaches offer accurate and promising results.
Keywords: compressed sensing, feature extraction, infant cry signal, language recognition
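MFCC is one of the feature families listed above. A short sketch of extracting MFCCs from a cry recording with the third-party librosa library and pooling them into a fixed-length vector for a downstream classifier; the file name and pooling choice are assumptions, and the paper's compressed-sensing classifier is not shown.

```python
import numpy as np
import librosa   # third-party audio library, used here purely for illustration

# Load a cry recording (hypothetical file name) and extract MFCC features,
# one of the feature families listed in the abstract (alongside LPC/LPCC/BFCC).
signal, sr = librosa.load("infant_cry.wav", sr=16000)      # placeholder path
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)    # shape: (13, n_frames)

# A simple fixed-length representation per recording: mean and std over frames,
# suitable as input to a downstream classifier.
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(features.shape)   # (26,)
```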
Recognition of Arabic Sign Language (ArSL) Using Recurrent Neural Networks (Cited: 1)
15
Authors: Manar Maraqa, Farid Al-Zboun, Mufleh Dhyabat, Raed Abu Zitar. Journal of Intelligent Learning Systems and Applications, 2012, Issue 1, pp. 41-52 (12 pages)
The objective of this research is to introduce the use of different types of neural networks in human hand gesture recognition for static images as well as for dynamic gestures. This work focuses on the ability of neural networks to assist in Arabic Sign Language (ArSL) hand gesture recognition. We present the use of feedforward neural networks and recurrent neural networks along with their different architectures: partially and fully recurrent networks. We then tested the proposed system; the results of the experiment show that the suggested system with the fully recurrent architecture achieved an accuracy rate of 95% for static gesture recognition.
Keywords: Arabic Sign Language, feedforward neural networks, recurrent neural networks, gesture recognition
Research on the Automatic Pattern Abstraction and Recognition Methodology for Large-scale Database Systems Based on Natural Language Processing (Cited: 1)
16
Authors: Rong Wang, Cuizhen Jiao, Wenhua Dai. International Journal of Technology Management, 2015, Issue 9, pp. 125-127 (3 pages)
In this research paper, we study an automatic pattern abstraction and recognition method for large-scale database systems based on natural language processing. In a distributed database, through the network connections between nodes, data across different nodes and even regional distributions are well recognized. In order to reduce data redundancy, and because the model design of the database will usually contain many forms, we combine NLP theory to optimize the traditional method. The experimental analysis and simulation prove the correctness of our method.
Keywords: pattern abstraction and recognition, database system, natural language processing
Automatic Mexican Sign Language Recognition Using Normalized Moments and Artificial Neural Networks (Cited: 1)
17
Authors: Francisco Solís, David Martínez, Oscar Espinoza. Engineering, 2016, Issue 10, pp. 733-740 (8 pages)
This document presents a computer vision system for the automatic recognition of Mexican Sign Language (MSL), based on normalized moments as invariant (to translation and scale transforms) descriptors, using artificial neural networks as the pattern recognition model. An experimental feature selection was performed to reduce computational costs, since this work focuses on automatic recognition. The computer vision system includes four LED reflectors of 700 lumens each in order to improve image acquisition quality; this illumination system reduces shadows in each sign of the MSL. MSL contains 27 signs in total, but 6 of them are expressed with movement; this paper presents a framework for the automatic recognition of the 21 static signs of MSL. The proposed system achieved a 93% recognition rate.
Keywords: Mexican Sign Language, automatic sign language recognition, normalized moments, computer vision system
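The descriptors named above are normalized moments, invariant to translation and scale. A generic NumPy computation of such descriptors for a binary silhouette (the chosen moment orders and the toy image are assumptions, not the authors' exact feature set):

```python
import numpy as np

def normalized_central_moments(img, orders=((2, 0), (0, 2), (1, 1), (3, 0), (0, 3))):
    """Translation- and scale-invariant (normalized central) moments of a
    grayscale/binary image: eta_pq = mu_pq / mu_00^(1 + (p + q) / 2)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xc, yc = (xs * img).sum() / m00, (ys * img).sum() / m00   # centroid
    etas = []
    for p, q in orders:
        mu_pq = ((xs - xc) ** p * (ys - yc) ** q * img).sum()
        etas.append(mu_pq / m00 ** (1 + (p + q) / 2))
    return np.array(etas)

# toy binary "hand" silhouette: a filled rectangle
img = np.zeros((64, 64))
img[20:50, 15:45] = 1.0
print(normalized_central_moments(img))   # descriptor vector fed to the ANN
```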
Automated Handwriting Recognition and Speech Synthesizer for Indigenous Language Processing
18
Authors: Bassam A. Y. Alqaralleh, Fahad Aldhaban, Feras Mohammed A-Matarneh, Esam A. AlQaralleh. Computers, Materials & Continua (SCIE, EI), 2022, Issue 8, pp. 3913-3927 (15 pages)
In recent years, research in handwriting recognition analysis relating to indigenous languages has gained significant interest among research communities. The recent developments of artificial intelligence (AI), natural language processing (NLP), and computational linguistics (CL) are useful in the analysis of regional low-resource languages. Automatic lexical task participation might be extended to various applications in NLP, as is apparent from the availability of effective machine recognition models and open-access handwritten databases. Arabic is a commonly spoken Semitic language, written with the cursive Arabic alphabet from right to left. Arabic handwritten character recognition (HCR) is a crucial process in optical character recognition. In this view, this paper presents an effective computational linguistics with deep learning based handwriting recognition and speech synthesizer (CLDL-THRSS) for indigenous language. The presented CLDL-THRSS model involves two stages of operation, namely automated handwriting recognition and speech recognition. Firstly, the automated handwriting recognition procedure involves preprocessing, segmentation, feature extraction, and classification, and a Capsule Network (CapsNet)-based feature extractor is employed for the recognition of handwritten Arabic characters. For optimal hyperparameter tuning, the cuckoo search (CS) optimization technique is included to tune the parameters of the CapsNet method. Besides, a deep neural network with hidden Markov model (DNN-HMM) is employed for the automatic speech synthesizer. To validate the performance of the proposed CLDL-THRSS model, a detailed experimental validation was carried out and the outcomes were investigated in terms of different measures. The experimental outcomes denoted that the CLDL-THRSS technique outperformed the compared methods.
Keywords: computational linguistics, handwriting character recognition, natural language processing, indigenous language
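The handwriting stage uses a Capsule Network (CapsNet) feature extractor. One defining ingredient of CapsNets is the squashing non-linearity, shown here in a generic form (not specific to this paper):

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    """CapsNet squashing non-linearity: shrinks short capsule vectors toward zero
    and long vectors toward unit length, preserving their direction."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

caps = torch.randn(4, 32, 8)                 # 4 samples, 32 capsules of dimension 8
print(squash(caps).norm(dim=-1).max())       # all output lengths stay below 1
```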
Cross-Language Transfer Learning-based Lhasa-Tibetan Speech Recognition
19
Authors: Zhijie Wang, Yue Zhao, Licheng Wu, Xiaojun Bi, Zhuoma Dawa, Qiang Ji. Computers, Materials & Continua (SCIE, EI), 2022, Issue 10, pp. 629-639 (11 pages)
As one of the Chinese minority languages, Tibetan speech recognition technology was not researched as extensively as Chinese and English until recently. This, along with the relatively small Tibetan corpus, has resulted in unsatisfying performance of Tibetan speech recognition based on an end-to-end model. This paper aims to achieve accurate Tibetan speech recognition using a small amount of Tibetan training data. We demonstrate effective methods of Tibetan end-to-end speech recognition via cross-language transfer learning from three aspects: modeling unit selection, transfer learning method, and source language selection. Experimental results show that the Chinese-Tibetan multi-language learning method using a multi-language character set as the modeling unit yields the best performance, with a Tibetan Character Error Rate (CER) of 27.3%, which is reduced by 26.1% compared to the language-specific model. Our method also achieves 2.2% higher accuracy using a smaller amount of data compared with the method using Tibetan multi-dialect transfer learning under the same model structure and data set.
Keywords: cross-language transfer learning, low-resource language, modeling unit, Tibetan speech recognition
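A schematic view of the cross-language transfer recipe described above (my own simplification, not the paper's model): an encoder assumed to be pretrained on a high-resource source language is fine-tuned on a small Tibetan set, with a new output layer over a shared multi-language character set; all sizes, learning rates, and the checkpoint path are assumptions.

```python
import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """Toy acoustic encoder standing in for a source-language-pretrained network."""
    def __init__(self, feat_dim=80, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, num_layers=2, batch_first=True)
    def forward(self, x):
        out, _ = self.rnn(x)
        return out

multi_charset_size = 6000                      # joint Chinese + Tibetan character set (assumed)
encoder = SpeechEncoder()
# encoder.load_state_dict(torch.load("source_language_encoder.pt"))  # hypothetical checkpoint
head = nn.Linear(256, multi_charset_size)      # new output layer for the shared character set

optimizer = torch.optim.Adam([
    {"params": encoder.parameters(), "lr": 1e-4},   # small LR: gently adapt transferred weights
    {"params": head.parameters(), "lr": 1e-3},      # larger LR: train the new layer from scratch
])
feats = torch.randn(4, 200, 80)                # 4 Tibetan utterances of 80-dim filterbank frames
logits = head(encoder(feats))                  # per-frame character scores (e.g., for CTC)
print(logits.shape)                            # torch.Size([4, 200, 6000])
```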
Continuous Arabic Sign Language Recognition in User Dependent Mode
20
Authors: K. Assaleh, T. Shanableh, M. Fanaswala, F. Amin, H. Bajaj. Journal of Intelligent Learning Systems and Applications, 2010, Issue 1, pp. 19-27 (9 pages)
Arabic Sign Language recognition is an emerging field of research. Previous attempts at automatic vision-based recognition of Arabic Sign Language mainly focused on finger spelling and recognizing isolated gestures. In this paper we report the first continuous Arabic Sign Language recognition system, building on existing research in feature extraction and pattern recognition. The development of the presented work required collecting a continuous Arabic Sign Language database, which we designed and recorded in cooperation with a sign language expert. We intend to make the collected database available to the research community. Our system, based on spatio-temporal feature extraction and hidden Markov models, has resulted in an average word recognition rate of 94%, keeping in mind the use of a high-perplexity vocabulary and unrestrictive grammar. We compare our proposed work against existing sign language techniques based on accumulated image difference and motion estimation. The experimental results section shows that the proposed work outperforms existing solutions in terms of recognition accuracy.
Keywords: pattern recognition, motion analysis, image/video processing, sign language
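As a generic illustration of the HMM-based recognition framework discussed above (and of the accumulated-image-difference features used by the baselines), the sketch below trains one Gaussian HMM per vocabulary word on simple accumulated-difference features; the grid size, state count, random data, and use of the third-party hmmlearn package are all assumptions, not the authors' implementation.

```python
import numpy as np
from hmmlearn import hmm   # third-party package; one HMM is trained per sign/word

def accumulated_difference_features(frames, grid=(4, 4)):
    """Very simple spatio-temporal features: per-frame absolute differences,
    accumulated over coarse spatial cells (a generic baseline, not the paper's code)."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))        # (T-1, H, W)
    t, h, w = diffs.shape
    gh, gw = h // grid[0], w // grid[1]
    cells = diffs.reshape(t, grid[0], gh, grid[1], gw).sum(axis=(2, 4))
    return cells.reshape(t, -1)                                  # (T-1, 16) observation sequence

# toy training: one HMM per vocabulary word, fitted on random stand-in videos
rng = np.random.default_rng(0)
word_models = {}
for word in ["hello", "thanks"]:
    seqs = [accumulated_difference_features(rng.random((30, 64, 64))) for _ in range(5)]
    X, lengths = np.vstack(seqs), [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=10)
    word_models[word] = m.fit(X, lengths)

test = accumulated_difference_features(rng.random((30, 64, 64)))
print(max(word_models, key=lambda w: word_models[w].score(test)))  # most likely word
```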