Journal Articles
6,026 articles found
Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
1
Authors: R. Sujatha, K. Nimala. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 1669-1686 (18 pages)
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than tasks such as dependency parsing, which require more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress, and comparing impacts. An ensemble pre-trained language model is taken up here to classify sentences from a conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used to analyze conversation progress and predict the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, and hyperparameter tuning is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models, and the ensemble model with fine-tuned parameters achieved an F1 score of 0.88.
Keywords: Bidirectional Encoder Representations from Transformers; conversation; ensemble model; fine-tuning; generalized autoregressive pretraining for language understanding; generative pre-trained transformer; hyperparameter tuning; natural language processing; robustly optimized BERT pretraining approach; sentence classification; transformer models
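The listing gives no code for EPLM-HT; below is a minimal sketch of the soft-voting idea the abstract describes, assuming Hugging Face checkpoints as stand-ins for the fine-tuned ensemble members. The checkpoint names, label set, and averaging scheme are illustrative assumptions, and in practice each member would first be fine-tuned on the annotated conversation corpus.

```python
# Minimal soft-voting ensemble sketch (not the paper's EPLM-HT code).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["information", "question", "directive", "commission"]  # from the abstract
CHECKPOINTS = ["bert-base-uncased", "roberta-base", "distilbert-base-uncased"]  # placeholders

def ensemble_predict(sentence: str) -> str:
    probs = torch.zeros(len(LABELS))
    for name in CHECKPOINTS:  # in practice, load fine-tuned models once, outside the loop
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=len(LABELS))
        inputs = tok(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits.squeeze(0)
        probs += torch.softmax(logits, dim=-1)  # soft voting: sum class probabilities
    return LABELS[int(probs.argmax())]

print(ensemble_predict("Could you send me the report?"))
```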
Enhancing Relational Triple Extraction in Specific Domains: Semantic Enhancement and Synergy of Large Language Models and Small Pre-Trained Language Models
2
Authors: Jiakai Li, Jianpeng Hu, Geng Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2481-2503 (23 pages)
In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model's performance on domain-specific relational triple extraction tasks. We first propose an innovative attention interaction module, which significantly enhances the semantic interaction capabilities between entities and relations by integrating semantic information from relation labels. Second, we propose a voting strategy that effectively combines the strengths of large language models (LLMs) and fine-tuned small pre-trained language models (SLMs) to reevaluate challenging samples, thereby improving the model's adaptability in specific domains. Additionally, we explore the use of LLMs for data augmentation, aiming to generate domain-specific datasets that alleviate the scarcity of domain data. Experiments conducted on three domain-specific datasets demonstrate that our model outperforms existing comparative models in several aspects, with F1 scores exceeding the state-of-the-art models by 2%, 1.6%, and 0.6%, respectively, validating the effectiveness and generalizability of our approach.
Keywords: relational triple extraction; semantic interaction; large language models; data augmentation; specific domains
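The paper's voting strategy is not spelled out in this listing; below is a minimal sketch of the routing idea, assuming a confidence-scored SLM extractor and an LLM re-evaluator as stand-in callables. The threshold and the agreement vote are illustrative assumptions, not the paper's implementation.

```python
# Sketch: route low-confidence ("challenging") samples from the SLM to the LLM.
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

def extract_with_vote(
    sentence: str,
    slm_extract: Callable[[str], Tuple[List[Triple], float]],  # returns (triples, confidence)
    llm_extract: Callable[[str], List[Triple]],
    threshold: float = 0.8,
) -> List[Triple]:
    triples, confidence = slm_extract(sentence)
    if confidence >= threshold:
        return triples                      # trust the fine-tuned SLM on easy samples
    llm_triples = llm_extract(sentence)     # re-evaluate challenging samples with the LLM
    # Simple vote: keep triples both models agree on, else fall back to the LLM's view
    agreed = [t for t in triples if t in llm_triples]
    return agreed if agreed else llm_triples

# Toy stand-ins to make the sketch runnable
slm = lambda s: ([("aspirin", "treats", "headache")], 0.55)
llm = lambda s: [("aspirin", "treats", "headache"), ("aspirin", "is_a", "drug")]
print(extract_with_vote("Aspirin is a drug that treats headaches.", slm, llm))
```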
Potential use of large language models for mitigating students' problematic social media use: ChatGPT as an example
3
Authors: Xin-Qiao Liu, Zi-Ru Zhang. World Journal of Psychiatry (SCIE), 2024, Issue 3, pp. 334-341 (8 pages)
The problematic use of social media has numerous negative impacts on individuals' daily lives, interpersonal relationships, physical and mental health, and more. Currently, there are few methods and tools to alleviate problematic social media use, and their potential is yet to be fully realized. Emerging large language models (LLMs) are becoming increasingly popular for providing information and assistance to people and are being applied in many aspects of life. In mitigating problematic social media use, LLMs such as ChatGPT can play a positive role by serving as conversational partners and outlets for users, providing personalized information and resources, monitoring and intervening in problematic social media use, and more. In this process, we should recognize both the enormous potential and endless possibilities of LLMs such as ChatGPT, leveraging their advantages to better address problematic social media use, while also acknowledging the limitations and potential pitfalls of ChatGPT technology, such as errors, limitations in issue resolution, privacy and security concerns, and potential overreliance. When we leverage the advantages of LLMs to address issues in social media usage, we must adopt a cautious and ethical approach, remaining vigilant of the potential adverse effects LLMs may have, so that technology better serves individuals and society.
Keywords: problematic use of social media; social media; large language models; ChatGPT; chatbots
Evaluating Privacy Leakage and Memorization Attacks on Large Language Models (LLMs) in Generative AI Applications
4
Authors: Harshvardhan Aditya, Siddansh Chawla, Gunika Dhingra, Parijat Rai, Saumil Sood, Tanmay Singh, Zeba Mohsin Wase, Arshdeep Bahga, Vijay K. Madisetti. Journal of Software Engineering and Applications, 2024, Issue 5, pp. 421-447 (27 pages)
The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during fine-tuning or customization. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of the attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks.
Keywords: large language models; PII leakage; privacy; memorization; overfitting; membership inference attack (MIA)
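As background for the memorization-focused attacks mentioned above, here is a minimal sketch of a loss-based membership inference test (not the paper's exact attack): samples on which the model's loss is unusually low are flagged as likely training members. The model choice, threshold, and probe text are assumptions.

```python
# Sketch: loss-based membership inference against a causal language model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def sample_loss(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # average next-token cross-entropy
    return out.loss.item()

THRESHOLD = 3.0  # would be calibrated on known member/non-member samples
probe = "The quick brown fox jumps over the lazy dog."
print("likely memorized" if sample_loss(probe) < THRESHOLD else "likely unseen")
```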
Security Vulnerability Analyses of Large Language Models (LLMs) through Extension of the Common Vulnerability Scoring System (CVSS) Framework
5
Authors: Alicia Biju, Vishnupriya Ramesh, Vijay K. Madisetti. Journal of Software Engineering and Applications, 2024, Issue 5, pp. 340-358 (19 pages)
Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming an integral part of various applications in society, including text generation, translation, summarization, and more. However, their widespread usage emphasizes the critical need to enhance their security posture to ensure the integrity and reliability of their outputs and to minimize harmful effects. Prompt injections and training data poisoning are two of the most prominent vulnerabilities in LLMs; both could lead to unpredictable and undesirable behaviors, such as biased outputs, misinformation propagation, and even malicious content generation. The Common Vulnerability Scoring System (CVSS) framework provides a standardized approach to capturing the principal characteristics of vulnerabilities, facilitating a deeper understanding of their severity within the security and AI communities. By extending the current CVSS framework, we generate scores for these vulnerabilities so that organizations can prioritize mitigation efforts, allocate resources effectively, and implement targeted security measures to defend against potential risks.
Keywords: Common Vulnerability Scoring System (CVSS); large language models (LLMs); DALL-E; prompt injections; training data poisoning; CVSS metrics
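The paper's LLM-specific metric extensions are not reproduced in this listing; for context, here is a sketch of the standard CVSS v3.1 base-score arithmetic (scope unchanged) that such an extension builds on. The example metric values for a hypothetical prompt-injection vulnerability are assumptions.

```python
# Sketch: CVSS v3.1 base score, scope unchanged.
import math

AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # attack vector
AC = {"L": 0.77, "H": 0.44}                          # attack complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # privileges required (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                          # user interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # confidentiality/integrity/availability

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return math.ceil(min(impact + exploitability, 10) * 10) / 10  # round up to 1 decimal

# Hypothetical prompt-injection rating: network vector, low complexity, no privileges,
# no user interaction, low confidentiality and high integrity impact
print(base_score("N", "L", "N", "N", "L", "H", "N"))  # -> 8.2
```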
Smaller & Smarter: Score-Driven Network Chaining of Smaller Language Models
6
Authors: Gunika Dhingra, Siddansh Chawla, Vijay K. Madisetti, Arshdeep Bahga. Journal of Software Engineering and Applications, 2024, Issue 1, pp. 23-42 (20 pages)
With the continuous evolution and expanding applications of Large Language Models (LLMs), there has been a noticeable surge in the size of emerging models. The growth is not solely in model size, primarily measured by the number of parameters, but also in the subsequent escalation of computational demands and the hardware and software prerequisites for training, all culminating in a substantial financial investment. In this paper, we present techniques such as supervision, parallelization, and scoring functions to get better results out of chains of smaller language models, rather than relying solely on scaling up model size. First, we propose an approach to quantify the performance of a smaller language model (SLM) by introducing a corresponding supervisor model that incrementally corrects the encountered errors. Second, we propose an approach that runs two smaller language models in a network on the same task and retrieves the better of the two outputs, ensuring peak performance on a specific task. Experimental evaluations establish quantitative accuracy improvements on financial reasoning and arithmetic calculation tasks from utilizing supervisor models (in a network-of-models scenario), threshold scoring, and parallel processing over a baseline study.
Keywords: large language models (LLMs); smaller language models (SLMs); finance; networking; supervisor model; scoring function
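Below is a minimal sketch of the score-driven chaining described above, assuming stand-in callables for the two SLMs, the scoring function, and the supervisor model; the threshold and escalation logic are illustrative, not the paper's implementation.

```python
# Sketch: run two SLMs in parallel, keep the best-scored answer,
# escalate to a supervisor model when the score falls below a threshold.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

def best_of_network(
    prompt: str,
    slms: List[Callable[[str], str]],
    score: Callable[[str, str], float],     # rates an answer for a prompt, higher is better
    supervisor: Callable[[str, str], str],  # corrects a weak answer
    threshold: float = 0.7,
) -> str:
    with ThreadPoolExecutor() as pool:      # parallel execution, as in the abstract
        answers = list(pool.map(lambda m: m(prompt), slms))
    best = max(answers, key=lambda a: score(prompt, a))
    if score(prompt, best) >= threshold:
        return best
    return supervisor(prompt, best)         # supervisor incrementally corrects errors

# Toy stand-ins to make the sketch runnable
slm_a = lambda p: "42"
slm_b = lambda p: "41"
scorer = lambda p, a: 1.0 if a == "42" else 0.2
sup = lambda p, a: "42 (corrected)"
print(best_of_network("What is 6 * 7?", [slm_a, slm_b], scorer, sup))
```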
Adapter Based on Pre-Trained Language Models for Classification of Medical Text
7
Author: Quan Li. Journal of Electronic Research and Application, 2024, Issue 3, pp. 129-134 (6 pages)
We present an approach for automatically classifying medical text at the sentence level. Given the inherent complexity of medical text classification, we employ adapters based on pre-trained language models to extract information from medical text, facilitating more accurate classification while minimizing the number of trainable parameters. Extensive experiments conducted on various datasets demonstrate the effectiveness of our approach.
Keywords: classification of medical text; adapter; pre-trained language model
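The paper's exact architecture is not given in this listing; below is a minimal sketch of a bottleneck adapter of the kind the abstract refers to, a small residual module whose parameters are the only ones trained while the pre-trained encoder stays frozen. The hidden and bottleneck sizes are assumptions.

```python
# Sketch: a residual bottleneck adapter (down-project, nonlinearity, up-project).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)  # project down
        self.up = nn.Linear(bottleneck, hidden)    # project back up
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual connection

adapter = Adapter()
trainable = sum(p.numel() for p in adapter.parameters())
print(f"trainable adapter parameters: {trainable}")  # ~99K vs ~110M for BERT-base
```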
A Survey on Chinese Sign Language Recognition: From Traditional Methods to Artificial Intelligence
8
Authors: Xianwei Jiang, Yanqiong Zhang, Juan Lei, Yudong Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 7, pp. 1-40 (40 pages)
Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the literature on Chinese Sign Language Recognition (CSLR) over the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet), and various other deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have been integrated and applied in specific hybrid models and customized identification methods. Sign language data collection includes acquiring data from data gloves, sensors (such as Kinect and Leap Motion), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest among scholars. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Suitable datasets and evaluation criteria are also worth pursuing.
Keywords: Chinese Sign Language Recognition; deep neural networks; artificial intelligence; transfer learning; hybrid network models
On the Teaching Design of Graduates' EAP Course: Enhancing Language Proficiency and Critical Thinking Skills
9
Author: LIU Yuan. Sino-US English Teaching, 2024, Issue 5, pp. 238-241 (4 pages)
This paper explores the integration of the bridge-in, objectives, pre-assessment, participatory activities, post-assessment, and summary (BOPPPS) teaching model within the context of a postgraduate Academic English course. It discusses how this structured approach can effectively enhance students' language proficiency, foster critical thinking skills, and align with the multifaceted objectives of advanced English language education. The study provides a detailed examination of each BOPPPS component as applied to the postgraduate Academic English curriculum, supported by theoretical underpinnings and practical implications.
Keywords: Academic English course; BOPPPS teaching model; language proficiency; critical thinking; active learning
Analysis of an event study using the Fama–French five-factor model: teaching approaches including spreadsheets and the R programming language
10
Authors: Monica Martinez-Blasco, Vanessa Serrano, Francesc Prior, Jordi Cuadros. Financial Innovation, 2023, Issue 1, pp. 2042-2075 (34 pages)
The current financial education framework has an increasing need for tools that facilitate the application of theoretical models to real-world data and contexts. However, only a limited number of free tools are available for this purpose. Given this lack of tools, the present study provides two approaches to facilitate the implementation of an event study. The first approach consists of a set of MS Excel files based on the Fama–French five-factor model, which allows the event study methodology to be applied semi-automatically. The second approach is an open-source R-programmed tool through which event-study results can be obtained without programming knowledge. This tool widens the calculation possibilities of the first approach and offers the option to apply not only the Fama–French five-factor model but also other models common in the financial literature. It is a user-friendly tool that enables reproducibility of the analysis and ensures that the calculations are free of manipulation errors. Both approaches are freely available and ready to use.
Keywords: event study; Fama–French five-factor model; financial education; teaching innovation; spreadsheet; R programming language
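For reference, the Fama–French five-factor regression and the event-study quantities it feeds take their standard textbook form (the paper's estimation windows and test statistics are not reproduced in this listing):

```latex
% Fama-French five-factor model and event-study abnormal returns (standard form)
\begin{align}
R_{it} - R_{ft} &= \alpha_i + \beta_i (R_{Mt} - R_{ft}) + s_i\,\mathrm{SMB}_t
  + h_i\,\mathrm{HML}_t + r_i\,\mathrm{RMW}_t + c_i\,\mathrm{CMA}_t + \varepsilon_{it} \\
\mathrm{AR}_{it} &= R_{it} - \widehat{E}[R_{it}], \qquad
\mathrm{CAR}_i(t_1, t_2) = \sum_{t=t_1}^{t_2} \mathrm{AR}_{it}
\end{align}
```

The coefficients are estimated over a pre-event window; abnormal returns over the event window are the realized returns minus the model's predictions, cumulated into CAR.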
Joint On-Demand Pruning and Online Distillation in Automatic Speech Recognition Language Model Optimization
11
Authors: Soonshin Seo, Ji-Hwan Kim. Computers, Materials & Continua (SCIE, EI), 2023, Issue 12, pp. 2833-2856 (24 pages)
Automatic speech recognition (ASR) systems have emerged as indispensable tools across a wide spectrum of applications, ranging from transcription services to voice-activated assistants. To enhance the performance of these systems, it is important to deploy efficient models capable of adapting to diverse deployment conditions. In recent years, on-demand pruning methods have attracted significant attention within the ASR domain due to their adaptability to various deployment scenarios. However, these methods often confront substantial trade-offs, particularly unstable accuracy when reducing model size. To address these challenges, this study introduces two crucial empirical findings. First, it proposes incorporating an online distillation mechanism during on-demand pruning training, which holds the promise of maintaining more consistent accuracy levels. Second, it proposes the Mogrifier long short-term memory (LSTM) language model (LM), an advanced iteration of the conventional LSTM LM, as an effective pruning target within the ASR framework. Through rigorous experimentation on an ASR system employing the Mogrifier LSTM LM, trained with the suggested joint on-demand pruning and online distillation method, this study provides compelling evidence: the proposed methods significantly outperform a benchmark model trained solely with on-demand pruning, and the proposed configuration reduces the parameter count by approximately 39% while minimizing trade-offs.
Keywords: automatic speech recognition; neural language model; Mogrifier long short-term memory; pruning; distillation; efficient deployment; optimization; joint training
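Below is a minimal sketch of the joint objective such a method implies, assuming a pruned "student" view of the network is distilled online from the full "teacher" view in the same training step. The loss weighting and temperature are assumptions, not the paper's settings.

```python
# Sketch: task loss on the pruned subnetwork plus online distillation
# from the full network's soft targets.
import torch
import torch.nn.functional as F

def joint_loss(student_logits: torch.Tensor,
               teacher_logits: torch.Tensor,
               targets: torch.Tensor,
               alpha: float = 0.5,
               temperature: float = 2.0) -> torch.Tensor:
    ce = F.cross_entropy(student_logits, targets)   # task loss on the pruned subnetwork
    kd = F.kl_div(                                  # online distillation from the full network
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return (1 - alpha) * ce + alpha * kd

# Toy check with random logits over a 10-word vocabulary
s, t = torch.randn(4, 10), torch.randn(4, 10)
y = torch.randint(0, 10, (4,))
print(joint_loss(s, t, y).item())
```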
Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis
12
Authors: Jieyu An, Wan Mohd Nazmee Wan Zainon, Binfen Ding. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 8, pp. 1673-1689 (17 pages)
Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modalities, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation is attributed to their training on unimodal data and necessitates complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning by utilizing a vision-language pre-trained model to extract both visual and textual representations in a unified framework. We employ a Transformer architecture to integrate these representations, thereby enabling the capture of rich semantic information in image-text pairs. To further enhance the representation learning of these pairs, we introduce our proposed multimodal contrastive learning method, which leads to improved performance on sentiment analysis tasks. Our approach is evaluated through extensive experiments on two publicly accessible datasets, where we demonstrate its effectiveness and achieve a significant improvement in sentiment analysis accuracy, indicating the superiority of our approach over existing techniques. These results highlight the potential of multimodal sentiment analysis and underscore the importance of considering the intrinsic semantic connections between modalities for accurate sentiment assessment.
Keywords: multimodal sentiment analysis; vision-language pre-trained model; contrastive learning; sentiment classification
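Below is a minimal sketch of a symmetric image-text contrastive (InfoNCE) objective of the kind the abstract invokes, where matched pairs sit on the diagonal of the similarity matrix. The paper's exact loss may differ; the embedding size and temperature here are assumptions.

```python
# Sketch: symmetric InfoNCE loss over a batch of image-text pairs.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.T / temperature   # pairwise cosine similarities
    labels = torch.arange(len(img))      # i-th image matches i-th text
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2

print(contrastive_loss(torch.randn(8, 256), torch.randn(8, 256)).item())
```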
The impact of ChatGPT on foreign language teaching and learning: Opportunities in education and research (Cited 5 times)
13
Author: Wilson Cheong Hin Hong. 教育技术与创新 (Education Technology and Innovation), 2023, Issue 1, pp. 37-45 (9 pages)
The revolutionary online application ChatGPT has brought immense concern to the education field. Foreign language teachers, being among those most reliant on writing assessments, were among the most anxious, a worry exacerbated by extensive media coverage of the much-fantasized functionality of the chatbot. Hence, the article starts by elucidating the mechanisms, functions, and common misconceptions about ChatGPT. Issues and risks associated with its usage are discussed, followed by an in-depth discussion of how the chatbot can be harnessed by learners and teachers. It is argued that ChatGPT offers major opportunities for teachers and education institutes to improve second/foreign language teaching and assessment, and it similarly provides researchers with an array of research opportunities, especially toward a more personalized learning experience.
Keywords: large language model; second language education; flipped classroom; personalized learning; formative assessment
A Language-Model-Based Geographic Information Retrieval Model (in English) (Cited 3 times)
14
Authors: 黎志升, 王煦法. 中国科学技术大学学报 (Journal of University of Science and Technology of China; CAS, CSCD, PKU Core), 2010, Issue 2, pp. 203-209 (7 pages)
Unlike traditional information retrieval, geographic information retrieval uses a query-scope term to restrict the user's region of interest. Current techniques generally treat this scope term as a filter, excluding documents outside the region from the results. However, the frequency distribution of terms over geographic space is not uniform, so a term's importance in the ranking should vary with the query scope. To address this, a new language-model-based geographic retrieval model is proposed that incorporates the query scope into the traditional language model. The model introduces a local model to describe the geographic distribution of query terms. Experimental results show that the new retrieval model outperforms both TF-IDF and the traditional language model.
Keywords: language model; geographic awareness; geography; information retrieval
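The abstract does not give the model's equations; one minimal way to read "introducing a local model into the traditional language model" is an interpolation of a location-conditioned term distribution into query-likelihood scoring, sketched here as an assumption rather than the paper's actual formula:

```latex
% Sketch: query-likelihood scoring with an interpolated local (geographic) model
\begin{equation}
P(q \mid d, r) = \prod_{w \in q} \Big[ (1-\lambda)\, P(w \mid \theta_d)
  + \lambda\, P_{\mathrm{local}}(w \mid r) \Big]
\end{equation}
```

Under such a formulation, a term's contribution to the ranking varies with the query scope $r$, as the abstract argues it should.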
An Empirical Study of Good-Turing Smoothing for Language Models on Different Size Corpora of Chinese (Cited 5 times)
15
Authors: Feng-Long Huang, Ming-Shing Yu, Chien-Yo Hwang. Journal of Computer and Communications, 2013, Issue 5, pp. 14-19 (6 pages)
Data sparseness has been an inherent issue of statistical language models, and smoothing methods are usually used to resolve zero-count problems. In this paper, we empirically studied and analyzed the well-known Good-Turing and advanced Good-Turing smoothing methods for language models on large Chinese corpora. Ten models were generated sequentially on corpora of various sizes, from 30 M to 300 M Chinese words of the CGW corpus. In our experiments, the Good-Turing and advanced Good-Turing smoothing methods were evaluated on inside testing and outside testing. Based on the experimental results, we further analyzed the perplexity trends of the smoothing methods, which are useful for choosing effective smoothing methods to alleviate data sparseness in language models of various sizes. Finally, some helpful observations are described in detail.
Keywords: Good-Turing methods; smoothing; language models; perplexity
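For reference, the Good-Turing re-estimate at the heart of both evaluated methods takes the standard form, where $N_r$ is the number of n-gram types occurring exactly $r$ times and $N$ is the total number of observed tokens:

```latex
% Good-Turing re-estimation: a count of r is discounted via the count-of-counts N_r
\begin{equation}
r^{*} = (r+1)\,\frac{N_{r+1}}{N_r},
\qquad
P_{\mathrm{GT}}(\text{unseen}) = \frac{N_1}{N}
\end{equation}
```

Advanced variants smooth the $N_r$ sequence before applying the formula, which matters at large $r$, where $N_r$ is sparse or zero.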
Incorporating Linguistic Rules in Statistical Chinese Language Model for Pinyin-to-character Conversion (Cited 2 times)
16
Authors: 刘秉权, Wang Xiaolong, Wang Yuying. High Technology Letters (EI, CAS), 2001, Issue 2, pp. 8-13 (6 pages)
An N-gram Chinese language model incorporating linguistic rules is presented. By constructing an element lattice, rule information is incorporated into a statistical framework. To facilitate the hybrid modeling, novel methods such as MI-based rule evaluation, weighted rule quantification, and element-based n-gram probability approximation are presented. A dynamic Viterbi algorithm is adopted to search for the best path in the lattice. To strengthen the model, transformation-based error-driven rule learning is adopted. Applying the proposed model to Chinese Pinyin-to-character conversion achieves high accuracy, flexibility, and robustness simultaneously: tests show the correct rate reaches 94.81%, compared with 90.53% using a bi-gram Markov model alone, and many long-distance dependencies and recursions in language can be processed effectively.
Keywords: Chinese Pinyin-to-character conversion; rule-based language model; N-gram language model; hybrid language model; element lattice; transformation-based error-driven learning
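Below is a minimal sketch of dynamic Viterbi decoding over a candidate lattice with bigram scores, using a made-up three-syllable lattice and toy probabilities; the paper's element lattice, rule weights, and probability approximation are not reproduced here.

```python
# Sketch: Viterbi search over a pinyin-to-character candidate lattice.
import math

lattice = [["他", "她"], ["是", "市"], ["谁", "水"]]   # candidate characters per pinyin syllable
bigram = {("<s>", "他"): 0.6, ("<s>", "她"): 0.4,
          ("他", "是"): 0.7, ("她", "是"): 0.7,
          ("是", "谁"): 0.8}                           # unseen pairs get a small floor

def viterbi(lattice, bigram, floor=1e-4):
    best = {"<s>": (0.0, ["<s>"])}                     # per state: (log-prob, best path)
    for candidates in lattice:
        nxt = {}
        for c in candidates:
            score, prev = max(
                (s + math.log(bigram.get((p, c), floor)), path)
                for p, (s, path) in best.items()
            )
            nxt[c] = (score, prev + [c])
        best = nxt
    return max(best.values())[1][1:]                   # drop the <s> marker

print("".join(viterbi(lattice, bigram)))               # -> 他是谁 ("who is he")
```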
FIRST LANGUAGE ACQUISITION AS A MODEL FOR SECOND LANGUAGE ACQUISITION: THE TOTAL PHYSICAL RESPONSE APPROACH IN SECOND LANGUAGE LEARNING AND TEACHING (Cited 11 times)
17
Author: 井卫华. 外语与外语教学 (Foreign Languages and Their Teaching), 1988, Issue 2, pp. 12-22 (11 pages)
L1 and L2 acquisition are similar in some respects. Language development in children goes hand in hand with physical and cognitive development. Children learn their first language by imitation, but not always and not only by imitation: there seem to be some "innate capacities" that make children start to speak when they do and in the way they do. Adults learning a second language are usually controlled more by their motivation, but language input is important for both L1 and L2 acquisition. Though there are differences between child L1, child L2, and adult L2 acquisition, the way in which these learners acquire some grammatical morphemes is similar. This, together with other evidence, shows that it is not only children who can acquire language; adults can also acquire a language, though when adults acquire a language they should also learn it. Some of the ways in which children acquire their language can serve as a model for L2 acquisition, even for Chinese students whose language is unrelated to English and whose culture is different. Learning the culture of English-speaking countries benefits learning the language. As with children, listening should be well in advance of speaking in L2 acquisition. To train listening comprehension skills, Asher's Total Physical Response (TPR) approach proves more effective, though it is at the moment limited to the beginning stage. For students to gain all five skills in second language learning, namely listening, speaking, reading, writing, and interpreting/translating, other methods should be used at the same time or at later stages.
Keywords: Total Physical Response approach; second language learning and teaching; first language acquisition as a model for second language acquisition
Deep Learning-Based Sign Language Recognition for Hearing and Speaking Impaired People
18
Author: Mrim M. Alnfiai. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 5, pp. 1653-1669 (17 pages)
Sign language is mainly utilized for communication with people who have hearing disabilities, and it is also used to communicate with people having developmental impairments who have some or no interaction skills. Interaction via sign language becomes a fruitful means of communication for hearing- and speech-impaired persons. A hand gesture recognition system is helpful for deaf and mute people, making use of a human computer interface (HCI) and convolutional neural networks (CNN) to identify the static signs of Indian Sign Language (ISL). This study introduces a shark smell optimization with deep learning based automated sign language recognition (SSODL-ASLR) model for hearing- and speech-impaired people. The presented SSODL-ASLR technique concentrates on the recognition and classification of sign language in a two-stage process, namely sign language detection and sign language classification. In the first stage, the Mask Region-based Convolutional Neural Network (Mask RCNN) model is exploited for sign language recognition. In the second stage, the SSO algorithm with a soft-margin support vector machine (SM-SVM) model is utilized for sign language classification. To assess the enhanced classification performance of the SSODL-ASLR model, a brief set of simulations was carried out; the extensive results portrayed the supremacy of the SSODL-ASLR model over other techniques.
Keywords: sign language recognition; deep learning; shark smell optimization; Mask RCNN model; disabled people
Lightweight Behavior-Based Language for Requirements Modeling (Cited 2 times)
19
Authors: Zhengping Liang, Guoqing Wu, Li Wan. Journal of Software Engineering and Applications, 2010, Issue 3, pp. 245-254 (10 pages)
Whether or not a software system satisfies the anticipated user requirements is ultimately determined by the behaviors of the software, so it is necessary and valuable to study requirements modeling languages and techniques from the perspective of behavior. This paper presents a lightweight behavior-based requirements modeling language, BDL, with formal syntax and semantics, and a general-purpose requirements description model, BRM, synthesizing the concepts of viewpoint and scenario. BRM is well suited to modeling large and complex systems because its structure is very clear. In addition, the modeling process is demonstrated through a case study, an On-Line Campus Management System. Through their lightweight formal style, BDL and BRM can effectively bridge the gap between the practicability and the rigorousness of formal requirements modeling languages and techniques.
Keywords: Behavior Description Language (BDL); scenario; viewpoint; Behavior Requirements Model (BRM)
Word Embeddings and Semantic Spaces in Natural Language Processing
20
Author: Peter J. Worth. International Journal of Intelligence Science, 2023, Issue 1, pp. 1-21 (21 pages)
One of the critical hurdles, and breakthroughs, in the field of Natural Language Processing (NLP) in the last two decades has been the development of techniques for text representation that solve the so-called curse of dimensionality, a problem which plagues NLP in general given that the feature set for learning starts as a function of the size of the language in question, typically upwards of hundreds of thousands of terms. As such, much of the research and development in NLP in the last two decades has been in finding and optimizing solutions to this problem, that is, in effective feature selection for NLP. This paper looks at the development of these techniques, which leverage a variety of statistical methods resting on linguistic theories advanced in the middle of the last century, namely the distributional hypothesis, which suggests that words found in similar contexts generally have similar meanings. In this survey paper we look at the development of some of the most popular of these techniques from a mathematical as well as a data structure perspective, from Latent Semantic Analysis to Vector Space Models to their more modern variants, which are typically referred to as word embeddings. In this review of algorithms such as Word2Vec, GloVe, ELMo, and BERT, we explore the idea of semantic spaces more generally, beyond applicability to NLP.
Keywords: natural language processing; vector space models; semantic spaces; word embeddings; representation learning; text vectorization; machine learning; deep learning
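Below is a minimal sketch of the geometric intuition the survey traces: words as vectors, similarity as an angle, measured here by cosine similarity. The 3-dimensional toy vectors are made-up illustrations; real embeddings such as Word2Vec or GloVe have hundreds of dimensions.

```python
# Sketch: semantic similarity as cosine similarity between word vectors.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["king"], emb["queen"]))  # close to 1: similar contexts
print(cosine(emb["king"], emb["apple"]))  # smaller: dissimilar contexts
```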