Journal Articles
4,248 articles found
1. Systematizing Teacher Development: A Review of Foreign Language Teacher Learning
Authors: Guang ZENG. Chinese Journal of Applied Linguistics, 2024, No. 3, pp. 518-523, 526 (7 pages)
Foreign language teaching practice is developing rapidly, but research on foreign language teacher learning is currently relatively fragmented and unstructured. The book Foreign Language Teacher Learning, written by Professor Kang Yan from Capital Normal University and published in September 2022, offers a systematic introduction to foreign language teacher learning, which to some extent remedies this shortcoming. The book traces the lineage of foreign language teacher learning research at home and abroad, analyzes both theoretical and practical aspects, reviews cutting-edge research results, and anticipates future development trends, painting a complete research picture for researchers in foreign language teaching and teacher education as well as front-line teachers interested in foreign language teacher learning. It is an important source of inspiration for future research on foreign language teacher learning. This paper reviews the book in terms of its content, major characteristics, contributions, and limitations.
Keywords: foreign language teacher learning; foreign language teacher education; foreign language teaching; teacher development
2. The Dynamic Interplay Between Language Motivation and English Speaking Fluency: Implications for Effective Teaching Strategies
Authors: Yiran Yang. Journal of Contemporary Educational Research, 2024, No. 7, pp. 56-62 (7 pages)
In this study, we aim to investigate the reciprocal influence between language motivation and English speaking fluency among language learners, and to draw implications for effective teaching methodologies. By analyzing multiple cases of language learners in conjunction with relevant theories and practical insights, the study uncovers a dynamic correlation between language motivation and speaking fluency. The findings indicate that heightened language motivation can positively impact learners' speaking fluency, while improved oral skills, in turn, bolster learners' language confidence and motivation. Building on these insights, the study proposes impactful teaching approaches, such as cultivating learners' enthusiasm for language acquisition, providing diverse opportunities for oral practice, and fostering active engagement in language communication. These strategies are designed to enhance language motivation and speaking fluency among learners, offering valuable guidance and reference for educators.
Keywords: language motivation; English speaking fluency; language learners; teaching methodologies; oral proficiency; language confidence
3. A Comprehensive Study on Gender Language and Its Differences in China
Authors: Suofeiya Fan. Journal of Contemporary Educational Research, 2024, No. 7, pp. 187-191 (5 pages)
The study of language and gender, especially of gender language differences, involves many fields such as psychology, sociology, anthropology, language and literature, news media, and education. Starting from the broad definition of gender language, this paper organizes and reviews the research history of domestic gender language and its differences. That history is divided along a timeline into germination, genesis, and growth periods. Divided by theme and content, the main focus is the phenomenon of sexism in language; the second is the study of gender language style differences; the third is the root causes of sexism and verbal gender differences, i.e., the construction of the corresponding theories; and the fourth is a discussion of the limitations of gender language research in foreign countries.
Keywords: language and gender; gender language differences; language differences
4. Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
Authors: R. Sujatha, K. Nimala. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 1669-1686 (18 pages)
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress, and comparing impacts. An ensemble pre-trained language model was taken up here to classify sentences from a conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, and hyperparameter tuning is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models, and the ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
Keywords: Bidirectional Encoder Representations from Transformers; conversation; ensemble model; fine-tuning; generalized autoregressive pretraining for language understanding; generative pre-trained transformer; hyperparameter tuning; natural language processing; robustly optimized BERT pretraining approach; sentence classification; transformer models
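The ensemble idea in this abstract, with several pre-trained models each labeling a sentence as information, question, directive, or commission, can be illustrated with a simple majority vote over the member models' outputs. This is a hedged sketch only: the abstract does not say how EPLM-HT actually combines its members, and the `ensemble_predict` helper and its tie-breaking rule are assumptions.

```python
from collections import Counter

# The four dialogue-act categories named in the abstract.
LABELS = {"information", "question", "directive", "commission"}

def ensemble_predict(per_model_labels):
    """Combine one label per member model (e.g. BERT, RoBERTa, GPT,
    DistilBERT, XLNet) by majority vote; ties go to the label that
    appeared first in the input order."""
    assert all(label in LABELS for label in per_model_labels)
    return Counter(per_model_labels).most_common(1)[0][0]

# Five hypothetical member predictions for one sentence:
votes = ["question", "question", "information", "question", "directive"]
print(ensemble_predict(votes))  # -> question
```

A weighted vote (scaling each model's ballot by its validation F1) would be a natural refinement of the same scheme.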
5. Enhancing Communication Accessibility: UrSL-CNN Approach to Urdu Sign Language Translation for Hearing-Impaired Individuals
Authors: Khushal Das, Fazeel Abid, Jawad Rasheed, Kamlish, Tunc Asuroglu, Shtwai Alsubai, Safeeullah Soomro. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 10, pp. 689-711 (23 pages)
Deaf people or people facing hearing issues can communicate using sign language (SL), a visual language. Many works based on rich-resource languages have been proposed; however, work on low-resource languages is still lacking. Unlike other SLs, the visuals of Urdu sign language are different. This study presents a novel approach to translating Urdu sign language (UrSL) using the UrSL-CNN model, a convolutional neural network (CNN) architecture specifically designed for this purpose. Unlike existing works that primarily focus on languages with rich resources, this study addresses the challenge of translating a sign language with limited resources. We conducted experiments using two datasets containing 1,500 and 78,000 images, employing a methodology comprising four modules: data collection, pre-processing, categorization, and prediction. To enhance prediction accuracy, each sign image was transformed into a greyscale image and underwent noise filtering. Comparative analysis with machine learning baseline methods (support vector machine, Gaussian Naive Bayes, random forest, and the k-nearest neighbors algorithm) on the UrSL alphabets dataset demonstrated the superiority of UrSL-CNN, which achieved an accuracy of 0.95. Additionally, our model exhibited superior performance in precision, recall, and F1-score evaluations. This work not only contributes to advancing sign language translation but also holds promise for improving communication accessibility for individuals with hearing impairments.
Keywords: convolutional neural networks; Pakistan sign language; visual language
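The pre-processing step described above (greyscale conversion followed by noise filtering) can be sketched in plain Python on nested lists standing in for image arrays. The paper does not specify which filter was used; the 3x3 median filter below is one common choice for removing salt-and-pepper noise and is an assumption, as are both helper names.

```python
def to_greyscale(rgb_image):
    """Luminosity greyscale; rgb_image is a nested list of (r, g, b) tuples."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def median_filter3(grey):
    """3x3 median noise filter with clamped borders."""
    h, w = len(grey), len(grey[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = sorted(
                grey[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # middle of the 9 sorted values
    return out

noisy = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]   # one salt-noise pixel
print(median_filter3(noisy)[1][1])  # -> 0 (the spike is removed)
```

In practice the same two steps would run on NumPy arrays or via an image library before the images are fed to the CNN.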
6. Plain language in the healthcare of Japan: a systematic review of "plain Japanese"
Authors: Hatsune Kido, Soichiro Saeki, Mayu Hiraiwa, Masashi Yasunaga, Rie Tomizawa, Chika Honde, Toshio Fukuoka, Kaori Minamitani. Global Health Journal, 2024, No. 3, pp. 113-118 (6 pages)
Objective: Despite the decrease in the number of foreign visitors and residents in Japan due to coronavirus disease 2019, a marked resurgence has been underway since 2022. However, Japan's medical support system for foreign patients, especially residents, is inadequate, and language barriers may be causing health disparities. Comprehensive interpretation and translation services are difficult to provide, but "plain Japanese" may be a viable alternative for foreign patients with basic Japanese language skills. This study explores the application of, and obstacles to, plain Japanese in the medical sector. Methods: A literature review was performed across the following databases: Web of Science, PubMed, Google Scholar, Scopus, CINAHL Plus, Springer Link, and Ichushi-Web (Japanese medical literature). The search covered themes related to healthcare, care for foreign patients, and scholarly articles, and was conducted in July 2023. Results: The study incorporated five papers. Each paper emphasized the language barriers foreign residents in Japan face when accessing healthcare, highlighting the critical role and necessity of plain Japanese in medical environments. Most of the reports focused on the challenges of delivering medical care to foreign patients and on training healthcare professionals to communicate in plain Japanese. Conclusion: The knowledge and application of plain Japanese among healthcare professionals are inadequate, and the literature also remains scarce. With the increasing number of foreign residents in Japan, establishing a healthcare system that effectively uses plain Japanese is essential. However, plain Japanese may not be the optimal linguistic assistance in certain situations, so it is imperative to encourage more research and reports on healthcare services using plain Japanese.
Keywords: plain Japanese; easy Japanese; plain language; foreign residents; healthcare access; language barriers; emigrants and immigrants
7. Unlocking the Potential: A Comprehensive Systematic Review of ChatGPT in Natural Language Processing Tasks
Authors: Ebtesam Ahmad Alomari. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 10, pp. 43-85 (43 pages)
As Natural Language Processing (NLP) continues to advance, driven by the emergence of sophisticated large language models such as ChatGPT, there has been a notable growth in research activity. This rapid uptake reflects increasing interest in the field and prompts critical inquiries into ChatGPT's applicability in the NLP domain. This review paper systematically investigates the role of ChatGPT in diverse NLP tasks, including information extraction, Named Entity Recognition (NER), event extraction, relation extraction, Part-of-Speech (PoS) tagging, text classification, sentiment analysis, emotion recognition, and text annotation. The novelty of this work lies in its comprehensive analysis of the existing literature, addressing a critical gap in understanding ChatGPT's adaptability, limitations, and optimal application. We employed a systematic stepwise approach following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to direct our search process and identify relevant studies. Our review reveals ChatGPT's significant potential for enhancing various NLP tasks. Its adaptability in information extraction, sentiment analysis, and text classification showcases its ability to comprehend diverse contexts and extract meaningful details. Additionally, ChatGPT's flexibility in annotation tasks reduces manual effort and accelerates the annotation process, making it a valuable asset in NLP development and research. Furthermore, GPT-4 and prompt engineering emerge as a complementary mechanism, empowering users to guide the model and enhance overall accuracy. Despite this promising potential, challenges persist: ChatGPT's performance needs to be tested on more extensive datasets and diverse data structures, and its limitations in handling domain-specific language, together with the need for fine-tuning in specific applications, highlight the importance of further investigation.
Keywords: generative AI; large language model (LLM); natural language processing (NLP); ChatGPT; GPT (generative pretraining transformer); GPT-4; sentiment analysis; NER; information extraction; annotation; text classification
8. Smaller & Smarter: Score-Driven Network Chaining of Smaller Language Models
Authors: Gunika Dhingra, Siddansh Chawla, Vijay K. Madisetti, Arshdeep Bahga. Journal of Software Engineering and Applications, 2024, No. 1, pp. 23-42 (20 pages)
With the continuous evolution and expanding applications of Large Language Models (LLMs), there has been a noticeable surge in the size of emerging models. It is not solely the growth in model size, primarily measured by the number of parameters, but also the subsequent escalation in computational demands and in hardware and software prerequisites for training, all culminating in a substantial financial investment as well. In this paper, we present techniques such as supervision, parallelization, and scoring functions to get better results out of chains of smaller language models, rather than relying solely on scaling up model size. First, we propose an approach to quantify the performance of a Smaller Language Model (SLM) by introducing a corresponding supervisor model that incrementally corrects the encountered errors. Second, we propose an approach that runs two smaller language models (in a network) on the same task and retrieves the better of the two outputs, ensuring peak performance for a specific task. Experimental evaluations establish quantitative accuracy improvements on financial reasoning and arithmetic calculation tasks from techniques such as supervisor models (in a network-of-models scenario), threshold scoring, and parallel processing over a baseline study.
Keywords: Large Language Models (LLMs); Smaller Language Models (SLMs); finance; networking; supervisor model; scoring function
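The second technique above (two smaller models running the same task in parallel, with a scoring function selecting the better output and a threshold gating acceptance) can be sketched as follows. The paper's actual scoring functions are not given in the abstract; the `best_of_two` helper, the toy models, and the digit-based scorer are all illustrative assumptions.

```python
def best_of_two(task_input, model_a, model_b, score_fn, threshold):
    """Run two smaller models on the same task, score both outputs, and
    keep the better one; return None when neither clears the threshold
    (a supervisor model could then take over)."""
    candidates = [model_a(task_input), model_b(task_input)]
    best = max(candidates, key=score_fn)
    return best if score_fn(best) >= threshold else None

# Toy stand-ins: two "models" answering an arithmetic question, and a
# scoring function that trusts outputs which parse as an integer.
model_a = lambda q: "around twelve"
model_b = lambda q: "12"
score = lambda out: 1.0 if out.strip().isdigit() else 0.2

print(best_of_two("What is 7 + 5?", model_a, model_b, score, threshold=0.5))  # -> 12
```

The `None` branch is where the paper's supervisor-model idea would plug in: low-scoring outputs are handed to a corrector instead of being returned directly.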
9. A Review of Research on Second Language Acquisition from a Positive Psychology Perspective
Authors: Yanhui Wu. Journal of Contemporary Educational Research, 2024, No. 6, pp. 189-193 (5 pages)
This paper reviews research on second language acquisition from the perspective of positive psychology. First, it introduces the background and purpose of the study and discusses the significance of applying positive psychology in the field of language acquisition. Then the basic theories of positive psychology, including its core concepts and principles, are summarized. Subsequently, the theory of second language acquisition is defined and outlined, including the definition, characteristics, and related developmental theories of second language acquisition. On this basis, the study of second language acquisition from the perspective of positive psychology is discussed in detail. By reviewing and synthesizing the literature, this paper summarizes the current state and trends of second language acquisition research from the perspective of positive psychology and puts forward some future research directions and suggestions.
Keywords: positive psychology; second language acquisition; language learning motivation; positive emotion; positive mindset
10. Influencing Factors of Communicative Language Teaching: A Student Perspective from English Majors
Authors: Xue Zhou. Journal of Educational Theory and Management, 2024, No. 2, pp. 50-53 (4 pages)
Nowadays, the Communicative Language Teaching (CLT) Approach has gained significant popularity in the field of foreign language teaching. However, there appears to be a stagnation in its application effects. Therefore, this thesis aims to investigate the present state of CLT implementation and identify the factors influencing its execution in English major classrooms at Chinese universities from a student perspective. Thirty students responded to a questionnaire and five students participated in interviews to provide detailed insights. The analysis shows that CLT has been widely used in English classes and has received positive feedback from students. Factors including the test-oriented educational system, teacher factors, student factors, and traditional Confucian ideas about teaching have an important impact on its implementation. Additionally, this article offers recommendations aimed at reconciling the CLT Approach with the Chinese educational context.
Keywords: EFL; foreign language teaching; communicative language teaching; Confucianism
11. Research on Foreign Language Talent Cultivation Mode Based on the Concept of Language Security
Authors: Ying Qin. Journal of Contemporary Educational Research, 2024, No. 4, pp. 150-156 (7 pages)
Language security is an important part of national security, and in the current complex international environment the issue of language security is becoming more and more prominent, making it a part of national security strategy that cannot be ignored. As the bridge and link of international communication, foreign language talents' language skills and level of national security awareness directly affect our country's international image and the effect of international communication. Therefore, establishing a foreign language talent cultivation mode based on the concept of language security is not only an important way to improve the quality of foreign language education but also a practical need to safeguard national security and promote international exchanges. Starting from the influence of the language security concept on foreign language talent cultivation, this paper analyzes a foreign language talent cultivation mode based on the language security concept, with a view to providing new ideas and methods for the development of foreign language education.
Keywords: language security; foreign language talents; cultivation mode
12. NewBee: Context-Free Grammar (CFG) of a New Programming Language for Novice Programmers
Authors: Muhammad Aasim Qureshi, Muhammad Asif, Saira Anwar. Intelligent Automation & Soft Computing (SCIE), 2023, No. 7, pp. 439-453 (15 pages)
Learning programming and using programming languages are essential aspects of computer science education. Students use programming languages to write their programs. These computer programs (written by students or practitioners) make computers artificially intelligent and perform the tasks needed by users. Without these programs, the computer may be viewed as a pointless machine. As the premise of writing programs is tied to specific programming languages, enormous efforts have been made to develop and create programming languages. However, each programming language is domain-specific and has its own nuances, syntax, and semantics, with specific pros and cons. These language-specific details, including syntax and semantics, are significant hurdles for novice programmers. Also, instructors of introductory programming courses find these language specificities the biggest hurdle in student learning, where more focus falls on syntax than on logic development and actual implementation of the program. Considering the conceptual difficulty of programming languages and novice students' struggles with language syntax, this paper describes the design and development of a Context-Free Grammar (CFG) of a programming language for novices, newcomers, and students who do not have computer science as their major. Due to its syntactic proximity to daily conversation, this paper hypothesizes that the language will be easy for novice programmers to use and understand. The language was systematically designed by identifying themes from various existing programming languages (e.g., C, Python). Additionally, computer science experts from industry and academia were surveyed and self-reported their satisfaction with the newly designed language. The results indicate that 93% of the experts reported satisfaction with NewBee for novice, newcomer, and non-Computer Science (CS) major students.
Keywords: programming language; formal language; computer language; language grammar; simple syntax programming language; novice programmer
13. A large language model-powered literature review for high-angle annular dark field imaging
Authors: Wenhao Yuan, Cheng Peng, Qian He. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 9, pp. 76-81 (6 pages)
High-angle annular dark field (HAADF) imaging in scanning transmission electron microscopy (STEM) has become an indispensable tool in materials science due to its ability to offer sub-Å resolution and provide chemical information through Z-contrast. This study leverages large language models (LLMs) to conduct a comprehensive bibliometric analysis of a large body of HAADF-related literature (more than 41,000 papers). By using LLMs, specifically ChatGPT, we were able to extract detailed information on applications, sample preparation methods, instruments used, and study conclusions. The findings highlight the capability of LLMs to provide a new perspective on HAADF imaging, underscoring its increasingly important role in materials science. Moreover, the rich information extracted from these publications can be harnessed to develop AI models that enhance the automation and intelligence of electron microscopes.
Keywords: large language models; high-angle annular dark field imaging; deep learning
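The extraction workflow described above (prompting an LLM to pull applications, sample preparation, instruments, and conclusions out of each paper) can be sketched as prompt construction plus reply parsing. Everything here is hypothetical scaffolding: the field names, the template wording, and the `field: value` reply format are assumptions, and the actual model call is omitted.

```python
FIELDS = ["application", "sample_preparation", "instrument", "conclusion"]

PROMPT_TEMPLATE = (
    "From the HAADF-STEM abstract below, extract each field and reply "
    "one per line as 'field: value' (use 'unknown' if absent).\n"
    "Fields: {fields}\n\nAbstract:\n{abstract}\n"
)

def build_prompt(abstract):
    """Fill the template for one paper's abstract."""
    return PROMPT_TEMPLATE.format(fields=", ".join(FIELDS), abstract=abstract)

def parse_reply(reply):
    """Turn 'field: value' lines from the model reply into a record dict."""
    record = {}
    for line in reply.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            if key.strip() in FIELDS:
                record[key.strip()] = value.strip()
    return record

reply = "application: catalyst imaging\ninstrument: unknown"
print(parse_reply(reply))
```

Looping this over 41,000 abstracts and tabulating the resulting records is what turns the LLM into a bibliometric tool.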
14. DeBERTa-GRU: Sentiment Analysis for Large Language Model
Authors: Adel Assiri, Abdu Gumaei, Faisal Mehmood, Touqeer Abbas, Sami Ullah. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 4219-4236 (18 pages)
Modern technological advancements have made social media an essential component of daily life. Social media allow individuals to share thoughts, emotions, and ideas. Sentiment analysis evaluates whether the sentiment of a text is positive, negative, neutral, or some other personal emotion, in order to understand the sentiment context of the text. Sentiment analysis is essential in business and society because it impacts strategic decision-making. It involves challenges due to lexical variation, unlabeled datasets, and long-distance correlations in text. Execution time increases with the sequential processing of sequence models, whereas calculation times for Transformer models are reduced because of parallel processing. This study uses a hybrid deep learning strategy to combine the strengths of Transformer and sequence models while mitigating their limitations. In particular, the proposed model integrates Decoding-enhanced BERT with disentangled attention (DeBERTa) and the Gated Recurrent Unit (GRU) for sentiment analysis. Using the DeBERTa encoder, words are mapped into a compact, semantic word embedding space, and the GRU model captures the long-distance contextual semantics. The proposed hybrid model achieves an F1-score of 97% on the Twitter Large Language Model (LLM) dataset, much higher than the performance of recent techniques.
Keywords: DeBERTa; GRU; Naive Bayes; LSTM; sentiment analysis; large language model
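The GRU half of the hybrid maintains a recurrent hidden state over the DeBERTa embeddings. The standard GRU update equations, reduced to scalar states for readability, can be written out directly; this is a textbook sketch of the cell under that scalar simplification, not the paper's implementation, and the weight names are illustrative.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gru_step(x, h, w):
    """One scalar GRU update:
       z = sigma(Wz*x + Uz*h)       update gate
       r = sigma(Wr*x + Ur*h)       reset gate
       c = tanh(Wh*x + Uh*(r*h))    candidate state
       h' = (1 - z)*h + z*c
    """
    z = sigmoid(w["Wz"] * x + w["Uz"] * h)
    r = sigmoid(w["Wr"] * x + w["Ur"] * h)
    c = math.tanh(w["Wh"] * x + w["Uh"] * r * h)
    return (1.0 - z) * h + z * c

weights = dict(Wz=0.5, Uz=0.1, Wr=0.4, Ur=0.2, Wh=0.9, Uh=0.7)
h = 0.0
for x in [1.0, -0.5, 0.25]:   # a tiny "sequence" of embedded tokens
    h = gru_step(x, h, weights)
print(round(h, 4))
```

In the real model the scalars become vectors and matrices, and the final hidden state feeds a classification head that outputs the sentiment label.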
15. Comparing Fine-Tuning, Zero and Few-Shot Strategies with Large Language Models in Hate Speech Detection in English
Authors: Ronghao Pan, José Antonio García-Díaz, Rafael Valencia-García. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 9, pp. 2849-2868 (20 pages)
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One relevant capability is in-context learning: the ability to receive instructions in natural language, or task demonstrations, and generate the expected outputs for test instances without additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs under strategies ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, the encoder-decoder model Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that contextual learning had difficulty distinguishing between types of hate speech and figurative language, while the fine-tuned approach tends to produce many false positives.
Keywords: hate speech detection; zero-shot; few-shot; fine-tuning; natural language processing
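The contrast between the zero-shot and few-shot strategies above comes down to how the prompt is built: the zero-shot prompt contains only an instruction and the query, while the few-shot prompt prepends labeled demonstrations (the abstract mentions retrieving them via information retrieval). The instruction wording and label vocabulary below are assumptions, not the paper's actual prompts.

```python
INSTRUCTION = ("Decide whether the message is 'hateful' or 'not hateful'. "
               "Answer with one word.")

def zero_shot_prompt(message):
    """Instruction plus query only; no labeled examples."""
    return f"{INSTRUCTION}\nMessage: {message}\nAnswer:"

def few_shot_prompt(message, demonstrations):
    """demonstrations: (message, label) pairs shown before the query,
    e.g. retrieved as the most similar labeled training examples."""
    shots = "\n".join(f"Message: {m}\nAnswer: {label}"
                      for m, label in demonstrations)
    return f"{INSTRUCTION}\n{shots}\nMessage: {message}\nAnswer:"

demos = [("You are all wonderful.", "not hateful")]
print(few_shot_prompt("some new message", demos))
```

Fine-tuning, the third strategy compared in the paper, skips prompt engineering of this kind and instead updates the model weights on the labeled training set.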
16. Recent Advances on Deep Learning for Sign Language Recognition
Authors: Yanqiong Zhang, Xianwei Jiang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 2399-2450 (52 pages)
Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain: expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
Keywords: sign language recognition; deep learning; artificial intelligence; computer vision; gesture recognition
17. Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification
Authors: Jungpil Shin, Md. Al Mehedi Hasan, Abu Saleh Musa Miah, Kota Suzuki, Koki Hirooka. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 2605-2625 (21 pages)
Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first extracts crucial handcrafted features, emphasizing the capture of hand and body movements within JSL gestures; simultaneously, a deep learning-based transfer learning stream captures hierarchical representations of JSL gestures. We then concatenate the critical information of the first stream and the feature hierarchy of the second stream to produce multi-level fusion features, aiming to create a comprehensive representation of the JSL gestures. After reducing the dimensionality of the features, a feature selection approach and a kernel-based support vector machine (SVM) were used for classification. To assess the effectiveness of our approach, we conducted extensive experiments on our lab's JSL dataset and a publicly available Arabic sign language (ArSL) dataset. Our results demonstrate that our fusion approach significantly enhances JSL recognition accuracy and robustness compared to individual feature sets or traditional recognition methods.
Keywords: Japanese Sign Language (JSL), hand gesture recognition, geometric feature, distance feature, angle feature, GoogleNet
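The pipeline this abstract describes — concatenating handcrafted skeleton features with deep features, selecting informative dimensions, then classifying with a kernel SVM — can be sketched in a few lines. This is an illustrative outline on synthetic data, not the authors' implementation; the feature dimensions, class count, and `k` are assumptions.

```python
# Illustrative sketch of feature fusion + selection + kernel SVM (synthetic data).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
handcrafted = rng.normal(size=(n, 40))   # e.g., joint distances and angles
deep = rng.normal(size=(n, 256))         # e.g., a transfer-learned CNN embedding
y = rng.integers(0, 10, size=n)          # 10 hypothetical gesture classes

# Multi-level fusion: concatenate the two streams into one feature vector.
fused = np.concatenate([handcrafted, deep], axis=1)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=64),  # feature selection / dimensionality reduction
    SVC(kernel="rbf"),             # kernel-based SVM classifier
)
clf.fit(fused, y)
print(fused.shape, clf.predict(fused[:3]).shape)
```

On real data the two streams would come from pose estimation and a pretrained network, but the fusion-then-select-then-classify structure is the same.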
Enhancing Relational Triple Extraction in Specific Domains: Semantic Enhancement and Synergy of Large Language Models and Small Pre-Trained Language Models
18
Authors: Jiakai Li, Jianpeng Hu, Geng Zhang 《Computers, Materials & Continua》 SCIE EI, 2024, No. 5, pp. 2481-2503 (23 pages)
In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model's performance on domain-specific relational triple extraction tasks. We first propose an innovative attention interaction module, which significantly enhances the semantic interaction capabilities between entities and relations by integrating semantic information from relation labels. Second, we propose a voting strategy that effectively combines the strengths of large language models (LLMs) and fine-tuned small pre-trained language models (SLMs) to reevaluate challenging samples, thereby improving the model's adaptability in specific domains. Additionally, we explore the use of LLMs for data augmentation, aiming to generate domain-specific datasets that alleviate the scarcity of domain data. Experiments conducted on three domain-specific datasets demonstrate that our model outperforms existing comparative models in several aspects, with F1 scores exceeding state-of-the-art models by 2%, 1.6%, and 0.6%, respectively, validating the effectiveness and generalizability of our approach.
Keywords: relational triple extraction, semantic interaction, large language models, data augmentation, specific domains
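The LLM/SLM voting idea can be illustrated with a minimal, hypothetical routing rule: trust the fine-tuned SLM when it is confident, and re-evaluate only challenging (low-confidence) samples with the LLM. The two predictors below are stand-in stubs, not the paper's models; the relation labels and the 0.8 threshold are invented for the example.

```python
# Hypothetical sketch of SLM/LLM voting on challenging samples.
def slm_predict(sample):
    """Stub standing in for the fine-tuned small model: (label, confidence)."""
    return sample["slm_label"], sample["slm_conf"]

def llm_predict(sample):
    """Stub standing in for the large language model: label only."""
    return sample["llm_label"]

def vote(sample, conf_threshold=0.8):
    label, conf = slm_predict(sample)
    if conf >= conf_threshold:       # easy sample: keep the SLM's answer
        return label
    llm_label = llm_predict(sample)  # challenging sample: consult the LLM
    # Simple agreement rule: keep the SLM's label only if the LLM concurs.
    return label if llm_label == label else llm_label

easy = {"slm_label": "founded_by", "slm_conf": 0.95, "llm_label": "ceo_of"}
hard = {"slm_label": "founded_by", "slm_conf": 0.40, "llm_label": "ceo_of"}
print(vote(easy), vote(hard))  # founded_by ceo_of
```

The point of the design is cost: the expensive LLM call happens only on the subset of samples where the cheap SLM is unsure.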
Identification of Software Bugs by Analyzing Natural Language-Based Requirements Using Optimized Deep Learning Features
19
Authors: Qazi Mazhar ul Haq, Fahim Arif, Khursheed Aurangzeb, Noor ul Ain, Javed Ali Khan, Saddaf Rubab, Muhammad Shahid Anwar 《Computers, Materials & Continua》 SCIE EI, 2024, No. 3, pp. 4379-4397 (19 pages)
Software project outcomes heavily depend on natural language requirements, which often cause diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, these studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems. The methods involve feature selection, which is used to reduce the dimensionality and redundancy of features and select only the relevant ones; transfer learning, which is used to train and test the model on different datasets to analyze how much of the learning carries over to other datasets; and an ensemble method, which is utilized to explore the increase in performance from combining multiple classifiers in a model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, which shows an increase in the model's performance, with better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers are combined. It reveals that using an amalgam of techniques such as those used in this study (feature selection, transfer learning, and ensemble methods) proves helpful in optimizing software bug prediction models and providing a high-performing, useful end model.
Keywords: natural language processing, software bug prediction, transfer learning, ensemble learning, feature selection
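The ensemble portion of this approach — combining multiple classifiers and scoring with AUC-ROC — can be sketched with scikit-learn's soft-voting ensemble. Synthetic data stands in for the NASA/Promise datasets, and the three base classifiers are illustrative choices, not the paper's.

```python
# Minimal sketch: soft-voting ensemble evaluated with AUC-ROC (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Stand-in for a bug dataset: 20 code metrics, binary buggy/clean label.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",  # average predicted class probabilities across members
)
ensemble.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1])
print(f"ensemble AUC-ROC: {auc:.3f}")
```

AUC-ROC needs probability scores rather than hard labels, which is why the example uses `voting="soft"` and `predict_proba`.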
A Survey on Chinese Sign Language Recognition: From Traditional Methods to Artificial Intelligence
20
Authors: Xianwei Jiang, Yanqiong Zhang, Juan Lei, Yudong Zhang 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, No. 7, pp. 1-40 (40 pages)
Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) over the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet), and various deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have also been integrated and applied in specific hybrid models and customized identification methods. Sign language data collection includes acquiring data from data gloves, data sensors (such as Kinect, Leap Motion, etc.), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest among scholars. Due to the uniqueness and complexity of Chinese Sign Language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Additionally, suitable datasets and evaluation criteria are also worth pursuing.
Keywords: Chinese Sign Language recognition, deep neural networks, artificial intelligence, transfer learning, hybrid network models
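Of the traditional techniques this survey names, Dynamic Time Warping is the simplest to illustrate: it aligns two gesture trajectories of different lengths by minimal cumulative distance, so the same sign performed at different speeds still matches. A toy 1-D version (the trajectories are invented; real CSLR would compare multi-dimensional joint coordinates):

```python
# Toy Dynamic Time Warping (DTW), one of the traditional CSLR techniques.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW on 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch a
                                 cost[i, j - 1],      # stretch b
                                 cost[i - 1, j - 1])  # match step
    return cost[n, m]

# The same "gesture" at two speeds aligns with zero cost;
# a different trajectory does not.
slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
other = [3, 3, 2, 2, 1, 0]
print(dtw_distance(slow, fast), dtw_distance(slow, other))
```

Because DTW absorbs timing variation, a nearest-neighbor classifier over DTW distances needs only a few reference recordings per sign, which is why it was popular before deep learning.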